Discussion:
Delete file without the directory object
ha
2013-04-22 21:26:17 UTC
Hi guys,

I've been going through the docs, and every example I find gets the
directory first and then gets the file to be destroyed.

I got a simple script running and it works like a champ!

The script is like this:

require 'fog'

conn = Fog::Storage.new(
  provider: 'Rackspace',
  rackspace_username: 'username',
  rackspace_api_key: 'key'
)

directory = conn.directories.get('mycontainer')

f = directory.files.head("path/file1.jpg")
puts "Should return file: #{f.inspect}"
f.destroy if f

My big problem is that I know the path of the file "path/file1.jpg" (which
is the key attribute to the file) but not the container name, and I have
about 20 different containers.

Is there a search-like method that takes the file key?


I don't know why the container is not saved along with 'path/file1.jpg', but
I'm already thinking of adding it. I just want to know if I will have to do
that before or after cleaning my containers.

Anyway, if there is no search I'll have to brute-force it and iterate through
all files in all containers to delete the old ones...


Thanks in advance,
./Helio
--
You received this message because you are subscribed to the Google Groups "ruby-fog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ruby-fog+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.
geemus (Wesley Beary)
2013-04-22 21:31:10 UTC
You should be able to get a performance boost by avoiding the get (if you
can assume the directory already exists) and instead using new.

So instead of:

directory = conn.directories.get('mycontainer')

You would have:

directory = conn.directories.new(:key => 'mycontainer')

Beyond that, however, I don't think there is a simple facility for
searching across buckets. So I fear falling back to iterating over them and
using head may be your best bet.
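
For what it's worth, the fallback described above could be sketched roughly like this. `find_file` is a hypothetical helper, not part of fog; it only assumes a `Fog::Storage`-style connection whose `directories` are enumerable and whose `files.head(key)` returns the file model or nil, as in the original script:

```ruby
# Hypothetical helper: search every container for a file with the
# given key. `conn` is assumed to be a Fog::Storage-style connection:
# it responds to #directories, each directory responds to #files,
# and files#head returns the file model or nil when the key is absent.
def find_file(conn, key)
  conn.directories.each do |directory|
    file = directory.files.head(key)
    return [directory, file] if file # stop at the first match
  end
  nil
end
```

With a match in hand you could then call `file.destroy`, just as in the original script. Note this issues one HEAD request per container until a match is found, so with ~20 containers the worst case is ~20 requests per key.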

wes
ha
2013-04-23 13:09:47 UTC
I saw that coming...

Do you know if Rackspace has some kind of interface that lets me search
across buckets?
I can spend some time working on a search, if Rackspace has a way of doing
that...

Thanks,
./Helio
ha
2013-04-23 15:53:05 UTC
Wes,
I do have the container CDN URL.

Do you know if it is possible something like this?
directory = conn.directories.new(:key => 'CDN_URL')
directory = conn.directories.new(:cdn_name => 'CDN_URL')
directory = conn.directories.new(:public_url => 'CDN_URL')


Thanks,
./Helio
geemus (Wesley Beary)
2013-04-23 17:48:56 UTC
Not that I know of; hopefully some of our Rackspace experts can chime in.
It might actually be easier to engage with them if you open an issue.
Thanks!
wes
Rupak Ganguly
2013-04-23 20:03:51 UTC
Helio,
I am not a Racker, but I work for HP Cloud, and we share the OpenStack
platform. From what I know, you cannot pass a CDN URL to get to a
directory object. You need a directory 'name' to 'get'/'head' a directory
object.

As far as I know, none of these are possible.
directory = conn.directories.new(:key => 'CDN_URL')
directory = conn.directories.new(:cdn_name => 'CDN_URL')
directory = conn.directories.new(:public_url => 'CDN_URL')
But still, as Wes mentioned, I would open an issue and ask one of the
Rackers to comment.

Thanks,
Rupak Ganguly
Ph: 678-648-7434
Kyle Rames
2013-05-13 18:59:40 UTC
Hi Helio,

Sorry for the late response! I have been out of the office for the birth of
my daughter.

Unfortunately, you cannot use the CDN URL to get a handle to the directory.
The only solution would be to get a list of all the CDN-enabled
containers and then head each container until you find the desired URL,
which is sub-optimal at best.
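
A rough sketch of that workaround, assuming the directory models expose the CDN address via a `#public_url` method (the method name is an assumption here; `find_directory_by_cdn_url` is a hypothetical helper, not a fog API):

```ruby
# Hypothetical helper: walk the containers and return the first one
# whose CDN address matches cdn_url. Assumes each directory model
# responds to #public_url for CDN-enabled containers (nil otherwise).
def find_directory_by_cdn_url(conn, cdn_url)
  conn.directories.find do |directory|
    directory.public_url == cdn_url
  end
end
```

As noted above, this is a linear scan over the account's containers, so it costs one lookup per container in the worst case.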

I am also unaware of a call that will search all of the containers for an
object. I have forwarded your question on to our Cloud Files team. Hopefully
they will have a better answer for you.


Kyle