mirror of https://github.com/fog/fog.git synced 2022-11-09 13:51:43 -05:00

Merged pull request #279 from smerritt/each_file_doc_fix.

[storage|aws] Fix docs to say files.each, not each_file.
This commit is contained in:
Wesley Beary 2011-04-28 13:10:26 -07:00
commit 80c13e489d


```diff
@@ -86,17 +86,34 @@ and maybe some pictures of your cat doing funny stuff. Since this is
 all of vital importance, you need to back it up.
 
     # copy each file to local disk
-    directory.each_file do |s3_file|
+    directory.files.each do |s3_file|
      File.open(s3_file.key, 'w') do |local_file|
        local_file.write(s3_file.body)
      end
    end
 
-One caveat: it's tempting to just write `directory.files.each` here,
-but that only works until you get a large number of files. S3's API
-for listing files forces pagination on you. `directory.each_file`
-takes pagination into account; `directory.files.each` will only
-operate on the first page.
+One caveat: it's way more efficient to do this:
+
+    # do two things per file
+    directory.files.each do |file|
+      do_one_thing(file)
+      do_another_thing(file)
+    end
+
+than it is to do this:
+
+    # do two things per file
+    directory.files.each do |file|
+      do_one_thing(file)
+    end.each do |file|
+      do_another_thing(file)
+    end
+
+The reason is that the list of files might be large. Really
+large. Eat-all-your-RAM-and-ask-for-more large. Therefore, every time
+you say `files.each`, fog makes a fresh set of API calls to Amazon to
+list the available files (Amazon's API returns a page at a time, so
+fog works a page at a time in order to keep its memory requirements sane).
 
 ## Sending it out
```
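The pagination behavior the new docs describe can be illustrated with a minimal, self-contained sketch. This is not fog's actual implementation: `FakeStorage`, `list_page`, and the page size are invented for illustration; only the idea (a marker-based listing API walked one page at a time, so a full `each` pass costs one API call per page rather than one big in-memory list) reflects the text above.

```ruby
# Illustrative sketch of a pagination-aware collection. Each full
# traversal re-issues paged list calls, keeping only one page of
# keys in memory at a time. FakeStorage is hypothetical, not fog's API.
class FakeStorage
  PAGE_SIZE = 2

  def initialize(keys)
    @keys = keys
    @api_calls = 0
  end

  attr_reader :api_calls

  # One "page" of keys plus a truncation flag, in the style of a
  # marker-based listing API such as S3's ListObjects.
  def list_page(marker = 0)
    @api_calls += 1
    page = @keys[marker, PAGE_SIZE]
    [page, marker + PAGE_SIZE < @keys.size]
  end

  # Pagination-aware each: streams keys page by page.
  def each
    marker = 0
    loop do
      page, truncated = list_page(marker)
      page.each { |key| yield key }
      break unless truncated
      marker += PAGE_SIZE
    end
  end
end

storage = FakeStorage.new(%w[a b c d e])
seen = []
storage.each { |k| seen << k }
# Five keys with a page size of 2 means one pass costs three list calls,
# and calling each again would issue three more.
```

This is why doing both things in a single `files.each` block is cheaper than chaining two traversals: every extra traversal repeats the whole set of paged list calls.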