From 07b49be6fbb63d9533f911923310206b9e9d0eba Mon Sep 17 00:00:00 2001
From: geemus
Date: Thu, 31 Mar 2011 22:35:20 -0400
Subject: [PATCH] pull in old blog posts (need reformatting still)
---
 docs/_posts/2011-01-01-cdn.md     |  82 +++++++++++++++++
 docs/_posts/2011-01-01-dns.md     |  79 ++++++++++++++++
 docs/_posts/2011-01-01-storage.md | 145 ++++++++++++++++++++++++++++++
 3 files changed, 306 insertions(+)
 create mode 100644 docs/_posts/2011-01-01-cdn.md
 create mode 100644 docs/_posts/2011-01-01-dns.md
 create mode 100644 docs/_posts/2011-01-01-storage.md

diff --git a/docs/_posts/2011-01-01-cdn.md b/docs/_posts/2011-01-01-cdn.md

---
layout: default
title: fog - cdn
---

Faster websites are better. Better experience, better sales, you name it. Unfortunately, making a website faster can be tough. Thankfully a content distribution network, or CDN, can give you great performance bang for your buck. A CDN helps speed things up by putting copies of your files closer to your users. It's like the difference between pizza delivery from across the street and pizza delivery from the next town over.

The ease and deliciousness are the good news, but until recently CDNs were only available in the big leagues via 'my business guys will talk to your business guys' deals. Fortunately for us, Amazon recently updated CloudFront, their CDN service, to allow us to get these benefits with just a credit card and an API call. So now we'll see how you can spend a few minutes to save your users countless hours of load time.

## Preliminaries

First, make sure you have fog installed:

    gem install fog

Now you'll need to sign up for CloudFront.
Gather up your new credentials to initialize a connection to the service:

    require 'fog'

    # create a connection to the service
    cdn = Fog::CDN.new({
      :provider => 'AWS',
      :aws_access_key_id => AWS_ACCESS_KEY_ID,
      :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
    })

## Setting Up Your CDN

Now you'll need to create a 'distribution', which represents a mapping from the CDN to your domain. For the examples we'll pretend we are working on 'http://www.example.com', but you can just switch it to your actual domain. Some other options are available, but the only other one we need to fill in is OriginProtocolPolicy. This sets what to do about http vs https. We will use 'match-viewer', which returns the same protocol as the request, but you can also choose 'http-only', which always returns http responses.

    data = cdn.post_distribution({
      'CustomOrigin' => {
        'DNSName' => 'www.example.com',
        'OriginProtocolPolicy' => 'match-viewer'
      }
    })

    # parse the response for stuff you'll need later
    distribution_id  = data.body['Id']
    caller_reference = data.body['CallerReference']
    etag             = data.headers['ETag']
    cdn_domain_name  = data.body['DomainName']

    # wait for the updates to propagate
    Fog.wait_for {
      cdn.get_distribution(distribution_id).body['Status'] == 'Deployed'
    }

## Getting Served

With the domain name from the distribution in hand you should now be ready to serve content from the edge. All you need to do is start replacing urls like `http://www.example.com/stylesheets/foo.css` with `#{cdn_domain_name}/stylesheets/foo.css`. Just because you can do something doesn't always mean you should, though. Dynamic pages are not really well suited to CDN storage, since CDN content will be the same for every user. Fortunately some of your most used content is a great fit. By just switching over your images, javascripts and stylesheets you can have an impact for each and every one of your users.

Congrats, your site is faster!
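The URL swap described above can be wrapped in a small helper. Here's a hypothetical sketch (the helper name is made up, and the CDN hostname is just an example; `cdn_domain_name` would come from the distribution you created):

```ruby
require 'uri'

# hypothetical helper: swap the original host for the CDN domain,
# keeping the path the same
def cdn_url(cdn_domain_name, original_url)
  "http://#{cdn_domain_name}#{URI.parse(original_url).path}"
end

puts cdn_url('d1xdx2sah5udd0.cloudfront.net', 'http://www.example.com/stylesheets/foo.css')
# => http://d1xdx2sah5udd0.cloudfront.net/stylesheets/foo.css
```

A helper like this is handy because you can flip all of your asset urls over to the CDN in one place (or back, while debugging).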
By default the URLs aren't very pretty, something like `http://d1xdx2sah5udd0.cloudfront.net/stylesheets/foo.css`. Thankfully you can use CNAME config options to utilize something like `http://assets.example.com/stylesheets/foo.css`. If you are interested in learning more about this, let me know in the comments.

## Cleaning Up

Just in case you need to update things, I'll run through how you can make changes. In my case I just want to clean up after myself, so I'll use the distribution_id and ETag from before to disable the distribution. We need to use the ETag as well because it provides a way to refer to different versions of the same distribution and ensures we are updating the version that we think we are.

    data = cdn.put_distribution_config(
      distribution_id,
      etag,
      {
        'CustomOrigin' => {
          'DNSName' => 'www.example.com',
          'OriginProtocolPolicy' => 'match-viewer'
        },
        'CallerReference' => caller_reference,
        'Enabled' => 'false'
      }
    )

    # parse the updated etag
    etag = data.headers['ETag']

Now you just need to wait for the update to happen like before, and once it's disabled we can delete it:

    Fog.wait_for {
      cdn.get_distribution(distribution_id).body['Status'] == 'Deployed'
    }
    cdn.delete_distribution(distribution_id, etag)

That's it, now go forth and speed up some load times!

diff --git a/docs/_posts/2011-01-01-dns.md b/docs/_posts/2011-01-01-dns.md

---
layout: default
title: fog - dns
---

The power and flexibility of the cloud are amazing. But sometimes it can be a pain to chase your resources around and keep everything up to date. This is especially true of keeping track of addresses for DNS, but thankfully more and more API driven options are available, allowing you to automate your DNS to keep up with your hardware changes.
## Setup

First, make sure you have fog installed:

    gem install fog

For this first example we will use Zerigo (see below for how to use other providers). You can sign up for Zerigo DNS here. Gather up your new credentials to initialize a connection to the service:

    require 'rubygems'
    require 'fog'

    # create a connection to the service
    dns = Fog::DNS.new({
      :provider => 'Zerigo',
      :zerigo_email => ZERIGO_EMAIL,
      :zerigo_token => ZERIGO_TOKEN
    })

## Getting in the Zone

The first thing you need to do to prepare for your DNS excursion is create a zone for your domain. The zone will contain all of the more specific records that you will create later. You will just need to specify the domain, which should be your url without the 'http' or 'www' parts, and an email address. Then you can create the zone with your DNS connection:

    zone = dns.zones.create(
      :domain => 'example.com',
      :email  => 'admin@example.com'
    )

Now that you have a zone you will need to update your registrar to let them know what DNS servers are responsible for your domain. You can ask the zone what values to use:

    zone.nameservers

## Spinning Records

With your new zone in hand you can add records as needed. First and foremost you will probably want the root of your site to point to whatever your IP might be:

    record = zone.records.create(
      :ip   => '1.2.3.4',
      :name => 'example.com',
      :type => 'A'
    )

Adding other records is similarly easy. For instance, if we want 'www.example.com' to go to the same place, we can use a CNAME record:

    record = zone.records.create(
      :ip   => 'example.com',
      :name => 'www',
      :type => 'CNAME'
    )

Or, similarly, you might want to have your blog elsewhere:

    record = zone.records.create(
      :ip   => '4.3.2.1',
      :name => 'blog.example.com',
      :type => 'A'
    )

You can add more specifics if you need to, but reasonable defaults make it just that easy.
You can also add any other kind of DNS record you might need for mail or other purposes; you can find a nice overview of record options and types on Wikipedia.

## No Zerigo? No Problem

If you already have an account with another service you can just as easily use this same code with different credentials. fog currently supports AWS Route 53, Blue Box, DNSimple, Linode, Slicehost and Zerigo, so you can have your pick. As an example, you can connect to AWS instead of Zerigo:

    dns = Fog::DNS.new(
      :provider => 'AWS',
      :aws_access_key_id => AWS_ACCESS_KEY_ID,
      :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
    )

## Go Forth and Resolve

You can see an example of reusing code like this in the examples folder. Using this makes it easier to give yourself shortcuts to your cloud servers and manage how clients and users access them as well. It is great to have this flexibility so that you can modify your cloud infrastructure as needed while keeping everything ship shape. It also provides a nice way to create custom subdomains for users and just generally round out your cloud solution.

diff --git a/docs/_posts/2011-01-01-storage.md b/docs/_posts/2011-01-01-storage.md

---
layout: default
title: fog - storage
---

Having Ruby experience makes you hirable, but how can you stand out? You need to demonstrate your abilities. What better way than using Ruby and "the cloud" to store and serve your resume!

In this blog post you will learn to use fog - the cloud computing library - to upload your resume to Amazon's Simple Storage Service (S3), Rackspace's CloudFiles or Google's Storage for Developers.

Here's my out-of-date resume, stored on S3, CloudFiles and Google Storage, programmatically stored in the cloud using this tutorial.
NOTE: my boss would like me to add that I'm not currently looking for a new gig ;)

Check out those cloud-specific URLs! You could put all three in your job application, add the Ruby source for how you did it, and have your choice of Ruby jobs for being so awesome!

How? The all-clouds-in-one library of choice is fog.

## Installing fog

fog is distributed as a RubyGem:

    gem install fog

Or add it to your application's Gemfile:

    gem "fog"

## Using Amazon S3 and fog

Sign up for an account here and copy down your secret access key and access key id from here. We are about to get into the code samples, so be sure to fill in anything in ALL_CAPS with your own values!

First, create a connection with your new account:

    require 'rubygems'
    require 'fog'

    # create a connection
    connection = Fog::Storage.new(
      :provider => 'AWS',
      :aws_secret_access_key => YOUR_SECRET_ACCESS_KEY,
      :aws_access_key_id => YOUR_ACCESS_KEY_ID
    )

    # First, a place to contain the glorious details
    directory = connection.directories.create(
      :key    => "fog-demo-#{Time.now.to_i}", # globally unique name
      :public => true
    )

    # list directories
    p connection.directories

    # upload that resume
    file = directory.files.create(
      :key    => 'resume.html',
      :body   => File.open("/path/to/my/resume.html"),
      :public => true
    )

If you are anything like me, you will continually tweak your resume. Pushing updates is easy:

    file.body = File.open("/path/to/my/resume.html")
    file.save

As you can see, cloud storage files in fog are a lot like an ActiveRecord model: attributes that can be changed and a `#save` method that creates or updates the stored file in the cloud.

But if it took you longer to realize the mistake, you might not still have the file around. You've got options, though.
    # get the directory you created earlier by its globally unique name
    directory = connection.directories.get("fog-demo-1234567890")

    # get the resume file
    file = directory.files.get('resume.html')
    file.body = File.open("/path/to/my/resume.html")
    file.save

    # also, create(attributes) is just new(attributes).save, so you can also do:
    file = directory.files.new(
      :key    => 'resume.html',
      :body   => 'improvements',
      :public => true
    )
    file.save

Alright, so you (eventually) become satisfied enough to send it off. What is the URL endpoint to your resume?

    puts file.public_url

Pop that link in an email and you should be ready to cruise job ads and send your resume far and wide (Engine Yard is hiring, so check us out!). Now you are set, unless you are interviewing for Google or Rackspace... Both of these companies have their own cloud storage services, so using Amazon S3 might not be the foot in the door you hoped for.

More clouds? How much extra stuff will you have to do for these services!?! Hardly anything needs to change; you just have to pass slightly different credentials in, but I'm getting ahead of myself.

## Google Storage for Developers

Sign up here and get your credentials here.

    connection = Fog::Storage.new(
      :provider => 'Google',
      :google_storage_secret_access_key => YOUR_SECRET_ACCESS_KEY,
      :google_storage_access_key_id => YOUR_ACCESS_KEY_ID
    )

## Rackspace CloudFiles

Rackspace has Cloud Files, and you can sign up here and get your credentials here.

    connection = Fog::Storage.new(
      :provider => 'Rackspace',
      :rackspace_username => RACKSPACE_USERNAME,
      :rackspace_api_key => RACKSPACE_API_KEY
    )

Then create, save and destroy as per fog-for-AWS. The `:public => true` option when creating directories (see above) is important for Rackspace; your folder and files won't be shared to Rackspace's CDN (and hence your users) without it. Similarly, the `:public => true` on files is important for AWS and Google or they will be private.
## Local Storage

While you are working out the kinks you might not want to do everything live, and the same goes for while you are running tests, so you have a couple of options to try before you buy. First, you can use the Local provider to store things in a directory on your machine.

    connection = Fog::Storage.new(
      :provider   => 'Local',
      :local_root => '~/fog'
    )

## Mocking out Cloud Storage

Of course, when you are testing or developing you can always just use the mocks (at least for AWS and Google; Rackspace still needs mocks implemented, if you are looking for somewhere to contribute). They emulate the behavior of the external systems without actually using them. It is as simple as:

    Fog.mock!
    connection = Fog::Storage.new(config_hash)

## Cleaning up

fog takes care of the rest so you can focus on your cover letter, and with the awesome cover letter and cloud delivered resume you are probably a shoo-in. So all that is left is to clean up that leftover job hunt residue.

    file.destroy
    directory.destroy

## Summary

All done. Try out all the different options and let me know if you run into any bugs or issues. I also wrote up a more consolidated example as a script that you can use for reference.

Bonus: note the `Fog.mock!` command. In your tests you can easily mock out calls to cloud providers.

Please let me know in the comments if you got a new Ruby job because you hosted your CV on 3 different cloud stores without getting your hands dirty.

Have questions or comments? Hop into #ruby-fog on freenode, ping @fog or @geemus.

And please always remember that I accept high fives and contributions!