mirror of
https://github.com/fog/fog.git
synced 2022-11-09 13:51:43 -05:00
[cloudstack|compute] merged in upstream
This commit is contained in:
commit
8745aa0342
242 changed files with 6463 additions and 905 deletions
1
.gitignore
vendored
1
.gitignore
vendored
|
|
@ -2,6 +2,7 @@
|
|||
*.gem
|
||||
*.rbc
|
||||
*.sw?
|
||||
.rvmrc
|
||||
.bundle
|
||||
.DS_Store
|
||||
coverage
|
||||
|
|
|
|||
11
README.rdoc
11
README.rdoc
|
|
@ -108,17 +108,9 @@ geemus says: "That should give you everything you need to get started, but let m
|
|||
* Find something you would like to work on. For suggestions look for the `easy`, `medium` and `hard` tags in the {issues}[http://github.com/fog/fog/issues]
|
||||
* Fork the project and do your work in a topic branch.
|
||||
* Add shindo tests to prove your code works and run all the tests using `bundle exec rake`.
|
||||
* Rebase your branch against geemus/fog to make sure everything is up to date.
|
||||
* Rebase your branch against fog/fog to make sure everything is up to date.
|
||||
* Commit your changes and send a pull request.
|
||||
|
||||
== T-Shirts
|
||||
|
||||
Wonder how you can get a lovely fog shirt? Look no further!
|
||||
|
||||
* Blue shirts go to people who have contributed indirectly, great examples are writing blog posts or giving lightning talks.
|
||||
* Grey shirts and a follow from @fog go to people who have made it on to the {contributors list}[https://github.com/fog/fog/contributors] by submitting code.
|
||||
* Black shirts go to people who have made it on to the {collaborators list}[https://github.com/api/v2/json/repos/show/geemus/fog/collaborators] by coercing geemus into adding them.
|
||||
|
||||
== Additional Resources
|
||||
|
||||
{fog.io}[http://fog.io]
|
||||
|
|
@ -128,6 +120,7 @@ Wonder how you can get a lovely fog shirt? Look no further!
|
|||
http://www.engineyard.com/images/logo.png
|
||||
|
||||
All new work on fog is sponsored by {Engine Yard}[http://engineyard.com]
|
||||
|
||||
== Copyright
|
||||
|
||||
(The MIT License)
|
||||
|
|
|
|||
7
Rakefile
7
Rakefile
|
|
@ -72,11 +72,11 @@ def tests(mocked)
|
|||
start = Time.now.to_i
|
||||
threads = []
|
||||
Thread.main[:results] = []
|
||||
Fog.providers.each do |provider|
|
||||
Fog.providers.each do |key, value|
|
||||
threads << Thread.new do
|
||||
Thread.main[:results] << {
|
||||
:provider => provider,
|
||||
:success => sh("export FOG_MOCK=#{mocked} && bundle exec shindont +#{provider.downcase}")
|
||||
:provider => value,
|
||||
:success => sh("export FOG_MOCK=#{mocked} && bundle exec shindont +#{key}")
|
||||
}
|
||||
end
|
||||
end
|
||||
|
|
@ -249,6 +249,7 @@ task :changelog do
|
|||
'Henry Addison',
|
||||
'Lincoln Stoll',
|
||||
'Luqman Amjad',
|
||||
'Michael Zeng',
|
||||
'nightshade427',
|
||||
'Patrick Debois',
|
||||
'Wesley Beary'
|
||||
|
|
|
|||
426
changelog.txt
426
changelog.txt
|
|
@ -1,3 +1,429 @@
|
|||
1.1.2 12/18/2011 c1873e37e76af83e9de3f3308f3baa0664dd8dc2
|
||||
=========================================================
|
||||
|
||||
Stats! { 'collaborators' => 20, 'downloads' => 351821, 'forks' => 332, 'open_issues' => 21, 'watchers' => 1731 }
|
||||
|
||||
MVP! Stepan G. Fedorov
|
||||
|
||||
[Brightbox]
|
||||
Fix zone_id/flavour_id getter/setter for Server. thanks Hemant Kumar
|
||||
Add zone/server_type attribute for Server. thanks Hemant Kumar
|
||||
Add username to Image. thanks Hemant Kumar
|
||||
Add request for remove_firewall_policy. thanks Hemant Kumar
|
||||
Add model method for remove. thanks Hemant Kumar
|
||||
Change logic of fetching zone and flavour_id. thanks Hemant Kumar
|
||||
Remove name as mandatory parameter for creating server group. thanks Hemant Kumar
|
||||
Add created_at attribute for server_group,policy and firewall rule. thanks Hemant Kumar
|
||||
Updated Image format tests for username. thanks Paul Thornthwaite
|
||||
Updated ServerGroup format for created_at time. thanks Paul Thornthwaite
|
||||
|
||||
[aws|autoscaling]
|
||||
allow sa-east-1 region in mocks. thanks Nick Osborn
|
||||
|
||||
[aws|compute]
|
||||
fix security_group format for mock tests. thanks geemus
|
||||
|
||||
[aws|dns]
|
||||
fix capitilization for records#all options. thanks geemus
|
||||
|
||||
[aws|elb]
|
||||
update SSL certificates on listeners. :christmas_tree:. thanks Dylan Egan
|
||||
|
||||
[aws|storage]
|
||||
Support ACL on copy_object. :v:. thanks Dylan Egan
|
||||
|
||||
[brightbox]
|
||||
Adding *_server actions to ServerGroup model. thanks Caius Durling
|
||||
Pass along server_groups when creating a server. thanks Caius Durling
|
||||
Make update_cloud_ip request work. thanks Caius Durling
|
||||
Firewall models. thanks Paul Thornthwaite
|
||||
Added missing requirement and request arg. thanks Paul Thornthwaite
|
||||
Corrected deprecated argument. thanks Paul Thornthwaite
|
||||
Dynamically select testing image. thanks Paul Thornthwaite
|
||||
Helper to get a test server ready. thanks Paul Thornthwaite
|
||||
Revised tests structure. thanks Paul Thornthwaite
|
||||
Test and fix for API client secret reset. thanks Paul Thornthwaite
|
||||
Test update of reverse DNS for CIP. thanks Paul Thornthwaite
|
||||
Updated default Ubuntu image. thanks Paul Thornthwaite
|
||||
Make Cloud IP model's map nicer to use. thanks Paul Thornthwaite
|
||||
Correctly get Server's IP addresses as strings. thanks Paul Thornthwaite
|
||||
ServerGroup association to Servers. thanks Paul Thornthwaite
|
||||
Replace duplicate remove with move test. thanks Paul Thornthwaite
|
||||
Load balancer request tests expanded. thanks Paul Thornthwaite
|
||||
Request test for snapshotting a server. thanks Paul Thornthwaite
|
||||
fix mock tests. thanks geemus
|
||||
|
||||
[clodo]
|
||||
: Added missing field. thanks NomadRain
|
||||
Some cleanup before pool request. thanks NomadRain
|
||||
add fake credentials for mock tests. thanks geemus
|
||||
|
||||
[clodo|compute]
|
||||
Bug fixes. thanks NomadRain
|
||||
I don't know what is ignore_awful_caching, so i removed it. thanks Stepan G Fedorov
|
||||
server.ssh with password. Not only with key. thanks Stepan G Fedorov
|
||||
Fix Mocks. thanks Stepan G Fedorov
|
||||
Enable get_image_details. thanks Stepan G Fedorov
|
||||
Actualize Mocks. thanks Stepan G. Fedorov
|
||||
Enable :get_image_details. thanks Stepan G. Fedorov
|
||||
Add tests. thanks Stepan G. Fedorov
|
||||
Remove ddosprotect field from Mock. thanks Stepan G. Fedorov
|
||||
Add ip-address management. thanks Stepan G. Fedorov
|
||||
Rename moveip to move_ip_address. thanks Stepan G. Fedorov
|
||||
Enable ip-management. thanks Stepan G. Fedorov
|
||||
Fix delete_server mock. thanks Stepan G. Fedorov
|
||||
Fix move_ip_address behaviour. thanks Stepan G. Fedorov
|
||||
Add ip-address management. thanks Stepan G. Fedorov
|
||||
Rename moveip to move_ip_address. thanks Stepan G. Fedorov
|
||||
Enable ip-management. thanks Stepan G. Fedorov
|
||||
Fix delete_server mock. thanks Stepan G. Fedorov
|
||||
Fix move_ip_address behaviour. thanks Stepan G. Fedorov
|
||||
Added missing field (server.type). thanks Обоев Рулон ибн Хаттаб
|
||||
|
||||
[core]
|
||||
Cast Fog.wait_for interval to float. thanks Aaron Suggs
|
||||
fix exceptions from nil credential value. thanks Blake Gentry
|
||||
`@credential` should always be a symbol. thanks Hunter Haugen
|
||||
|
||||
[docs]
|
||||
note in title that multiple keys is an EC2 thing. thanks geemus
|
||||
|
||||
[glesys|compute]
|
||||
fixed tests due to changes in the api. thanks Anton Lindström
|
||||
fix test formats and whitespaces. thanks Anton Lindström
|
||||
|
||||
[misc]
|
||||
parse SQS timestamps as milliseconds. thanks Andrew Bruce
|
||||
Allow use of sa-east-1 in the ec2 mock as well. thanks Andy Delcambre
|
||||
Enabled tests for setting S3 ACL by id and uri on buckets and objects when mocking. thanks Arvid Andersson
|
||||
Added acl_to_hash helper method to Fog::Storage::AWS. thanks Arvid Andersson
|
||||
Ensuring that get_object_acl and get_bucket_acl mock methods returns a hash representation of the ACL. thanks Arvid Andersson
|
||||
Created Rackspace LB models folder. thanks Brian Hartsock
|
||||
This patch adds the ability to specify security groups by security group id, rather than group name. This is a required feature to use security groups within a VPC. thanks Eric Stonfer
|
||||
indentation change. thanks Eric Stonfer
|
||||
Add the ability to return the security group ID when requesting a SecurityGroupData object. thanks Eric Stonfer
|
||||
fix tests to accomodate the new SecurityGroupId. thanks Eric Stonfer
|
||||
Revert "fix tests to accomodate the new SecurityGroupId". thanks Eric Stonfer
|
||||
fix tests to accomodate the addition of security_group_id. thanks Eric Stonfer
|
||||
indentation fix. thanks Eric Stonfer
|
||||
indentation fix. thanks Eric Stonfer
|
||||
[Brightbox]Add remove_firewall_policy to computer.rb. thanks Hemant Kumar
|
||||
[Brightbox]Protocol is no longer required parameter for firewall. thanks Hemant Kumar
|
||||
Add implementation of DescribeInstanceStatus. thanks JD Huntington & Jason Hansen
|
||||
fixed type-o in rdoc on Fog::DNS:DNSMadeEasy. thanks John Dyer
|
||||
add query options to Fog::Storage::AWS#get_object_https_url. thanks Mateusz Juraszek
|
||||
add options hash to Fog::Storage::AWS::File#url and Fog::Storage::AWS::Files#get_https_url which use get_object_https_url method. thanks Mateusz Juraszek
|
||||
add query param to get_object_http_url for consistency. thanks Mateusz Juraszek
|
||||
Fix regression in Rakefile introduced in 70e7ea13. thanks Michael Brodhead
|
||||
add são paulo/brasil region. thanks Raphael Costa
|
||||
mock create_db_instance. thanks Rodrigo Estebanez
|
||||
mocking describe_db_instance. Fix hash structure in create_db_instance. thanks Rodrigo Estebanez
|
||||
mocking delete_db_instance. thanks Rodrigo Estebanez
|
||||
mocking wait_for through describe_db_instances. thanks Rodrigo Estebanez
|
||||
mocking modify_db_instance and reboot_db_instance. thanks Rodrigo Estebanez
|
||||
raise exception instead of excon response. thanks Rodrigo Estebanez
|
||||
Fixing bug: It always showed the first instance when using get. thanks Rodrigo Estebanez
|
||||
Fixes for issues 616 and 617. thanks Sergio Rubio
|
||||
* remove unnecessary debugging. thanks Sergio Rubio
|
||||
* Add missing recognized :libvirt_ip_command. thanks Sergio Rubio
|
||||
* Add server_name environment variable to ip_command. thanks Sergio Rubio
|
||||
* implement :destroy_volumes in Server.destroy (libvirt provider). thanks Sergio Rubio
|
||||
Add documentation for using multiple ssh keys on AWS. thanks Sven Pfleiderer
|
||||
Update bootstrap description. thanks Sven Pfleiderer
|
||||
Escape underscore charakters. thanks Sven Pfleiderer
|
||||
implement STS support. thanks Thom May
|
||||
Allow use of session tokens in AWS Compute. thanks Thom May
|
||||
handle session tokens for SQS and SimpleDB. thanks Thom May
|
||||
Split [AWS|STS] tests into separate files per #609. thanks Thom May
|
||||
Bug fix, metric_statistic#save would always fail. thanks bmiller
|
||||
bump excon dep. thanks geemus
|
||||
bump excon dep. thanks geemus
|
||||
Fixing Rackspace's lack of integer-as-string support as per https://github.com/fog/fog/pull/657#issuecomment-3145337. thanks jimworm
|
||||
add current set of elasticache endpoints. thanks lostboy
|
||||
added sa-east-1 region. thanks thattommyhall
|
||||
Add clodo support. thanks Обоев Рулон ибн Хаттаб
|
||||
Enable clodo support. thanks Обоев Рулон ибн Хаттаб
|
||||
|
||||
[rackspace|dns]
|
||||
error state callbacks now return an error. thanks Brian Hartsock
|
||||
fixed broken test. thanks Brian Hartsock
|
||||
should recognize rackspace_dns_endpoint argument. thanks geemus
|
||||
record should pass priority. thanks geemus
|
||||
mark tests for models pending in mocked mode. thanks geemus
|
||||
|
||||
[rackspace|lb]
|
||||
Fixed bug #644 with HTTP health monitors. thanks Brian Hartsock
|
||||
fix for #650 - Connection logging now loads appropriately. thanks Brian Hartsock
|
||||
added error page requests. thanks Brian Hartsock
|
||||
Added error pages to the model. thanks Brian Hartsock
|
||||
Added list parameter for nodeddress. thanks Brian Hartsock
|
||||
fixed broken test; cleaned up some tests. thanks Brian Hartsock
|
||||
|
||||
[rackspace|load balancers]
|
||||
fixed broken tests. thanks Brian Hartsock
|
||||
|
||||
[rackspace|loadbalancers]
|
||||
Fixed bug in deleting multiple nodes. thanks Brian Hartsock
|
||||
|
||||
[slicehost|compute]
|
||||
update image id in tests. thanks geemus
|
||||
|
||||
[storm_on_demand]
|
||||
fixes for formats in tests. thanks geemus
|
||||
|
||||
[tests | clodo]
|
||||
Added ip-management tests. thanks Stepan G. Fedorov
|
||||
Added ip-management tests. thanks Stepan G. Fedorov
|
||||
|
||||
[tests | clodo ]
|
||||
ddosprotect field must not exist. thanks Stepan G. Fedorov
|
||||
|
||||
[tests | clodo | compute]
|
||||
Add most tests. thanks Stepan G Fedorov
|
||||
Add image tests. thanks Stepan G Fedorov
|
||||
|
||||
[tests | clodo | compute ]
|
||||
create_server - First try. thanks Stepan G Fedorov
|
||||
|
||||
[vcloud]
|
||||
mark tests pending in mocked mode. thanks geemus
|
||||
|
||||
[vcloud|compute]
|
||||
introduce organizations. thanks Peter Meier
|
||||
make networks working also in organizations. thanks Peter Meier
|
||||
remove server from organizations as they are within vApps of vDC. thanks Peter Meier
|
||||
add catalogs to an organization. thanks Peter Meier
|
||||
a vdc does not have a tasklist. thanks Peter Meier
|
||||
introduce vapps. thanks Peter Meier
|
||||
More work on getting server in a useable shape. thanks Peter Meier
|
||||
fix network to the minimum. thanks Peter Meier
|
||||
a vapp might not have any childrens attached. thanks Peter Meier
|
||||
improve models add tests. thanks Peter Meier
|
||||
improve disk info access. thanks Peter Meier
|
||||
improve network. thanks Peter Meier
|
||||
introduce link on a network to parent network. thanks Peter Meier
|
||||
fix an issue if this is not parsed as an array. thanks Peter Meier
|
||||
stopgap fix for test data files. thanks geemus
|
||||
properly namespace vcloud test to prevent breaking others. thanks geemus
|
||||
|
||||
[vsphere]
|
||||
(#10644) Add servers filter to improve clone performance. thanks Jeff McCune
|
||||
fix whitespace issue in yaml for mocks. thanks geemus
|
||||
|
||||
|
||||
1.1.1 11/11/2011 a468aa9a3445aae4f496b1a51e26572b8379c3da
|
||||
=========================================================
|
||||
|
||||
Stats! { 'collaborators' => 19, 'downloads' => 300403, 'forks' => 300, 'open_issues' => 14, 'watchers' => 1667 }
|
||||
|
||||
[core]
|
||||
loosen net-ssh dependency to avoid chef conflict. thanks geemus
|
||||
|
||||
[misc]
|
||||
1.1.0 changelog. thanks geemus
|
||||
|
||||
|
||||
1.1.0 11/11/2011 b706c7ed66c2e760fdd6222e38c68768575483b2
|
||||
=========================================================
|
||||
|
||||
Stats! { 'collaborators' => 19, 'downloads' => 300383, 'forks' => 300, 'open_issues' => 16, 'watchers' => 1667 }
|
||||
|
||||
MVP! Michael Zeng
|
||||
|
||||
[Compute|Libvirt]
|
||||
Take into account a query string can be empty, different on some rubies it gives nil, on some empty string. thanks Patrick Debois
|
||||
|
||||
[OpenStack|compute]
|
||||
fix v2.0 auth endpoints. thanks Todd Willey
|
||||
default metadata to empy hash. thanks Todd Willey
|
||||
add zone awareness. thanks Todd Willey
|
||||
|
||||
[aws]
|
||||
add us-west-2 region. thanks geemus
|
||||
|
||||
[aws|cloud_watch]
|
||||
mark tests pending when mocked. thanks geemus
|
||||
|
||||
[aws|cloudwatch]
|
||||
Add support for put-metric-alarm call. thanks Jens Braeuer
|
||||
Remove duplicate RequestId from response. thanks Jens Braeuer
|
||||
Add mocked implementation of put_metric_alarm. thanks Jens Braeuer
|
||||
Fix whitespace. thanks Jens Braeuer
|
||||
Fix merge error. thanks Jens Braeuer
|
||||
Add mocked version of put_metric_alarm. thanks Jens Braeuer
|
||||
|
||||
[aws|compute]
|
||||
Mock modify_image_attribute add/remove users. thanks Dan Peterson
|
||||
Allow mock tagging to work across accounts. thanks Dan Peterson
|
||||
Fix new instance eventual consistency for the non-filtered case. thanks Dan Peterson
|
||||
Update security group operations. thanks Dan Peterson
|
||||
Test for more invalid security group request input when mocking. thanks Dan Peterson
|
||||
Fix a bug in delete_tags, but come up against a bug in AWS where tags aren't deleted if the resource still exists. thanks Dylan Egan
|
||||
tags are reset when reloading. #570. thanks Dylan Egan
|
||||
fixed sopt_instance_request reply parsing when the original request contained a device mapping. thanks MaF
|
||||
wait_for reload then add server tags. thanks geemus
|
||||
spot request fixes. thanks geemus
|
||||
tweaks for spot request bootstrap. thanks geemus
|
||||
save tags for spot_requests#bootstrap. thanks geemus
|
||||
update ami for windows. thanks geemus
|
||||
|
||||
[aws|elb]
|
||||
Missed a change as part of #545. thanks Dan Peterson
|
||||
use a set union to register new instances. thanks Dylan Egan
|
||||
return only the instance IDs on describe. Use only available availability zones. :v:. thanks Dylan Egan
|
||||
attribute aliases for CanonicalHostedZoneName(ID). :v:. thanks Dylan Egan
|
||||
eventually consistent, like me getting a haircut. :v:. thanks Dylan Egan
|
||||
|
||||
[aws|emr]
|
||||
mark tests pending when mocked. thanks geemus
|
||||
|
||||
[aws|iam]
|
||||
slight cleanup and test with a certificate chain. :cake:. thanks Dylan Egan
|
||||
|
||||
[aws|mock]
|
||||
Dig into mock data instead of instantiating new service objects. thanks Dan Peterson
|
||||
|
||||
[aws|storage]
|
||||
ensure path isn't empty when specifying endpoint. thanks geemus
|
||||
|
||||
[brightbox]
|
||||
Fixed incorrect call to reset_ftp_password. thanks Paul Thornthwaite
|
||||
|
||||
[brightbox|compute]
|
||||
format fixes for tests. thanks geemus
|
||||
|
||||
[core]
|
||||
treat boolean values as a boolean. thanks Peter Meier
|
||||
fix attribute squashing with : in key. thanks Peter Meier
|
||||
all services should recognize :connection_options. thanks geemus
|
||||
separate loggers for deprecations/warnings. thanks geemus
|
||||
avoid duplicates in Fog.providers. thanks geemus
|
||||
more useful structure for Fog.providers. thanks geemus
|
||||
add newlines to logger messages. thanks geemus
|
||||
update stats raketask to point to org. thanks geemus
|
||||
toss out nil-value keys when checking required credentials. thanks geemus
|
||||
|
||||
[dns]
|
||||
Made model tests use uniq domain names. thanks Brian Hartsock
|
||||
|
||||
[dnsmadeeasy|dns]
|
||||
Fix Fog::DNS::DNSMadeEasy::Record#save to handle updating a record correctly. thanks Peter Weldon
|
||||
|
||||
[docs]
|
||||
update links to point to http://github.com/fog/fog. thanks geemus
|
||||
|
||||
[dynect|dns]
|
||||
Automatically poll jobs if we get them. Closes #575. thanks Dan Peterson
|
||||
|
||||
[misc]
|
||||
Change response parameter. thanks Alan Ivey
|
||||
Missing HEAD method. thanks Andrew Newman
|
||||
Missing HEAD method. thanks Andrew Newman
|
||||
Putting version back. thanks Andrew Newman
|
||||
Reformatting and making consistent with other classes. thanks Andrew Newman
|
||||
Missed renam to head_namespace. thanks Andrew Newman
|
||||
Reverting version and date in gemspec. thanks Andrew Newman
|
||||
Formatting. thanks Andrew Newman
|
||||
Removed puts of element name. thanks Arvid Andersson
|
||||
Changes to allow EMR control through fog. thanks Bob Briski
|
||||
Added EMR functions for AWS. thanks Bob Briski
|
||||
Adding tests. thanks Bob Briski
|
||||
merge EMR changes with upstream repo. thanks Bob Briski
|
||||
(#10055) Search vmFolder inventory vs children. thanks Carl Caum
|
||||
Adding a path attribute to the vm_mob_ref hash. thanks Carl Caum
|
||||
Cleanup Attributes#merge_attributes. thanks Hemant Kumar
|
||||
Update S3 doc example to show current API. thanks Jason Roelofs
|
||||
Restructure main website's navigation. thanks Jason Roelofs
|
||||
Add CloudFormation UpdateStack call. thanks Jason Roelofs
|
||||
Minor whitespace change. thanks Jens Braeuer
|
||||
Trailing whitespace cleanup. thanks Jens Braeuer
|
||||
Whitespace cleanup. thanks Jens Braeuer
|
||||
Fix merge error. thanks Jens Braeuer
|
||||
Removed statement about @geemus being only member of collaborators list since it's not true anymore. thanks John Wang
|
||||
Fixes Fog::AWS::Storage#put_(bucket|object)_acl. thanks Jonas Pfenniger
|
||||
Randomize bucket names in tests. thanks Jonas Pfenniger
|
||||
Fix AWS S3 bucket and object tests. thanks Jonas Pfenniger
|
||||
(#10570) Use nil in-place of missing attributes. thanks Kelsey Hightower
|
||||
(#10570) Update `Fog::Compute::Vsphere` tests. thanks Kelsey Hightower
|
||||
We use 'Key' for all S3 objects now. thanks Kevin Menard
|
||||
Implemented mocks for Zerigo. thanks Kevin Menard
|
||||
Updated docs to use newer arg, rather than the old deprecated one. thanks Kevin Menard
|
||||
Added the ability to search Zerigo records for a particular zone. thanks Kevin Menard
|
||||
Return the only element of the array, not the array itself. thanks Kevin Menard
|
||||
Fixed an issue whereby saving an existing record in Zerigo would nil out its value. thanks Kevin Menard
|
||||
added DeleteAlarms, DescribeAlarms and PutMetricAlarms. thanks Michael Zeng
|
||||
re-adding files. thanks Michael Zeng
|
||||
adding describe_alarm_history. thanks Michael Zeng
|
||||
adding diable/enable alarm actions. thanks Michael Zeng
|
||||
added DescribeAlarmHistory request and parser. thanks Michael Zeng
|
||||
fixing describe_alarms and describe_alarms_for_metric requests. thanks Michael Zeng
|
||||
cleaned up requesters and parsers. thanks Michael Zeng
|
||||
added SetAlarmState. thanks Michael Zeng
|
||||
included more response elements, request parameters should now be complete. Included model and collection classes. thanks Michael Zeng
|
||||
bug fixes. thanks Michael Zeng
|
||||
fixed models and added tests. thanks Michael Zeng
|
||||
no need to add rake dep. thanks Michael Zeng
|
||||
revert gempspec date change. thanks Michael Zeng
|
||||
reverting cloud_watch.rb. thanks Michael Zeng
|
||||
reverting cloud_watch.rb. thanks Michael Zeng
|
||||
reverting cloud_watch.rb. thanks Michael Zeng
|
||||
reverting cloud_watch.rb. thanks Michael Zeng
|
||||
reverting cloud_watch.rb. thanks Michael Zeng
|
||||
added newline to the end of file. thanks Michael Zeng
|
||||
removed all tabs. thanks Michael Zeng
|
||||
added alarm_data_tests. thanks Michael Zeng
|
||||
spacing change. thanks Michael Zeng
|
||||
AWS#hash_to_acl - add support for EmailAddress and URI grantee types. thanks Nathan Sutton
|
||||
Test and improve Fog::Storage::AWS.hash_to_acl. thanks Nathan Sutton
|
||||
Adding a method to unmock Fog. Addresses issue #594. thanks Nathan Sutton
|
||||
Adding documentation for Fog.unmock! and Fog::Mock.reset. thanks Nathan Sutton
|
||||
added linode ssh support. thanks Nicholas Ricketts
|
||||
added linode ssh support with proper public ip address. thanks Nicholas Ricketts
|
||||
cleaned up code to use att_XX methods. thanks Nicholas Ricketts
|
||||
clean up public_ip_address code for linode. thanks Nicholas Ricketts
|
||||
Seems like rackspace might have changed this. thanks Nik Wakelin
|
||||
Sends power parameter in GoGrid's grid_server_power request. thanks Pablo Baños López
|
||||
Slicehost uses record-type and zone-id for their API, which messes with Fog internals, so changing these to record_type and zone_id in the parser. thanks Patrick McKenzie
|
||||
Did this do anything?. thanks Patrick McKenzie
|
||||
Revert "Slicehost uses record-type and zone-id for their API, which messes with Fog internals, so changing these to record_type and zone_id in the parser.". thanks Patrick McKenzie
|
||||
Not having the best of days with git. Revert the reversion of the commit that I really do want to make. thanks Patrick McKenzie
|
||||
Slicehost uses record-type and zone-id for their API, which messes with Fog internals, so changing these to record_type and zone_id in the parser. thanks Patrick McKenzie
|
||||
Do not touch .gitignore. thanks Patrick McKenzie
|
||||
Fixing Slicehost DNS so that a) tests pass b) token names map to what Fog expects -- record_type not record-type, value not data, etc c) creation of new DNS records possible. thanks Patrick McKenzie
|
||||
1) Fix so that getting a single record actually works. 2) zone.records currently returns all records in account, not just records for that zone. Add failing test (temporarily, assumes test account has existing zones for this to actually fail) + fix. 3) Add in data alias for record.value, just in case someone needs it, as Slicehost calls this data. thanks Patrick McKenzie
|
||||
Allow updates of DNS records. Updates on zones not supported yet. thanks Patrick McKenzie
|
||||
Fixing parsing of zone.records.get(id) so that it parses a single record properly rather than attempting to parse a list of records improperly. Fixing tests to match this (expected) behavior rather than work-around the broken way. thanks Patrick McKenzie
|
||||
Getting it so zone.records works as expected (loads all records, for that zone only). thanks Patrick McKenzie
|
||||
simplification. thanks Peter Meier
|
||||
Optimize vSphere convert_vm_mob_ref_to_attr_hash. thanks Rich Lane
|
||||
Compact the way options are mapped to request. thanks Todd Willey
|
||||
Allow setting userdata as plain ascii or b64. thanks Todd Willey
|
||||
bump excon dep. thanks geemus
|
||||
[rackspace][dns] fixes for job request format. thanks geemus
|
||||
bump net-ssh dependency. thanks geemus
|
||||
tshirt offer should be implicit, rather than explicit. thanks geemus
|
||||
add region option to aws sns service recognizes method. thanks lostboy
|
||||
add capabilities support to cloudformation createstack request. thanks lostboy
|
||||
|
||||
[ninefold|storage]
|
||||
omit signature in stringtosign. thanks geemus
|
||||
check objectid for existence. thanks geemus
|
||||
allow overwriting files for consistency. thanks geemus
|
||||
|
||||
[rackspace|dns]
|
||||
Fixed request tests that need unique domain name. thanks Brian Hartsock
|
||||
Adapted to changes in callback mechanism. thanks Brian Hartsock
|
||||
|
||||
[rackspace|load_balancers]
|
||||
made lb endpoint configurable. thanks Brian Hartsock
|
||||
|
||||
[release]
|
||||
omit Patrick Debois from future MVP status. thanks geemus
|
||||
|
||||
[vsphere|compute]
|
||||
test fixes. thanks geemus
|
||||
|
||||
|
||||
1.0.0 09/29/2011 a81be08ef2473af91f16f4926e5b3dfa962a34ae
|
||||
=========================================================
|
||||
|
||||
|
|
|
|||
|
|
@ -35,7 +35,7 @@
|
|||
<dl>
|
||||
<dt>version</dt><dd>vX.Y.Z</dd>
|
||||
<dt>install</dt><dd><code>gem install fog</code></dd>
|
||||
<dt>source</dt><dd><a href="http://github.com/fog/fog">geemus/fog</a></dd>
|
||||
<dt>source</dt><dd><a href="http://github.com/fog/fog">fog/fog</a></dd>
|
||||
</dl>
|
||||
</header>
|
||||
|
||||
|
|
|
|||
|
|
@ -61,7 +61,7 @@ Cycling servers is great, but in order to actually ssh in we need to setup ssh k
|
|||
|
||||
server = connection.servers.bootstrap(:private_key_path => '~/.ssh/id_rsa', :public_key_path => '~/.ssh/id_rsa.pub', :username => 'ubuntu')
|
||||
|
||||
Bootstrap will create the server, but it will also make sure that port 22 is open for traffic and has ssh keys setup. In order to hook everything up it will need the server to be running, so by the time it finishes it will be ready. You can then make commands to it directly:
|
||||
Bootstrap will create the server, but it will also make sure that port 22 is open for traffic and has ssh keys setup. The ssh key pair you specified will be registered under the name "fog\_default" unless you've set `Fog.credential` to a custom string value. In order to hook everything up `bootstrap` will need the server to be running, so by the time it finishes it will be ready. You can then make commands to it directly:
|
||||
|
||||
server.ssh('pwd')
|
||||
server.ssh(['pwd', 'whoami'])
|
||||
|
|
@ -70,6 +70,20 @@ These return an array of results, where each has stdout, stderr and status value
|
|||
|
||||
server.destroy
|
||||
|
||||
|
||||
### Managing multiple ssh key pairs on EC2
|
||||
|
||||
The key pair you've specified, will be registered as "fog\_default" after running `bootstrap` for the first time. If you want to use multiple key pairs with the same AWS credentials, you need to set `Fog.credential` to register your other key pairs under different names. Your additional key pair will then be registered as "fog\_#{Fog.credential}":
|
||||
|
||||
Fog.credential = 'my_custom_key'
|
||||
connection.servers.bootstrap(:private_key_path => '~/.ssh/my_custom_key', :public_key_path => '~/.ssh/my_custom_key.pub')
|
||||
|
||||
If you've already registered a custom key pair e.g. using `connection.create_key_pair` or `connection.import_key_pair`, you can set your key paths using `Fog.credentials` and pass in the name of this key so `bootstrap` will use it instead of "fog\_default":
|
||||
|
||||
Fog.credentials = Fog.credentials.merge({ :private_key_path => "~/.ssh/my_custom_key", :public_key_path => "~/.ssh/my_custom_key.pub" })
|
||||
connection.import_key_pair('my_custom_key', IO.read('~/.ssh/my_custom_key.pub')) if connection.key_pairs.get('my_custom_key').nil?
|
||||
server = connection.servers.bootstrap(:key_name => 'my_custom_key')
|
||||
|
||||
## Rackspace Cloud Servers
|
||||
|
||||
Rackspace has <a href="http://www.rackspacecloud.com/cloud_hosting_products/servers">Cloud Servers</a> and you can sign up <a href="https://www.rackspacecloud.com/signup">here</a> and get your credentials <a href="https://manage.rackspacecloud.com/APIAccess.do">here</a>.
|
||||
|
|
|
|||
|
|
@ -45,17 +45,9 @@ geemus says: "That should give you everything you need to get started, but let m
|
|||
* Find something you would like to work on. For suggestions look for the `easy`, `medium` and `hard` tags in the [issues](http://github.com/fog/fog/issues)
|
||||
* Fork the project and do your work in a topic branch.
|
||||
* Add shindo tests to prove your code works and run all the tests using `bundle exec rake`.
|
||||
* Rebase your branch against geemus/fog to make sure everything is up to date.
|
||||
* Rebase your branch against fog/fog to make sure everything is up to date.
|
||||
* Commit your changes and send a pull request.
|
||||
|
||||
## T-Shirts
|
||||
|
||||
Wonder how you can get a lovely fog shirt? Look no further!
|
||||
|
||||
* Blue shirts go to people who have contributed indirectly, great examples are writing blog posts or giving lightning talks.
|
||||
* Grey shirts and a follow from @fog go to people who have made it on to the [contributors list](https://github.com/fog/fog/contributors) by submitting code.
|
||||
* Black shirts go to people who have made it on to the [collaborators list](https://github.com/api/v2/json/repos/show/geemus/fog/collaborators) by coercing geemus into adding them.
|
||||
|
||||
## Resources
|
||||
|
||||
Enjoy, and let me know what I can do to continue improving fog!
|
||||
|
|
|
|||
|
|
@ -35,8 +35,8 @@ First, create a connection with your new account:
|
|||
# create a connection
|
||||
connection = Fog::Storage.new({
|
||||
:provider => 'AWS',
|
||||
:aws_secret_access_key => YOUR_SECRET_ACCESS_KEY,
|
||||
:aws_access_key_id => YOUR_SECRET_ACCESS_KEY_ID
|
||||
:aws_access_key_id => YOUR_SECRET_ACCESS_KEY_ID,
|
||||
:aws_secret_access_key => YOUR_SECRET_ACCESS_KEY
|
||||
})
|
||||
|
||||
# First, a place to contain the glorious details
|
||||
|
|
@ -131,8 +131,8 @@ Sign up <a href="http://gs-signup-redirect.appspot.com/">here</a> and get your c
|
|||
|
||||
connection = Fog::Storage.new({
|
||||
:provider => 'Google',
|
||||
:google_storage_secret_access_key => YOUR_SECRET_ACCESS_KEY,
|
||||
:google_storage_access_key_id => YOUR_SECRET_ACCESS_KEY_ID
|
||||
:google_storage_access_key_id => YOUR_SECRET_ACCESS_KEY_ID,
|
||||
:google_storage_secret_access_key => YOUR_SECRET_ACCESS_KEY
|
||||
})
|
||||
|
||||
## Rackspace CloudFiles
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ require File.join(File.dirname(__FILE__), '..', 'tests', 'helper')
|
|||
Shindo.tests('compute examples', 'compute') do
|
||||
|
||||
# iterate over all the providers
|
||||
Fog.providers.each do |provider|
|
||||
Fog.providers.values.each do |provider|
|
||||
|
||||
# FIXME: implement expected shared compute stuff for these providers as well
|
||||
next if ['Bluebox', 'Brightbox', 'Ecloud', 'GoGrid', 'Linode', 'NewServers', 'Ninefold', 'Slicehost', 'StormOnDemand', 'VirtualBox', 'Voxel'].include?(provider)
|
||||
|
|
@ -44,17 +44,29 @@ Shindo.tests('compute examples', 'compute') do
|
|||
|
||||
# scp a file to a server
|
||||
lorem_path = File.join([File.dirname(__FILE__), '..', 'tests', 'lorem.txt'])
|
||||
tests("@server.scp('#{lorem_path}', 'lorem.txt')").succeeds do
|
||||
@server.scp(lorem_path, 'lorem.txt')
|
||||
tests("@server.scp_upload('#{lorem_path}', 'lorem.txt')").succeeds do
|
||||
@server.scp_upload(lorem_path, 'lorem.txt')
|
||||
end
|
||||
|
||||
# scp a file from a server
|
||||
tests("@server.scp_download('lorem.txt', '/tmp/lorem.txt)").succeeds do
|
||||
@server.scp_download('lorem.txt', '/tmp/lorem.txt')
|
||||
end
|
||||
File.delete('/tmp/lorem.txt')
|
||||
|
||||
# scp a directory to a server
|
||||
Dir.mkdir('/tmp/lorem')
|
||||
file = ::File.new('/tmp/lorem/lorem.txt', 'w')
|
||||
file.write(File.read(lorem_path))
|
||||
lorem_dir = File.join([File.dirname(__FILE__), '..', 'tests'])
|
||||
tests("@server.scp('#{lorem_dir}', '/tmp/lorem', :recursive => true)").succeeds do
|
||||
@server.scp(lorem_dir, '/tmp/lorem', :recursive => true)
|
||||
tests("@server.scp_upload('/tmp/lorem', '/tmp', :recursive => true)").succeeds do
|
||||
@server.scp_upload('/tmp/lorem', '/tmp', :recursive => true)
|
||||
end
|
||||
File.delete('/tmp/lorem/lorem.txt')
|
||||
Dir.rmdir('/tmp/lorem')
|
||||
|
||||
# scp a directory from a server
|
||||
tests("@server.scp_download('/tmp/lorem', '/tmp', :recursive => true)").succeeds do
|
||||
@server.scp_download('/tmp/lorem', '/tmp', :recursive => true)
|
||||
end
|
||||
File.delete('/tmp/lorem/lorem.txt')
|
||||
Dir.rmdir('/tmp/lorem')
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ require File.join(File.dirname(__FILE__), '..', 'tests', 'helper')
|
|||
Shindo.tests('dns examples', 'dns') do
|
||||
|
||||
# iterate over all the providers
|
||||
Fog.providers.each do |provider|
|
||||
Fog.providers.values.each do |provider|
|
||||
|
||||
provider = eval(provider) # convert from string to object
|
||||
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ require File.join(File.dirname(__FILE__), '..', 'tests', 'helper')
|
|||
Shindo.tests('storage examples', 'storage') do
|
||||
|
||||
# iterate over all the providers
|
||||
Fog.providers.each do |provider|
|
||||
Fog.providers.values.each do |provider|
|
||||
|
||||
provider = eval(provider) # convert from string to object
|
||||
|
||||
|
|
|
|||
|
|
@ -6,8 +6,8 @@ Gem::Specification.new do |s|
|
|||
## If your rubyforge_project name is different, then edit it and comment out
|
||||
## the sub! line in the Rakefile
|
||||
s.name = 'fog'
|
||||
s.version = '1.0.0'
|
||||
s.date = '2011-09-29'
|
||||
s.version = '1.1.2'
|
||||
s.date = '2011-12-18'
|
||||
s.rubyforge_project = 'fog'
|
||||
|
||||
## Make sure your summary is short. The description may be as long
|
||||
|
|
@ -37,12 +37,12 @@ Gem::Specification.new do |s|
|
|||
## List your runtime dependencies here. Runtime dependencies are those
|
||||
## that are needed for an end user to actually USE your code.
|
||||
s.add_dependency('builder')
|
||||
s.add_dependency('excon', '~>0.7.4')
|
||||
s.add_dependency('excon', '~>0.9.0')
|
||||
s.add_dependency('formatador', '~>0.2.0')
|
||||
s.add_dependency('multi_json', '~>1.0.3')
|
||||
s.add_dependency('mime-types')
|
||||
s.add_dependency('net-scp', '~>1.0.4')
|
||||
s.add_dependency('net-ssh', '>=2.2.1')
|
||||
s.add_dependency('net-ssh', '>=2.1.3')
|
||||
s.add_dependency('nokogiri', '~>1.5.0')
|
||||
s.add_dependency('ruby-hmac')
|
||||
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ require File.join(File.dirname(__FILE__), 'fog', 'core')
|
|||
module Fog
|
||||
|
||||
unless const_defined?(:VERSION)
|
||||
VERSION = '1.0.0'
|
||||
VERSION = '1.1.2'
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -20,6 +20,7 @@ module Fog
|
|||
service(:simpledb, 'aws/simpledb', 'SimpleDB')
|
||||
service(:sns, 'aws/sns', 'SNS')
|
||||
service(:sqs, 'aws/sqs', 'SQS')
|
||||
service(:sts, 'aws/sts', 'STS')
|
||||
service(:storage, 'aws/storage', 'Storage')
|
||||
|
||||
def self.indexed_param(key, values)
|
||||
|
|
@ -85,6 +86,10 @@ module Fog
|
|||
'Version' => options[:version]
|
||||
})
|
||||
|
||||
params.merge!({
|
||||
'SecurityToken' => options[:aws_session_token]
|
||||
}) if options[:aws_session_token]
|
||||
|
||||
body = ''
|
||||
for key in params.keys.sort
|
||||
unless (value = params[key]).nil?
|
||||
|
|
@ -218,6 +223,10 @@ module Fog
|
|||
def self.volume_id
|
||||
"vol-#{Fog::Mock.random_hex(8)}"
|
||||
end
|
||||
|
||||
def self.rds_address(db_name,region)
|
||||
"#{db_name}.#{Fog::Mock.random_letters(rand(12) + 4)}.#{region}.rds.amazonaws.com"
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
|
|||
|
|
@ -90,6 +90,8 @@ module Fog
|
|||
'autoscaling.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'autoscaling.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'autoscaling.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -206,7 +208,7 @@ module Fog
|
|||
|
||||
@region = options[:region] || 'us-east-1'
|
||||
|
||||
unless ['ap-northeast-1', 'ap-southeast-1', 'eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2'].include?(@region)
|
||||
unless ['ap-northeast-1', 'ap-southeast-1', 'eu-west-1', 'sa-east-1', 'us-east-1', 'us-west-1', 'us-west-2'].include?(@region)
|
||||
raise ArgumentError, "Unknown region: #{@region.inspect}"
|
||||
end
|
||||
|
||||
|
|
|
|||
|
|
@ -67,6 +67,8 @@ module Fog
|
|||
'cloudformation.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'cloudformation.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'cloudformation.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
|
|||
|
|
@ -81,6 +81,8 @@ module Fog
|
|||
'monitoring.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'monitoring.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'monitoring.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
|
|||
|
|
@ -6,7 +6,7 @@ module Fog
|
|||
class AWS < Fog::Service
|
||||
|
||||
requires :aws_access_key_id, :aws_secret_access_key
|
||||
recognizes :endpoint, :region, :host, :path, :port, :scheme, :persistent
|
||||
recognizes :endpoint, :region, :host, :path, :port, :scheme, :persistent, :aws_session_token
|
||||
|
||||
model_path 'fog/aws/models/compute'
|
||||
model :address
|
||||
|
|
@ -57,6 +57,7 @@ module Fog
|
|||
request :describe_images
|
||||
request :describe_instances
|
||||
request :describe_reserved_instances
|
||||
request :describe_instance_status
|
||||
request :describe_key_pairs
|
||||
request :describe_placement_groups
|
||||
request :describe_regions
|
||||
|
|
@ -170,7 +171,7 @@ module Fog
|
|||
|
||||
@region = options[:region] || 'us-east-1'
|
||||
|
||||
unless ['ap-northeast-1', 'ap-southeast-1', 'eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2'].include?(@region)
|
||||
unless ['ap-northeast-1', 'ap-southeast-1', 'eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2', 'sa-east-1'].include?(@region)
|
||||
raise ArgumentError, "Unknown region: #{@region.inspect}"
|
||||
end
|
||||
end
|
||||
|
|
@ -249,6 +250,7 @@ module Fog
|
|||
# * options<~Hash> - config arguments for connection. Defaults to {}.
|
||||
# * region<~String> - optional region to use, in
|
||||
# ['eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2', 'ap-northeast-1', 'ap-southeast-1']
|
||||
# * aws_session_token<~String> - when using Session Tokens or Federated Users, a session_token must be presented
|
||||
#
|
||||
# ==== Returns
|
||||
# * EC2 object with connection to aws.
|
||||
|
|
@ -257,6 +259,7 @@ module Fog
|
|||
|
||||
@aws_access_key_id = options[:aws_access_key_id]
|
||||
@aws_secret_access_key = options[:aws_secret_access_key]
|
||||
@aws_session_token = options[:aws_session_token]
|
||||
@connection_options = options[:connection_options] || {}
|
||||
@hmac = Fog::HMAC.new('sha256', @aws_secret_access_key)
|
||||
@region = options[:region] ||= 'us-east-1'
|
||||
|
|
@ -281,6 +284,8 @@ module Fog
|
|||
'ec2.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'ec2.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'ec2.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -306,11 +311,12 @@ module Fog
|
|||
params,
|
||||
{
|
||||
:aws_access_key_id => @aws_access_key_id,
|
||||
:aws_session_token => @aws_session_token,
|
||||
:hmac => @hmac,
|
||||
:host => @host,
|
||||
:path => @path,
|
||||
:port => @port,
|
||||
:version => '2011-05-15'
|
||||
:version => '2011-11-01'
|
||||
}
|
||||
)
|
||||
|
||||
|
|
|
|||
|
|
@ -57,7 +57,14 @@ module Fog
|
|||
@host = options[:host] || case options[:region]
|
||||
when 'us-east-1'
|
||||
'elasticache.us-east-1.amazonaws.com'
|
||||
#TODO: Support other regions
|
||||
when 'us-west-1'
|
||||
'elasticache.us-west-1.amazonaws.com'
|
||||
when 'eu-west-1'
|
||||
'elasticache.eu-west-1.amazonaws.com'
|
||||
when 'ap-southeast-1'
|
||||
'elasticache.ap-southeast-1.amazonaws.com'
|
||||
when 'ap-northeast-1'
|
||||
'elasticache.ap-northeast-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
|
|||
|
|
@ -4,9 +4,13 @@ module Fog
|
|||
module AWS
|
||||
class ELB < Fog::Service
|
||||
|
||||
class IdentifierTaken < Fog::Errors::Error; end
|
||||
class InvalidInstance < Fog::Errors::Error; end
|
||||
class Throttled < Fog::Errors::Error; end
|
||||
class DuplicatePolicyName < Fog::Errors::Error; end
|
||||
class IdentifierTaken < Fog::Errors::Error; end
|
||||
class InvalidInstance < Fog::Errors::Error; end
|
||||
class PolicyNotFound < Fog::Errors::Error; end
|
||||
class PolicyTypeNotFound < Fog::Errors::Error; end
|
||||
class Throttled < Fog::Errors::Error; end
|
||||
class TooManyPolicies < Fog::Errors::Error; end
|
||||
|
||||
requires :aws_access_key_id, :aws_secret_access_key
|
||||
recognizes :region, :host, :path, :port, :scheme, :persistent
|
||||
|
|
@ -17,12 +21,15 @@ module Fog
|
|||
request :create_lb_cookie_stickiness_policy
|
||||
request :create_load_balancer
|
||||
request :create_load_balancer_listeners
|
||||
request :create_load_balancer_policy
|
||||
request :delete_load_balancer
|
||||
request :delete_load_balancer_listeners
|
||||
request :delete_load_balancer_policy
|
||||
request :deregister_instances_from_load_balancer
|
||||
request :describe_instance_health
|
||||
request :describe_load_balancers
|
||||
request :describe_load_balancer_policies
|
||||
request :describe_load_balancer_policy_types
|
||||
request :disable_availability_zones_for_load_balancer
|
||||
request :enable_availability_zones_for_load_balancer
|
||||
request :register_instances_with_load_balancer
|
||||
|
|
@ -39,13 +46,16 @@ module Fog
|
|||
|
||||
class Mock
|
||||
|
||||
require 'fog/aws/elb/policy_types'
|
||||
|
||||
def self.data
|
||||
@data ||= Hash.new do |hash, region|
|
||||
owner_id = Fog::AWS::Mock.owner_id
|
||||
hash[region] = Hash.new do |region_hash, key|
|
||||
region_hash[key] = {
|
||||
:owner_id => owner_id,
|
||||
:load_balancers => {}
|
||||
:load_balancers => {},
|
||||
:policy_types => Fog::AWS::ELB::Mock::POLICY_TYPES
|
||||
}
|
||||
end
|
||||
end
|
||||
|
|
@ -120,6 +130,8 @@ module Fog
|
|||
'elasticloadbalancing.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'elasticloadbalancing.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'elasticloadbalancing.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -148,7 +160,7 @@ module Fog
|
|||
:host => @host,
|
||||
:path => @path,
|
||||
:port => @port,
|
||||
:version => '2011-04-05'
|
||||
:version => '2011-11-15'
|
||||
}
|
||||
)
|
||||
|
||||
|
|
@ -166,14 +178,22 @@ module Fog
|
|||
case match[1]
|
||||
when 'CertificateNotFound'
|
||||
raise Fog::AWS::IAM::NotFound.slurp(error, match[2])
|
||||
when 'LoadBalancerNotFound'
|
||||
raise Fog::AWS::ELB::NotFound.slurp(error, match[2])
|
||||
when 'DuplicateLoadBalancerName'
|
||||
raise Fog::AWS::ELB::IdentifierTaken.slurp(error, match[2])
|
||||
when 'DuplicatePolicyName'
|
||||
raise Fog::AWS::ELB::DuplicatePolicyName.slurp(error, match[2])
|
||||
when 'InvalidInstance'
|
||||
raise Fog::AWS::ELB::InvalidInstance.slurp(error, match[2])
|
||||
when 'LoadBalancerNotFound'
|
||||
raise Fog::AWS::ELB::NotFound.slurp(error, match[2])
|
||||
when 'PolicyNotFound'
|
||||
raise Fog::AWS::ELB::PolicyNotFound.slurp(error, match[2])
|
||||
when 'PolicyTypeNotFound'
|
||||
raise Fog::AWS::ELB::PolicyTypeNotFound.slurp(error, match[2])
|
||||
when 'Throttling'
|
||||
raise Fog::AWS::ELB::Throttled.slurp(error, match[2])
|
||||
when 'TooManyPolicies'
|
||||
raise Fog::AWS::ELB::TooManyPolicies.slurp(error, match[2])
|
||||
else
|
||||
raise
|
||||
end
|
||||
|
|
|
|||
35
lib/fog/aws/elb/policy_types.rb
Normal file
35
lib/fog/aws/elb/policy_types.rb
Normal file
|
|
@ -0,0 +1,35 @@
|
|||
class Fog::AWS::ELB::Mock
|
||||
POLICY_TYPES = [{
|
||||
"Description" => "",
|
||||
"PolicyAttributeTypeDescriptions" => [{
|
||||
"AttributeName"=>"CookieName",
|
||||
"AttributeType"=>"String",
|
||||
"Cardinality"=>"ONE",
|
||||
"DefaultValue"=>"",
|
||||
"Description"=>""
|
||||
}],
|
||||
"PolicyTypeName"=>"AppCookieStickinessPolicyType"
|
||||
},
|
||||
{
|
||||
"Description" => "",
|
||||
"PolicyAttributeTypeDescriptions" => [{
|
||||
"AttributeName"=>"CookieExpirationPeriod",
|
||||
"AttributeType"=>"String",
|
||||
"Cardinality"=>"ONE",
|
||||
"DefaultValue"=>"",
|
||||
"Description"=>""
|
||||
}],
|
||||
"PolicyTypeName"=>"LBCookieStickinessPolicyType"
|
||||
},
|
||||
{
|
||||
"Description" => "Policy containing a list of public keys to accept when authenticating the back-end server(s). This policy cannot be applied directly to back-end servers or listeners but must be part of a BackendServerAuthenticationPolicyType.",
|
||||
"PolicyAttributeTypeDescriptions" => [{
|
||||
"AttributeName"=>"PublicKey",
|
||||
"AttributeType"=>"String",
|
||||
"Cardinality"=>"ONE",
|
||||
"DefaultValue"=>"",
|
||||
"Description"=>""
|
||||
}],
|
||||
"PolicyTypeName"=>"PublicKeyPolicyType"
|
||||
}]
|
||||
end
|
||||
|
|
@ -26,10 +26,10 @@ module Fog
|
|||
# collection :snapshots
|
||||
# model :parameter_group
|
||||
# collection :parameter_groups
|
||||
#
|
||||
#
|
||||
# model :parameter
|
||||
# collection :parameters
|
||||
#
|
||||
#
|
||||
# model :security_group
|
||||
# collection :security_groups
|
||||
|
||||
|
|
@ -68,12 +68,30 @@ module Fog
|
|||
@hmac = Fog::HMAC.new('sha256', @aws_secret_access_key)
|
||||
|
||||
options[:region] ||= 'us-east-1'
|
||||
@host = options[:host] || 'elasticmapreduce.amazonaws.com'
|
||||
@host = options[:host] || case options[:region]
|
||||
when 'ap-northeast-1'
|
||||
'elasticmapreduce.ap-northeast-1.amazonaws.com'
|
||||
when 'ap-southeast-1'
|
||||
'elasticmapreduce.ap-southeast-1.amazonaws.com'
|
||||
when 'eu-west-1'
|
||||
'elasticmapreduce.eu-west-1.amazonaws.com'
|
||||
when 'us-east-1'
|
||||
'elasticmapreduce.us-east-1.amazonaws.com'
|
||||
when 'us-west-1'
|
||||
'elasticmapreduce.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'elasticmapreduce.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'elasticmapreduce.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
@path = options[:path] || '/'
|
||||
@persistent = options[:persistent] || false
|
||||
@port = options[:port] || 443
|
||||
@scheme = options[:scheme] || 'https'
|
||||
@connection = Fog::Connection.new("#{@scheme}://#{@host}:#{@port}#{@path}", @persistent, @connection_options)
|
||||
@region = options[:region]
|
||||
end
|
||||
|
||||
def reload
|
||||
|
|
|
|||
|
|
@ -25,7 +25,6 @@ module Fog
|
|||
|
||||
put_opts = {'MetricName' => metric_name, 'Unit' => unit}
|
||||
put_opts.merge!('Dimensions' => dimensions) if dimensions
|
||||
put_opts.merge!('Timestamp' => dimensions) if timestamp
|
||||
if value
|
||||
put_opts.merge!('Value' => value)
|
||||
else
|
||||
|
|
@ -43,4 +42,4 @@ module Fog
|
|||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
|
|||
|
|
@ -7,11 +7,11 @@ module Fog
|
|||
class SecurityGroup < Fog::Model
|
||||
|
||||
identity :name, :aliases => 'groupName'
|
||||
|
||||
attribute :description, :aliases => 'groupDescription'
|
||||
attribute :group_id, :aliases => 'groupId'
|
||||
attribute :ip_permissions, :aliases => 'ipPermissions'
|
||||
attribute :owner_id, :aliases => 'ownerId'
|
||||
attribute :vpc_id, :aliases => 'vpcId'
|
||||
|
||||
# Authorize access by another security group
|
||||
#
|
||||
|
|
@ -194,8 +194,7 @@ module Fog
|
|||
|
||||
def save
|
||||
requires :description, :name
|
||||
|
||||
data = connection.create_security_group(name, description).body
|
||||
data = connection.create_security_group(name, description, vpc_id).body
|
||||
true
|
||||
end
|
||||
|
||||
|
|
|
|||
|
|
@ -25,6 +25,7 @@ module Fog
|
|||
# description=nil,
|
||||
# ip_permissions=nil,
|
||||
# owner_id=nil
|
||||
# vpc_id=nil
|
||||
# >
|
||||
#
|
||||
|
||||
|
|
@ -50,6 +51,7 @@ module Fog
|
|||
# description="default group",
|
||||
# ip_permissions=[{"groups"=>[{"groupName"=>"default", "userId"=>"312571045469"}], "fromPort"=>-1, "toPort"=>-1, "ipRanges"=>[], "ipProtocol"=>"icmp"}, {"groups"=>[{"groupName"=>"default", "userId"=>"312571045469"}], "fromPort"=>0, "toPort"=>65535, "ipRanges"=>[], "ipProtocol"=>"tcp"}, {"groups"=>[{"groupName"=>"default", "userId"=>"312571045469"}], "fromPort"=>0, "toPort"=>65535, "ipRanges"=>[], "ipProtocol"=>"udp"}],
|
||||
# owner_id="312571045469"
|
||||
# vpc_id=nill
|
||||
# >
|
||||
# ]
|
||||
# >
|
||||
|
|
@ -79,6 +81,7 @@ module Fog
|
|||
# description="default group",
|
||||
# ip_permissions=[{"groups"=>[{"groupName"=>"default", "userId"=>"312571045469"}], "fromPort"=>-1, "toPort"=>-1, "ipRanges"=>[], "ipProtocol"=>"icmp"}, {"groups"=>[{"groupName"=>"default", "userId"=>"312571045469"}], "fromPort"=>0, "toPort"=>65535, "ipRanges"=>[], "ipProtocol"=>"tcp"}, {"groups"=>[{"groupName"=>"default", "userId"=>"312571045469"}], "fromPort"=>0, "toPort"=>65535, "ipRanges"=>[], "ipProtocol"=>"udp"}],
|
||||
# owner_id="312571045469"
|
||||
# vpc_id=nil
|
||||
# >
|
||||
#
|
||||
|
||||
|
|
|
|||
|
|
@ -34,6 +34,7 @@ module Fog
|
|||
attribute :reason
|
||||
attribute :root_device_name, :aliases => 'rootDeviceName'
|
||||
attribute :root_device_type, :aliases => 'rootDeviceType'
|
||||
attribute :security_group_ids, :aliases => 'securityGroupIds'
|
||||
attribute :state, :aliases => 'instanceState', :squash => 'name'
|
||||
attribute :state_reason, :aliases => 'stateReason'
|
||||
attribute :subnet_id, :aliases => 'subnetId'
|
||||
|
|
@ -45,7 +46,7 @@ module Fog
|
|||
attr_writer :private_key, :private_key_path, :public_key, :public_key_path, :username
|
||||
|
||||
def initialize(attributes={})
|
||||
self.groups ||= ["default"] unless attributes[:subnet_id]
|
||||
self.groups ||= ["default"] unless (attributes[:subnet_id] || attributes[:security_group_ids])
|
||||
self.flavor_id ||= 't1.micro'
|
||||
self.image_id ||= begin
|
||||
self.username = 'ubuntu'
|
||||
|
|
@ -152,6 +153,7 @@ module Fog
|
|||
'Placement.Tenancy' => tenancy,
|
||||
'RamdiskId' => ramdisk_id,
|
||||
'SecurityGroup' => groups,
|
||||
'SecurityGroupId' => security_group_ids,
|
||||
'SubnetId' => subnet_id,
|
||||
'UserData' => user_data
|
||||
}
|
||||
|
|
|
|||
|
|
@ -20,9 +20,9 @@ module Fog
|
|||
|
||||
def all(options = {})
|
||||
requires :zone
|
||||
options['MaxItems'] ||= max_items
|
||||
options['Name'] ||= name
|
||||
options['Type'] ||= type
|
||||
options['maxitems'] ||= max_items
|
||||
options['name'] ||= name
|
||||
options['type'] ||= type
|
||||
data = connection.list_resource_record_sets(zone.id, options).body
|
||||
merge_attributes(data.reject {|key, value| !['IsTruncated', 'MaxItems', 'NextRecordName', 'NextRecordType'].include?(key)})
|
||||
# leave out the default, read only records
|
||||
|
|
|
|||
|
|
@ -96,6 +96,12 @@ module Fog
|
|||
reload
|
||||
end
|
||||
|
||||
def set_listener_ssl_certificate(port, ssl_certificate_id)
|
||||
requires :id
|
||||
connection.set_load_balancer_listener_ssl_certificate(id, port, ssl_certificate_id)
|
||||
reload
|
||||
end
|
||||
|
||||
def unset_listener_policy(port)
|
||||
set_listener_policy(port, [])
|
||||
end
|
||||
|
|
|
|||
|
|
@ -50,7 +50,7 @@ module Fog
|
|||
requires :directory, :key
|
||||
connection.copy_object(directory.key, key, target_directory_key, target_file_key, options)
|
||||
target_directory = connection.directories.new(:key => target_directory_key)
|
||||
target_directory.files.get(target_file_key)
|
||||
target_directory.files.head(target_file_key)
|
||||
end
|
||||
|
||||
def destroy
|
||||
|
|
@ -123,9 +123,9 @@ module Fog
|
|||
true
|
||||
end
|
||||
|
||||
def url(expires)
|
||||
def url(expires, options = {})
|
||||
requires :key
|
||||
collection.get_https_url(key, expires)
|
||||
collection.get_https_url(key, expires, options)
|
||||
end
|
||||
|
||||
private
|
||||
|
|
|
|||
|
|
@ -78,14 +78,14 @@ module Fog
|
|||
end
|
||||
end
|
||||
|
||||
def get_http_url(key, expires)
|
||||
def get_http_url(key, expires, options = {})
|
||||
requires :directory
|
||||
connection.get_object_http_url(directory.key, key, expires)
|
||||
connection.get_object_http_url(directory.key, key, expires, options)
|
||||
end
|
||||
|
||||
def get_https_url(key, expires)
|
||||
def get_https_url(key, expires, options = {})
|
||||
requires :directory
|
||||
connection.get_object_https_url(directory.key, key, expires)
|
||||
connection.get_object_https_url(directory.key, key, expires, options)
|
||||
end
|
||||
|
||||
def head(key, options = {})
|
||||
|
|
|
|||
64
lib/fog/aws/parsers/compute/describe_instance_status.rb
Normal file
64
lib/fog/aws/parsers/compute/describe_instance_status.rb
Normal file
|
|
@ -0,0 +1,64 @@
|
|||
module Fog
|
||||
module Parsers
|
||||
module Compute
|
||||
module AWS
|
||||
class DescribeInstanceStatus < Fog::Parsers::Base
|
||||
|
||||
def new_instance
|
||||
@instance = { 'eventsSet' => [], 'instanceState' => {} }
|
||||
end
|
||||
|
||||
def new_event
|
||||
@event = {}
|
||||
end
|
||||
|
||||
def reset
|
||||
@instance_status = {}
|
||||
@response = { 'instanceStatusSet' => [] }
|
||||
@in_events_set = false
|
||||
new_event
|
||||
new_instance
|
||||
end
|
||||
|
||||
def start_element(name, attrs=[])
|
||||
super
|
||||
case name
|
||||
when 'eventsSet'
|
||||
@in_events_set = true
|
||||
end
|
||||
end
|
||||
|
||||
|
||||
def end_element(name)
|
||||
if @in_events_set
|
||||
case name
|
||||
when 'code', 'description'
|
||||
@event[name] = value
|
||||
when 'notAfter', 'notBefore'
|
||||
@event[name] = Time.parse(value)
|
||||
when 'item'
|
||||
@instance['eventsSet'] << @event
|
||||
new_event
|
||||
when 'eventsSet'
|
||||
@in_events_set = false
|
||||
end
|
||||
else
|
||||
case name
|
||||
when 'instanceId', 'availabilityZone'
|
||||
@instance[name] = value
|
||||
when 'name', 'code'
|
||||
@instance['instanceState'][name] = value
|
||||
when 'item'
|
||||
@response['instanceStatusSet'] << @instance
|
||||
new_instance
|
||||
when 'requestId'
|
||||
@response[name] = value
|
||||
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
64
lib/fog/aws/parsers/elb/describe_load_balancer_policies.rb
Normal file
64
lib/fog/aws/parsers/elb/describe_load_balancer_policies.rb
Normal file
|
|
@ -0,0 +1,64 @@
|
|||
module Fog
|
||||
module Parsers
|
||||
module AWS
|
||||
module ELB
|
||||
|
||||
class DescribeLoadBalancerPolicies < Fog::Parsers::Base
|
||||
|
||||
def reset
|
||||
reset_policy
|
||||
reset_policy_attribute_description
|
||||
@results = { 'PolicyDescriptions' => [] }
|
||||
@response = { 'DescribeLoadBalancerPoliciesResult' => {}, 'ResponseMetadata' => {} }
|
||||
end
|
||||
|
||||
def reset_policy
|
||||
@policy = { 'PolicyAttributeDescriptions' => [], 'PolicyName' => '', 'PolicyTypeName' => '' }
|
||||
end
|
||||
|
||||
def reset_policy_attribute_description
|
||||
@policy_attribute_description = { 'AttributeName' => '', 'AttributeValue' => '' }
|
||||
end
|
||||
|
||||
def start_element(name, attrs = [])
|
||||
super
|
||||
case name
|
||||
when 'PolicyAttributeDescriptions'
|
||||
@in_policy_attributes = true
|
||||
end
|
||||
end
|
||||
|
||||
def end_element(name)
|
||||
case name
|
||||
when 'member'
|
||||
if @in_policy_attributes
|
||||
@policy['PolicyAttributeDescriptions'] << @policy_attribute_description
|
||||
reset_policy_attribute_description
|
||||
elsif !@in_policy_attributes
|
||||
@results['PolicyDescriptions'] << @policy
|
||||
reset_policy
|
||||
end
|
||||
|
||||
when 'PolicyName', 'PolicyTypeName'
|
||||
@policy[name] = value
|
||||
|
||||
when 'PolicyAttributeDescriptions'
|
||||
@in_policy_attributes = false
|
||||
|
||||
when 'AttributeName', 'AttributeValue'
|
||||
@policy_attribute_description[name] = value
|
||||
|
||||
when 'RequestId'
|
||||
@response['ResponseMetadata'][name] = value
|
||||
|
||||
when 'DescribeLoadBalancerPoliciesResponse'
|
||||
@response['DescribeLoadBalancerPoliciesResult'] = @results
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
@ -0,0 +1,70 @@
|
|||
module Fog
|
||||
module Parsers
|
||||
module AWS
|
||||
module ELB
|
||||
|
||||
class DescribeLoadBalancerPolicyTypes < Fog::Parsers::Base
|
||||
|
||||
def reset
|
||||
reset_policy_type
|
||||
reset_policy_attribute_type_description
|
||||
@results = { 'PolicyTypeDescriptions' => [] }
|
||||
@response = { 'DescribeLoadBalancerPolicyTypesResult' => {}, 'ResponseMetadata' => {} }
|
||||
end
|
||||
|
||||
def reset_policy_type
|
||||
@policy_type = { 'Description' => '', 'PolicyAttributeTypeDescriptions' => [], 'PolicyTypeName' => '' }
|
||||
end
|
||||
|
||||
def reset_policy_attribute_type_description
|
||||
@policy_attribute_type_description = { 'AttributeName' => '', 'AttributeType' => '', 'Cardinality' => '', 'DefaultValue' => '', 'Description' => '' }
|
||||
end
|
||||
|
||||
def start_element(name, attrs = [])
|
||||
super
|
||||
case name
|
||||
when 'PolicyAttributeTypeDescriptions'
|
||||
@in_policy_attribute_types = true
|
||||
end
|
||||
end
|
||||
|
||||
def end_element(name)
|
||||
case name
|
||||
when 'member'
|
||||
if @in_policy_attribute_types
|
||||
@policy_type['PolicyAttributeTypeDescriptions'] << @policy_attribute_type_description
|
||||
reset_policy_attribute_type_description
|
||||
elsif !@in_policy_attribute_types
|
||||
@results['PolicyTypeDescriptions'] << @policy_type
|
||||
reset_policy_type
|
||||
end
|
||||
|
||||
when 'Description'
|
||||
if @in_policy_attribute_types
|
||||
@policy_attribute_type_description[name] = value
|
||||
else
|
||||
@policy_type[name] = value
|
||||
end
|
||||
when 'PolicyTypeName'
|
||||
@policy_type[name] = value
|
||||
|
||||
when 'PolicyAttributeTypeDescriptions'
|
||||
@in_policy_attribute_types = false
|
||||
|
||||
when 'AttributeName', 'AttributeType', 'Cardinality', 'DefaultValue'
|
||||
@policy_attribute_type_description[name] = value
|
||||
|
||||
when 'RequestId'
|
||||
@response['ResponseMetadata'][name] = value
|
||||
|
||||
when 'DescribeLoadBalancerPolicyTypesResponse'
|
||||
@response['DescribeLoadBalancerPolicyTypesResult'] = @results
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
@@ -24,7 +24,7 @@ module Fog
          when 'Value'
            case @current_attribute_name
            when 'ApproximateFirstReceiveTimestamp', 'SentTimestamp'
-             @message['Attributes'][@current_attribute_name] = Time.at(@value.to_i)
+             @message['Attributes'][@current_attribute_name] = Time.at(@value.to_i / 1000.0)
            when 'ApproximateReceiveCount'
              @message['Attributes'][@current_attribute_name] = @value.to_i
            else
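The change above is needed because SQS reports SentTimestamp and ApproximateFirstReceiveTimestamp as epoch milliseconds, so the value has to be divided by 1000.0 before it is handed to Time.at. A quick illustration, using an arbitrary example value:

    sent_timestamp = "1324252800000"          # SQS attribute value: epoch milliseconds
    Time.at(sent_timestamp.to_i)              # wrong: treats milliseconds as seconds, lands tens of thousands of years in the future
    Time.at(sent_timestamp.to_i / 1000.0)     # => 2011-12-19 00:00:00 UTC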
31  lib/fog/aws/parsers/sts/get_session_token.rb  Normal file
@@ -0,0 +1,31 @@
module Fog
  module Parsers
    module AWS
      module STS

        class GetSessionToken < Fog::Parsers::Base
          # http://docs.amazonwebservices.com/IAM/latest/UserGuide/index.html?CreatingFedTokens.html

          def reset
            @response = {}
          end

          def end_element(name)
            case name
            when 'SessionToken', 'SecretAccessKey', 'Expiration', 'AccessKeyId'
              @response[name] = @value.strip
            when 'Arn', 'FederatedUserId'
              @response[name] = @value
            when 'PackedPolicySize'
              @response[name] = @value
            when 'RequestId'
              @response[name] = @value
            end
          end
        end
      end
    end
  end
end
|
@ -5,6 +5,8 @@ module Fog
|
|||
class RDS < Fog::Service
|
||||
|
||||
class IdentifierTaken < Fog::Errors::Error; end
|
||||
|
||||
class AuthorizationAlreadyExists < Fog::Errors::Error; end
|
||||
|
||||
requires :aws_access_key_id, :aws_secret_access_key
|
||||
recognizes :region, :host, :path, :port, :scheme, :persistent
|
||||
|
|
@ -57,9 +59,43 @@ module Fog
|
|||
|
||||
class Mock
|
||||
|
||||
def initialize(options={})
|
||||
Fog::Mock.not_implemented
|
||||
def self.data
|
||||
@data ||= Hash.new do |hash, region|
|
||||
owner_id = Fog::AWS::Mock.owner_id
|
||||
hash[region] = Hash.new do |region_hash, key|
|
||||
region_hash[key] = {
|
||||
:servers => {},
|
||||
:security_groups => {}
|
||||
}
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
def self.reset
|
||||
@data = nil
|
||||
end
|
||||
|
||||
def initialize(options={})
|
||||
|
||||
@aws_access_key_id = options[:aws_access_key_id]
|
||||
|
||||
@region = options[:region] || 'us-east-1'
|
||||
|
||||
unless ['ap-northeast-1', 'ap-southeast-1', 'eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2'].include?(@region)
|
||||
raise ArgumentError, "Unknown region: #{@region.inspect}"
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
def data
|
||||
self.class.data[@region][@aws_access_key_id]
|
||||
end
|
||||
|
||||
def reset_data
|
||||
self.class.data[@region].delete(@aws_access_key_id)
|
||||
end
|
||||
|
||||
|
||||
|
||||
end
|
||||
|
||||
|
|
@ -103,6 +139,8 @@ module Fog
|
|||
'rds.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'rds.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'rds.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -152,6 +190,8 @@ module Fog
|
|||
raise Fog::AWS::RDS::NotFound.slurp(error, match[2])
|
||||
when 'DBParameterGroupAlreadyExists'
|
||||
raise Fog::AWS::RDS::IdentifierTaken.slurp(error, match[2])
|
||||
when 'AuthorizationAlreadyExists'
|
||||
raise Fog::AWS::RDS::AuthorizationAlreadyExists.slurp(error, match[2])
|
||||
else
|
||||
raise
|
||||
end
|
||||
|
|
|
|||
|
|
@@ -18,12 +18,13 @@ module Fog
      # * 'return'<~Boolean> - success?
      #
      # {Amazon API Reference}[http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-query-CreateSecurityGroup.html]
-     def create_security_group(name, description)
+     def create_security_group(name, description, vpc_id=nil)
        request(
          'Action' => 'CreateSecurityGroup',
          'GroupName' => name,
          'GroupDescription' => description,
-         :parser => Fog::Parsers::Compute::AWS::Basic.new
+         :parser => Fog::Parsers::Compute::AWS::Basic.new,
+         'VpcId' => vpc_id
        )
      end

@@ -31,7 +32,7 @@ module Fog

    class Mock

-     def create_security_group(name, description)
+     def create_security_group(name, description, vpc_id=nil)
        response = Excon::Response.new
        unless self.data[:security_groups][name]
          data = {
@@ -39,7 +40,8 @@ module Fog
            'groupName' => name,
            'ipPermissionsEgress' => [],
            'ipPermissions' => [],
-           'ownerId' => self.data[:owner_id]
+           'ownerId' => self.data[:owner_id],
+           'vpcId' => vpc_id
          }
          self.data[:security_groups][name] = data
          response.body = {
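A minimal usage sketch for the extended request (not part of the commit); the credentials and the VPC id below are placeholders:

    require 'fog'

    compute = Fog::Compute.new(
      :provider              => 'AWS',
      :aws_access_key_id     => 'KEY',
      :aws_secret_access_key => 'SECRET'
    )

    # The third argument is optional, so existing callers are unaffected;
    # passing it creates the group inside the given VPC.
    compute.create_security_group('web', 'web servers')
    compute.create_security_group('web-vpc', 'web servers', 'vpc-12345678')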
@ -57,6 +57,9 @@ module Fog
|
|||
|
||||
{"messageSet" => [], "regionName" => "us-west-2", "zoneName" => "us-west-2a", "zoneState" => "available"},
|
||||
{"messageSet" => [], "regionName" => "us-west-2", "zoneName" => "us-west-2b", "zoneState" => "available"},
|
||||
|
||||
{"messageSet" => [], "regionName" => "sa-east-1", "zoneName" => "sa-east-1a", "zoneState" => "available"},
|
||||
{"messageSet" => [], "regionName" => "sa-east-1", "zoneName" => "sa-east-1b", "zoneState" => "available"},
|
||||
|
||||
{"messageSet" => [], "regionName" => "eu-west-1", "zoneName" => "eu-west-1a", "zoneState" => "available"},
|
||||
{"messageSet" => [], "regionName" => "eu-west-1", "zoneName" => "eu-west-1b", "zoneState" => "available"},
|
||||
|
|
|
|||
36  lib/fog/aws/requests/compute/describe_instance_status.rb  Normal file
@@ -0,0 +1,36 @@
module Fog
  module Compute
    class AWS
      class Real

        require 'fog/aws/parsers/compute/describe_instance_status'

        def describe_instance_status(filters = {})
          raise ArgumentError.new("Filters must be a hash, but is a #{filters.class}.") unless filters.is_a?(Hash)

          params = Fog::AWS.indexed_filters(filters)
          request({
            'Action' => 'DescribeInstanceStatus',
            :idempotent => true,
            :parser => Fog::Parsers::Compute::AWS::DescribeInstanceStatus.new
          }.merge!(params))
        end
      end

      class Mock
        def describe_instance_status(filters = {})
          response = Excon::Response.new
          response.status = 200

          response.body = {
            'instanceStatusSet' => [],
            'requestId' => Fog::AWS::Mock.request_id
          }

          response
        end
      end
    end
  end
end
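A minimal usage sketch for the new request (not part of the commit); the credentials are placeholders and the filter hash is optional:

    compute = Fog::Compute.new(
      :provider              => 'AWS',
      :aws_access_key_id     => 'KEY',
      :aws_secret_access_key => 'SECRET'
    )

    # Filters, when given, are expanded with Fog::AWS.indexed_filters.
    status = compute.describe_instance_status
    status.body['instanceStatusSet'].each do |instance|
      puts "#{instance['instanceId']}: #{instance['instanceState']['name']}"
      instance['eventsSet'].each { |event| puts "  #{event['code']} (#{event['description']})" }
    end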
@ -16,6 +16,7 @@ module Fog
|
|||
# * 'requestId'<~String> - Id of request
|
||||
# * 'securityGroupInfo'<~Array>:
|
||||
# * 'groupDescription'<~String> - Description of security group
|
||||
# * 'groupId'<~String> - ID of the security group.
|
||||
# * 'groupName'<~String> - Name of security group
|
||||
# * 'ipPermissions'<~Array>:
|
||||
# * 'fromPort'<~Integer> - Start of port range (or -1 for ICMP wildcard)
|
||||
|
|
@ -59,6 +60,7 @@ module Fog
|
|||
aliases = {
|
||||
'description' => 'groupDescription',
|
||||
'group-name' => 'groupName',
|
||||
'group-id' => 'groupId',
|
||||
'owner-id' => 'ownerId'
|
||||
}
|
||||
permission_aliases = {
|
||||
|
|
|
|||
|
|
@ -37,6 +37,8 @@ module Fog
|
|||
|
||||
load_balancer['Policies']['AppCookieStickinessPolicies'] << { 'CookieName' => cookie_name, 'PolicyName' => policy_name }
|
||||
|
||||
create_load_balancer_policy(lb_name, policy_name, 'AppCookieStickinessPolicyType', {'CookieName' => cookie_name})
|
||||
|
||||
response.body = {
|
||||
'ResponseMetadata' => {
|
||||
'RequestId' => Fog::AWS::Mock.request_id
|
||||
|
|
|
|||
|
|
@ -37,7 +37,9 @@ module Fog
|
|||
response = Excon::Response.new
|
||||
response.status = 200
|
||||
|
||||
load_balancer['Policies']['LBCookieStickinessPolicies'] << { 'PolicyName' => policy_name, 'CookieExpirationPeriod' => cookie_expiration_period }
|
||||
load_balancer['Policies']['LBCookieStickinessPolicies'] << { 'CookieExpirationPeriod' => cookie_expiration_period, 'PolicyName' => policy_name }
|
||||
|
||||
create_load_balancer_policy(lb_name, policy_name, 'LBCookieStickinessPolicyType', {'CookieExpirationPeriod' => cookie_expiration_period})
|
||||
|
||||
response.body = {
|
||||
'ResponseMetadata' => {
|
||||
|
|
|
|||
|
|
@ -83,8 +83,9 @@ module Fog
|
|||
'ListenerDescriptions' => listeners,
|
||||
'LoadBalancerName' => lb_name,
|
||||
'Policies' => {
|
||||
'AppCookieStickinessPolicies' => [],
|
||||
'LBCookieStickinessPolicies' => [],
|
||||
'AppCookieStickinessPolicies' => []
|
||||
'Proper' => []
|
||||
},
|
||||
'SourceSecurityGroup' => {
|
||||
'GroupName' => '',
|
||||
|
|
|
|||
79
lib/fog/aws/requests/elb/create_load_balancer_policy.rb
Normal file
79
lib/fog/aws/requests/elb/create_load_balancer_policy.rb
Normal file
|
|
@ -0,0 +1,79 @@
|
|||
module Fog
|
||||
module AWS
|
||||
class ELB
|
||||
class Real
|
||||
|
||||
require 'fog/aws/parsers/elb/empty'
|
||||
|
||||
# Create Elastic Load Balancer Policy
|
||||
#
|
||||
# ==== Parameters
|
||||
# * lb_name<~String> - The name associated with the LoadBalancer for which the policy is being created. This name must be unique within the client AWS account.
|
||||
# * attributes<~Hash> - A list of attributes associated with the policy being created.
|
||||
# * 'AttributeName'<~String> - The name of the attribute associated with the policy.
|
||||
# * 'AttributeValue'<~String> - The value of the attribute associated with the policy.
|
||||
# * name<~String> - The name of the LoadBalancer policy being created. The name must be unique within the set of policies for this LoadBalancer.
|
||||
# * type_name<~String> - The name of the base policy type being used to create this policy. To get the list of policy types, use the DescribeLoadBalancerPolicyTypes action.
|
||||
# ==== Returns
|
||||
# * response<~Excon::Response>:
|
||||
# * body<~Hash>:
|
||||
# * 'ResponseMetadata'<~Hash>:
|
||||
# * 'RequestId'<~String> - Id of request
|
||||
def create_load_balancer_policy(lb_name, name, type_name, attributes = {})
|
||||
params = {}
|
||||
|
||||
attribute_name = []
|
||||
attribute_value = []
|
||||
attributes.each do |name, value|
|
||||
attribute_name.push(name)
|
||||
attribute_value.push(value)
|
||||
end
|
||||
|
||||
params.merge!(Fog::AWS.indexed_param('PolicyAttributes.member.%d.AttributeName', attribute_name))
|
||||
params.merge!(Fog::AWS.indexed_param('PolicyAttributes.member.%d.AttributeValue', attribute_value))
|
||||
|
||||
request({
|
||||
'Action' => 'CreateLoadBalancerPolicy',
|
||||
'LoadBalancerName' => lb_name,
|
||||
'PolicyName' => name,
|
||||
'PolicyTypeName' => type_name,
|
||||
:parser => Fog::Parsers::AWS::ELB::Empty.new
|
||||
}.merge!(params))
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
class Mock
|
||||
def create_load_balancer_policy(lb_name, name, type_name, attributes = {})
|
||||
if load_balancer = self.data[:load_balancers][lb_name]
|
||||
raise Fog::AWS::IAM::DuplicatePolicyName if policy = load_balancer['Policies']['Proper'].find { |p| p['PolicyName'] == name }
|
||||
raise Fog::AWS::IAM::PolicyTypeNotFound unless policy_type = self.data[:policy_types].find { |pt| pt['PolicyTypeName'] == type_name }
|
||||
|
||||
response = Excon::Response.new
|
||||
|
||||
attributes = attributes.map do |key, value|
|
||||
{"AttributeName" => key, "AttributeValue" => value.to_s}
|
||||
end
|
||||
|
||||
load_balancer['Policies']['Proper'] << {
|
||||
'PolicyAttributeDescriptions' => attributes,
|
||||
'PolicyName' => name,
|
||||
'PolicyTypeName' => type_name
|
||||
}
|
||||
|
||||
response.status = 200
|
||||
response.body = {
|
||||
'ResponseMetadata' => {
|
||||
'RequestId' => Fog::AWS::Mock.request_id
|
||||
}
|
||||
}
|
||||
|
||||
response
|
||||
else
|
||||
raise Fog::AWS::ELB::NotFound
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
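A minimal usage sketch for create_load_balancer_policy (not part of the commit); the balancer name, policy name and cookie name are placeholders. The policy type mirrors the one the app-cookie stickiness mock above delegates to:

    elb = Fog::AWS::ELB.new(
      :aws_access_key_id     => 'KEY',
      :aws_secret_access_key => 'SECRET'
    )

    elb.create_load_balancer_policy(
      'my-balancer',
      'my-cookie-policy',
      'AppCookieStickinessPolicyType',
      'CookieName' => 'session-id'
    )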
|
@ -34,7 +34,7 @@ module Fog
|
|||
response.status = 200
|
||||
|
||||
load_balancer['Policies'].each do |name, policies|
|
||||
policies.delete_if { |p| p['PolicyName'] == policy_name }
|
||||
policies.delete_if { |policy| policy['PolicyName'] == policy_name }
|
||||
end
|
||||
|
||||
response.body = {
|
||||
|
|
|
|||
71
lib/fog/aws/requests/elb/describe_load_balancer_policies.rb
Normal file
71
lib/fog/aws/requests/elb/describe_load_balancer_policies.rb
Normal file
|
|
@ -0,0 +1,71 @@
|
|||
module Fog
|
||||
module AWS
|
||||
class ELB
|
||||
class Real
|
||||
|
||||
require 'fog/aws/parsers/elb/describe_load_balancer_policies'
|
||||
|
||||
# Describe all or specified load balancer policies
|
||||
#
|
||||
# ==== Parameters
|
||||
# * lb_name<~String> - The mnemonic name associated with the LoadBalancer. If no name is specified, the operation returns the attributes of either all the sample policies pre-defined by Elastic Load Balancing or the specified sample policies.
|
||||
# * names<~Array> - The names of LoadBalancer policies you've created or Elastic Load Balancing sample policy names.
|
||||
#
|
||||
# ==== Returns
|
||||
# * response<~Excon::Response>:
|
||||
# * body<~Hash>:
|
||||
# * 'ResponseMetadata'<~Hash>:
|
||||
# * 'RequestId'<~String> - Id of request
|
||||
# * 'DescribeLoadBalancerPoliciesResult'<~Hash>:
|
||||
# * 'PolicyDescriptions'<~Array>
|
||||
# * 'PolicyAttributeDescriptions'<~Array>
|
||||
# * 'AttributeName'<~String> - The name of the attribute associated with the policy.
|
||||
# * 'AttributeValue'<~String> - The value of the attribute associated with the policy.
|
||||
# * 'PolicyName'<~String> - The name mof the policy associated with the LoadBalancer.
|
||||
# * 'PolicyTypeName'<~String> - The name of the policy type.
|
||||
def describe_load_balancer_policies(lb_name = nil, names = [])
|
||||
params = Fog::AWS.indexed_param('PolicyNames.member', [*names])
|
||||
request({
|
||||
'Action' => 'DescribeLoadBalancerPolicies',
|
||||
'LoadBalancerName' => lb_name,
|
||||
:parser => Fog::Parsers::AWS::ELB::DescribeLoadBalancerPolicies.new
|
||||
}.merge!(params))
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
class Mock
|
||||
def describe_load_balancer_policies(lb_name = nil, names = [])
|
||||
if lb_name
|
||||
raise Fog::AWS::ELB::NotFound unless load_balancer = self.data[:load_balancers][lb_name]
|
||||
names = [*names]
|
||||
policies = if names.any?
|
||||
names.map do |name|
|
||||
raise Fog::AWS::ELB::PolicyNotFound unless policy = load_balancer['Policies']['Proper'].find { |p| p['PolicyName'] == name }
|
||||
policy.dup
|
||||
end.compact
|
||||
else
|
||||
load_balancer['Policies']['Proper']
|
||||
end
|
||||
else
|
||||
policies = []
|
||||
end
|
||||
|
||||
response = Excon::Response.new
|
||||
response.status = 200
|
||||
|
||||
response.body = {
|
||||
'ResponseMetadata' => {
|
||||
'RequestId' => Fog::AWS::Mock.request_id
|
||||
},
|
||||
'DescribeLoadBalancerPoliciesResult' => {
|
||||
'PolicyDescriptions' => policies
|
||||
}
|
||||
}
|
||||
|
||||
response
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
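A short sketch of reading the policies back via the new request (not part of the commit), reusing the elb connection from the earlier sketch; the balancer name is a placeholder:

    result = elb.describe_load_balancer_policies('my-balancer')
    result.body['DescribeLoadBalancerPoliciesResult']['PolicyDescriptions'].each do |policy|
      puts "#{policy['PolicyName']} (#{policy['PolicyTypeName']})"
    end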
|
@ -0,0 +1,68 @@
|
|||
module Fog
|
||||
module AWS
|
||||
class ELB
|
||||
class Real
|
||||
|
||||
require 'fog/aws/parsers/elb/describe_load_balancer_policy_types'
|
||||
|
||||
# Describe all or specified load balancer policy types
|
||||
#
|
||||
# ==== Parameters
|
||||
# * type_name<~Array> - Specifies the name of the policy types. If no names are specified, returns the description of all the policy types defined by Elastic Load Balancing service.
|
||||
#
|
||||
# ==== Returns
|
||||
# * response<~Excon::Response>:
|
||||
# * body<~Hash>:
|
||||
# * 'ResponseMetadata'<~Hash>:
|
||||
# * 'RequestId'<~String> - Id of request
|
||||
# * 'DescribeLoadBalancerPolicyTypesResult'<~Hash>:
|
||||
# * 'PolicyTypeDescriptions'<~Array>
|
||||
# * 'Description'<~String> - A human-readable description of the policy type.
|
||||
# * 'PolicyAttributeTypeDescriptions'<~Array>
|
||||
# * 'AttributeName'<~String> - The name of the attribute associated with the policy type.
|
||||
# * 'AttributeType'<~String> - The type of the attribute. For example, Boolean, Integer, etc.
|
||||
# * 'Cardinality'<~String> - The cardinality of the attribute.
|
||||
# * 'DefaultValue'<~String> - The default value of the attribute, if applicable.
|
||||
# * 'Description'<~String> - A human-readable description of the attribute.
|
||||
# * 'PolicyTypeName'<~String> - The name of the policy type.
|
||||
def describe_load_balancer_policy_types(type_names = [])
|
||||
params = Fog::AWS.indexed_param('PolicyTypeNames.member', [*type_names])
|
||||
request({
|
||||
'Action' => 'DescribeLoadBalancerPolicyTypes',
|
||||
:parser => Fog::Parsers::AWS::ELB::DescribeLoadBalancerPolicyTypes.new
|
||||
}.merge!(params))
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
class Mock
|
||||
def describe_load_balancer_policy_types(type_names = [])
|
||||
type_names = [*type_names]
|
||||
policy_types = if type_names.any?
|
||||
type_names.map do |type_name|
|
||||
policy_type = self.data[:policy_types].find { |pt| pt['PolicyTypeName'] == type_name }
|
||||
raise Fog::AWS::ELB::PolicyTypeNotFound unless policy_type
|
||||
policy_type[1].dup
|
||||
end.compact
|
||||
else
|
||||
self.data[:policy_types].map { |policy_type| policy_type.dup }
|
||||
end
|
||||
|
||||
response = Excon::Response.new
|
||||
response.status = 200
|
||||
|
||||
response.body = {
|
||||
'ResponseMetadata' => {
|
||||
'RequestId' => Fog::AWS::Mock.request_id
|
||||
},
|
||||
'DescribeLoadBalancerPolicyTypesResult' => {
|
||||
'PolicyTypeDescriptions' => policy_types
|
||||
}
|
||||
}
|
||||
|
||||
response
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
@ -73,7 +73,11 @@ module Fog
|
|||
'RequestId' => Fog::AWS::Mock.request_id
|
||||
},
|
||||
'DescribeLoadBalancersResult' => {
|
||||
'LoadBalancerDescriptions' => load_balancers.map { |lb| lb['Instances'] = lb['Instances'].map { |i| i['InstanceId'] }; lb }
|
||||
'LoadBalancerDescriptions' => load_balancers.map do |lb|
|
||||
lb['Instances'] = lb['Instances'].map { |i| i['InstanceId'] }
|
||||
lb['Policies'] = lb['Policies'].reject { |name, policies| name == 'Proper' }
|
||||
lb
|
||||
end
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -34,6 +34,33 @@ module Fog
|
|||
end
|
||||
|
||||
class Mock
|
||||
def set_load_balancer_listener_ssl_certificate(lb_name, load_balancer_port, ssl_certificate_id)
|
||||
raise Fog::AWS::ELB::NotFound unless load_balancer = self.data[:load_balancers][lb_name]
|
||||
|
||||
certificate_ids = Fog::AWS::IAM::Mock.data[@aws_access_key_id][:server_certificates].map {|n, c| c['Arn'] }
|
||||
if !certificate_ids.include? ssl_certificate_id
|
||||
raise Fog::AWS::IAM::NotFound.new('CertificateNotFound')
|
||||
end
|
||||
|
||||
response = Excon::Response.new
|
||||
|
||||
unless listener = load_balancer['ListenerDescriptions'].find { |listener| listener['Listener']['LoadBalancerPort'] == load_balancer_port }
|
||||
response.status = 400
|
||||
response.body = "<?xml version=\"1.0\"?><Response><Errors><Error><Code>ListenerNotFound</Code><Message>LoadBalancer does not have a listnener configured at the given port.</Message></Error></Errors><RequestID>#{Fog::AWS::Mock.request_id}</RequestId></Response>"
|
||||
raise Excon::Errors.status_error({:expects => 200}, response)
|
||||
end
|
||||
|
||||
listener['Listener']['SSLCertificateId'] = ssl_certificate_id
|
||||
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata" => {
|
||||
"RequestId" => Fog::AWS::Mock.request_id
|
||||
}
|
||||
}
|
||||
|
||||
response
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
|
|||
|
|
@ -57,7 +57,7 @@ module Fog
|
|||
raise Excon::Errors.status_error({:expects => 200}, response)
|
||||
end
|
||||
|
||||
unless load_balancer['Policies'].find { |name, policies| policies.find { |policy| policy['PolicyName'] == policy_names.first } }
|
||||
unless load_balancer['Policies']['Proper'].find { |policy| policy['PolicyName'] == policy_names.first }
|
||||
response.status = 400
|
||||
response.body = "<?xml version=\"1.0\"?><Response><Errors><Error><Code>PolicyNotFound</Code><Message>One or more specified policies were not found.</Message></Error></Errors><RequestID>#{Fog::AWS::Mock.request_id}</RequestId></Response>"
|
||||
raise Excon::Errors.status_error({:expects => 200}, response)
|
||||
|
|
|
|||
|
|
@ -77,13 +77,15 @@ module Fog
|
|||
'Args' => ['s3://us-east-1.elasticmapreduce/libs/hive/hive-script', '--base-path', 's3://us-east-1.elasticmapreduce/libs/hive/', '--install-hive']},
|
||||
'ActionOnFailure' => 'TERMINATE_JOB_FLOW'
|
||||
}
|
||||
steps << {
|
||||
'Name' => 'Install Hive Site Configuration',
|
||||
'HadoopJarStep' => {
|
||||
'Jar' => 's3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar',
|
||||
'Args' => ['s3://us-east-1.elasticmapreduce/libs/hive/hive-script', '--base-path', 's3://us-east-1.elasticmapreduce/libs/hive/', '--install-hive-site', '--hive-site=s3://raybeam.okl/prod/hive/hive-site.xml']},
|
||||
'ActionOnFailure' => 'TERMINATE_JOB_FLOW'
|
||||
}
|
||||
|
||||
# To add a configuration step to the Hive flow, see the step below
|
||||
# steps << {
|
||||
# 'Name' => 'Install Hive Site Configuration',
|
||||
# 'HadoopJarStep' => {
|
||||
# 'Jar' => 's3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar',
|
||||
# 'Args' => ['s3://us-east-1.elasticmapreduce/libs/hive/hive-script', '--base-path', 's3://us-east-1.elasticmapreduce/libs/hive/', '--install-hive-site', '--hive-site=s3://my.bucket/hive/hive-site.xml']},
|
||||
# 'ActionOnFailure' => 'TERMINATE_JOB_FLOW'
|
||||
# }
|
||||
options['Steps'] = steps
|
||||
|
||||
if not options['Instances'].nil?
|
||||
|
|
|
|||
|
|
@ -71,7 +71,7 @@ module Fog
|
|||
raise Fog::AWS::IAM::EntityAlreadyExists.new
|
||||
else
|
||||
response.status = 200
|
||||
path = options['path'] || "/"
|
||||
path = options['Path'] || "/"
|
||||
data = {
|
||||
'Arn' => Fog::AWS::Mock.arn('iam', self.data[:owner_id], "server-certificate/#{name}"),
|
||||
'Path' => path,
|
||||
|
|
|
|||
|
|
@ -33,7 +33,36 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def authorize_db_security_group_ingress(name, opts = {})
|
||||
Fog::Mock.not_implemented
|
||||
unless opts.key?('CIDRIP') || (opts.key?('EC2SecurityGroupName') && opts.key?('EC2SecurityGroupOwnerId'))
|
||||
raise ArgumentError, 'Must specify CIDRIP, or both EC2SecurityGroupName and EC2SecurityGroupOwnerId'
|
||||
end
|
||||
|
||||
response = Excon::Response.new
|
||||
|
||||
if sec_group = self.data[:security_groups][name]
|
||||
if opts.key?('CIDRIP')
|
||||
if sec_group['IPRanges'].detect{|h| h['CIDRIP'] == opts['CIDRIP']}
|
||||
raise Fog::AWS::RDS::AuthorizationAlreadyExists.new("AuthorizationAlreadyExists => #{opts['CIDRIP']} is already defined")
|
||||
end
|
||||
sec_group['IPRanges'] << opts.merge({"Status" => 'authorizing'})
|
||||
else
|
||||
if sec_group['EC2SecurityGroups'].detect{|h| h['EC2SecurityGroupName'] == opts['EC2SecurityGroupName']}
|
||||
raise Fog::AWS::RDS::AuthorizationAlreadyExists.new("AuthorizationAlreadyExists => #{opts['EC2SecurityGroupName']} is already defined")
|
||||
end
|
||||
sec_group['EC2SecurityGroups'] << opts.merge({"Status" => 'authorizing'})
|
||||
end
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
'AuthorizeDBSecurityGroupIngressResult' => {
|
||||
'DBSecurityGroup' => sec_group
|
||||
}
|
||||
}
|
||||
response
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBSecurityGroupNotFound => #{name} not found")
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -47,7 +47,73 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def create_db_instance(db_name, options={})
|
||||
Fog::Mock.not_implemented
|
||||
response = Excon::Response.new
|
||||
if self.data[:servers] and self.data[:servers][db_name]
|
||||
# I don't know how to raise an exception that contains the excon data
|
||||
#response.status = 400
|
||||
#response.body = {
|
||||
# 'Code' => 'DBInstanceAlreadyExists',
|
||||
# 'Message' => "DB Instance already exists"
|
||||
#}
|
||||
#return response
|
||||
raise Fog::AWS::RDS::IdentifierTaken.new("DBInstanceAlreadyExists #{response.body.to_s}")
|
||||
end
|
||||
|
||||
# These are the required parameters according to the API
|
||||
required_params = %w{AllocatedStorage DBInstanceClass Engine MasterUserPassword MasterUsername }
|
||||
required_params.each do |key|
|
||||
unless options.has_key?(key) and options[key] and !options[key].to_s.empty?
|
||||
#response.status = 400
|
||||
#response.body = {
|
||||
# 'Code' => 'MissingParameter',
|
||||
# 'Message' => "The request must contain the parameter #{key}"
|
||||
#}
|
||||
#return response
|
||||
raise Fog::AWS::RDS::NotFound.new("The request must contain the parameter #{key}")
|
||||
end
|
||||
end
|
||||
|
||||
data =
|
||||
{
|
||||
"DBInstanceIdentifier"=> db_name,
|
||||
"DBName" => options["DBName"],
|
||||
"InstanceCreateTime" => nil,
|
||||
"AutoMinorVersionUpgrade"=>true,
|
||||
"Endpoint"=>{},
|
||||
"ReadReplicaDBInstanceIdentifiers"=>['bla'],
|
||||
"PreferredMaintenanceWindow"=>"mon:04:30-mon:05:00",
|
||||
"Engine"=> options["Engine"],
|
||||
"EngineVersion"=> options["EngineVersion"] || "5.1.57",
|
||||
"PendingModifiedValues"=>{"MasterUserPassword"=>"****"}, # This clears when is available
|
||||
"MultiAZ"=>false,
|
||||
"MasterUsername"=> options["MasterUsername"],
|
||||
"DBInstanceClass"=> options["DBInstanceClass"],
|
||||
"DBInstanceStatus"=>"creating",
|
||||
"BackupRetentionPeriod"=> options["BackupRetentionPeriod"] || 1,
|
||||
"AllocatedStorage"=> options["AllocatedStorage"],
|
||||
"DBParameterGroups"=> # I think groups should be in the self.data method
|
||||
[{"DBParameterGroupName"=>"default.mysql5.1",
|
||||
"ParameterApplyStatus"=>"in-sync"}],
|
||||
"DBSecurityGroups"=>
|
||||
[{"Status"=>"active",
|
||||
"DBSecurityGroupName"=>"default"}],
|
||||
"LicenseModel"=>"general-public-license",
|
||||
"PreferredBackupWindow"=>"08:00-08:30",
|
||||
# "ReadReplicaSourceDBInstanceIdentifier" => nil,
|
||||
# "LatestRestorableTime" => nil,
|
||||
"AvailabilityZone" => options["AvailabilityZone"]
|
||||
}
|
||||
|
||||
|
||||
self.data[:servers][db_name] = data
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
"CreateDBInstanceResult"=> {"DBInstance"=> data}
|
||||
}
|
||||
response.status = 200
|
||||
# These values aren't shown at creation time, only once the instance is available
|
||||
self.data[:servers][db_name]["InstanceCreateTime"] = Time.now
|
||||
response
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
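A sketch of exercising the new RDS mocks (not part of the commit); the credentials and names are placeholders, and the required parameters are the ones checked in the mock above:

    Fog.mock!
    rds = Fog::AWS::RDS.new(
      :aws_access_key_id     => 'KEY',
      :aws_secret_access_key => 'SECRET'
    )

    rds.create_db_instance('mydb',
      'AllocatedStorage'   => 5,
      'DBInstanceClass'    => 'db.m1.small',
      'Engine'             => 'mysql',
      'MasterUsername'     => 'admin',
      'MasterUserPassword' => 'password'
    )

    # The mock reports "creating" first and flips to "available" once
    # Fog::Mock.delay has elapsed (see the describe_db_instances mock further down).
    rds.describe_db_instances('mydb').body['DescribeDBInstancesResult']['DBInstances'].first['DBInstanceStatus']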
|||
|
|
@ -27,7 +27,25 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def create_db_security_group(name, description = name)
|
||||
Fog::Mock.not_implemented
|
||||
response = Excon::Response.new
|
||||
if self.data[:security_groups] and self.data[:security_groups][name]
|
||||
raise Fog::AWS::RDS::IdentifierTaken.new("DBInstanceAlreadyExists => The security group '#{name}' already exists")
|
||||
end
|
||||
|
||||
data = {
|
||||
'DBSecurityGroupName' => name,
|
||||
'DBSecurityGroupDescription' => description,
|
||||
'EC2SecurityGroups' => [],
|
||||
'IPRanges' => [],
|
||||
'OwnerId' => '0123456789'
|
||||
}
|
||||
self.data[:security_groups][name] = data
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
'CreateDBSecurityGroupResult' => { 'DBSecurityGroup' => data }
|
||||
}
|
||||
response
|
||||
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -15,7 +15,7 @@ module Fog
|
|||
# ==== Returns
|
||||
# * response<~Excon::Response>:
|
||||
# * body<~Hash>:
|
||||
def delete_db_instance(identifier, snapshot_identifier, skip_snapshot = false)
|
||||
def delete_db_instance(identifier, snapshot_identifier, skip_snapshot = false)
|
||||
params = {}
|
||||
params['FinalDBSnapshotIdentifier'] = snapshot_identifier if snapshot_identifier
|
||||
request({
|
||||
|
|
@ -30,8 +30,24 @@ module Fog
|
|||
|
||||
class Mock
|
||||
|
||||
def delete_db_snapshot(identifier, snapshot_identifier, skip_snapshot = false)
|
||||
Fog::Mock.not_implemented
|
||||
def delete_db_instance(identifier, snapshot_identifier, skip_snapshot = false)
|
||||
response = Excon::Response.new
|
||||
|
||||
unless skip_snapshot
|
||||
# I don't know how to mock snapshot_identifier
|
||||
Fog::Logger.warning("snapshot_identifier is not mocked [light_black](#{caller.first})[/]")
|
||||
end
|
||||
|
||||
if server_set = self.data[:servers].delete(identifier)
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
"DeleteDBInstanceResult" => { "DBInstance" => server_set }
|
||||
}
|
||||
response
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBInstance #{identifier} not found")
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -25,7 +25,17 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def delete_db_security_group(name, description = name)
|
||||
Fog::Mock.not_implemented
|
||||
response = Excon::Response.new
|
||||
|
||||
if self.data[:security_groups].delete(name)
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
}
|
||||
response
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBSecurityGroupNotFound => #{name} not found")
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -33,8 +33,63 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def describe_db_instances(identifier=nil, opts={})
|
||||
Fog::Mock.not_implemented
|
||||
response = Excon::Response.new
|
||||
server_set = []
|
||||
if identifier
|
||||
if server = self.data[:servers][identifier]
|
||||
server_set << server
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBInstance #{identifier} not found")
|
||||
end
|
||||
else
|
||||
server_set = self.data[:servers].values
|
||||
end
|
||||
|
||||
server_set.each do |server|
|
||||
case server["DBInstanceStatus"]
|
||||
when "creating"
|
||||
if Time.now - server['InstanceCreateTime'] >= Fog::Mock.delay * 2
|
||||
region = "us-east-1"
|
||||
server["DBInstanceStatus"] = "available"
|
||||
server["AvailabilityZone"] = region + 'a'
|
||||
server["Endpoint"] = {"Port"=>3306,
|
||||
"Address"=> Fog::AWS::Mock.rds_address(server["DBInstanceIdentifier"],region) }
|
||||
server["PendingModifiedValues"] = {}
|
||||
end
|
||||
when "rebooting" # I don't know how to show rebooting just once before it changes to available
|
||||
# it applies pending modified values
|
||||
if server["PendingModifiedValues"]
|
||||
server.merge!(server["PendingModifiedValues"])
|
||||
server["PendingModifiedValues"] = {}
|
||||
self.data[:tmp] ||= Time.now + Fog::Mock.delay * 2
|
||||
if self.data[:tmp] <= Time.now
|
||||
server["DBInstanceStatus"] = 'available'
|
||||
self.data.delete(:tmp)
|
||||
end
|
||||
end
|
||||
when "modifying"
|
||||
# TODO there are some fields that only applied after rebooting
|
||||
if server["PendingModifiedValues"]
|
||||
server.merge!(server["PendingModifiedValues"])
|
||||
server["PendingModifiedValues"] = {}
|
||||
server["DBInstanceStatus"] = 'available'
|
||||
end
|
||||
when "available" # I'm not sure if amazon does this
|
||||
if server["PendingModifiedValues"]
|
||||
server["DBInstanceStatus"] = 'modifying'
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
"DescribeDBInstancesResult" => { "DBInstances" => server_set }
|
||||
}
|
||||
response
|
||||
end
|
||||
|
||||
|
||||
end
|
||||
end
|
||||
|
|
|
|||
|
|
@ -32,7 +32,7 @@ module Fog
|
|||
|
||||
class Mock
|
||||
|
||||
def describe_db_instances(identifier=nil, opts={})
|
||||
def describe_db_reserved_instances(identifier=nil, opts={})
|
||||
Fog::Mock.not_implemented
|
||||
end
|
||||
|
||||
|
|
|
|||
|
|
@ -28,8 +28,50 @@ module Fog
|
|||
|
||||
class Mock
|
||||
|
||||
def describe_db_security_group(opts={})
|
||||
Fog::Mock.not_implemented
|
||||
def describe_db_security_groups(opts={})
|
||||
response = Excon::Response.new
|
||||
sec_group_set = []
|
||||
if opts.is_a?(String)
|
||||
sec_group_name = opts
|
||||
if sec_group = self.data[:security_groups][sec_group_name]
|
||||
sec_group_set << sec_group
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("Security Group #{sec_group_name} not found")
|
||||
end
|
||||
else
|
||||
sec_group_set = self.data[:security_groups].values
|
||||
end
|
||||
|
||||
sec_group_set.each do |sec_group|
|
||||
sec_group["IPRanges"].each do |iprange|
|
||||
if iprange["Status"] == "authorizing" || iprange["Status"] == "revoking"
|
||||
iprange[:tmp] ||= Time.now + Fog::Mock.delay * 2
|
||||
if iprange[:tmp] <= Time.now
|
||||
iprange["Status"] = "authorized" if iprange["Status"] == "authorizing"
|
||||
iprange.delete(:tmp)
|
||||
sec_group["IPRanges"].delete(iprange) if iprange["Status"] == "revoking"
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
sec_group["EC2SecurityGroups"].each do |ec2_secg|
|
||||
if ec2_secg["Status"] == "authorizing" || iprange["Status"] == "revoking"
|
||||
ec2_secg[:tmp] ||= Time.now + Fog::Mock.delay * 2
|
||||
if ec2_secg[:tmp] <= Time.now
|
||||
ec2_secg["Status"] = "authorized" if ec2_secg["Status"] == "authorizing"
|
||||
ec2_secg.delete(:tmp)
|
||||
sec_group["EC2SecurityGroups"].delete(ec2_secg) if ec2_secg["Status"] == "revoking"
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
"DescribeDBSecurityGroupsResult" => { "DBSecurityGroups" => sec_group_set }
|
||||
}
|
||||
response
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -45,7 +45,32 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def modify_db_instance(db_name, apply_immediately, options={})
|
||||
Fog::Mock.not_implemented
|
||||
response = Excon::Response.new
|
||||
if self.data[:servers][db_name]
|
||||
if self.data[:servers][db_name]["DBInstanceStatus"] != "available"
|
||||
raise Fog::AWS::RDS::NotFound.new("DBInstance #{db_name} not available for modification")
|
||||
else
|
||||
# TODO verify the params options
|
||||
# if apply_immediately is false, all the options go to pending_modified_values and then apply and clear after either
|
||||
# a reboot or the maintenance window
|
||||
#if apply_immediately
|
||||
# modified_server = server.merge(options)
|
||||
#else
|
||||
# modified_server = server["PendingModifiedValues"].merge!(options) # it appends
|
||||
#end
|
||||
self.data[:servers][db_name]["PendingModifiedValues"].merge!(options) # it appends
|
||||
#self.data[:servers][db_name]["DBInstanceStatus"] = "modifying"
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
"ModifyDBInstanceResult" => { "DBInstance" => self.data[:servers][db_name] }
|
||||
}
|
||||
response
|
||||
|
||||
end
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBInstance #{db_name} not found")
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -25,7 +25,23 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def reboot_db_instance(instance_identifier)
|
||||
Fog::Mock.not_implemented
|
||||
response = Excon::Response.new
|
||||
if self.data[:servers][instance_identifier]
|
||||
if self.data[:servers][instance_identifier]["DBInstanceStatus"] != "available"
|
||||
raise Fog::AWS::RDS::NotFound.new("DBInstance #{instance_identifier} not available for rebooting")
|
||||
else
|
||||
self.data[:servers][instance_identifier]["DBInstanceStatus"] = 'rebooting'
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
"RebootDBInstanceResult" => { "DBInstance" => self.data[:servers][instance_identifier] }
|
||||
}
|
||||
response
|
||||
|
||||
end
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBInstance #{instance_identifier} not found")
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -33,7 +33,33 @@ module Fog
|
|||
class Mock
|
||||
|
||||
def revoke_db_security_group_ingress(name, opts = {})
|
||||
Fog::Mock.not_implemented
|
||||
unless opts.key?('CIDRIP') || (opts.key?('EC2SecurityGroupName') && opts.key?('EC2SecurityGroupOwnerId'))
|
||||
raise ArgumentError, 'Must specify CIDRIP, or both EC2SecurityGroupName and EC2SecurityGroupOwnerId'
|
||||
end
|
||||
|
||||
response = Excon::Response.new
|
||||
|
||||
if sec_group = self.data[:security_groups][name]
|
||||
if opts.key?('CIDRIP')
|
||||
sec_group['IPRanges'].each do |iprange|
|
||||
iprange['Status']= 'revoking' if iprange['CIDRIP'] == opts['CIDRIP']
|
||||
end
|
||||
else
|
||||
sec_group['EC2SecurityGroups'].each do |ec2_secg|
|
||||
ec2_secg['Status']= 'revoking' if ec2_secg['EC2SecurityGroupName'] == opts['EC2SecurityGroupName']
|
||||
end
|
||||
end
|
||||
response.status = 200
|
||||
response.body = {
|
||||
"ResponseMetadata"=>{ "RequestId"=> Fog::AWS::Mock.request_id },
|
||||
'RevokeDBSecurityGroupIngressResult' => {
|
||||
'DBSecurityGroup' => sec_group
|
||||
}
|
||||
}
|
||||
response
|
||||
else
|
||||
raise Fog::AWS::RDS::NotFound.new("DBSecurityGroupNotFound => #{name} not found")
|
||||
end
|
||||
end
|
||||
|
||||
end
|
||||
|
|
|
|||
|
|
@ -37,37 +37,36 @@ module Fog
|
|||
max_number_of_messages = options['MaxNumberOfMessages'] || 1
|
||||
now = Time.now
|
||||
|
||||
keys = queue[:messages].keys[0, max_number_of_messages]
|
||||
|
||||
messages = queue[:messages].values_at(*keys).map do |m|
|
||||
messages = []
|
||||
|
||||
queue[:messages].values.each do |m|
|
||||
message_id = m['MessageId']
|
||||
|
||||
|
||||
invisible = if (received_handles = queue[:receipt_handles][message_id])
|
||||
visibility_timeout = m['Attributes']['VisibilityTimeout'] || queue['Attributes']['VisibilityTimeout']
|
||||
received_handles.any? { |handle, time| now < time + visibility_timeout }
|
||||
else
|
||||
false
|
||||
end
|
||||
|
||||
if invisible
|
||||
nil
|
||||
else
|
||||
|
||||
unless invisible
|
||||
receipt_handle = Fog::Mock.random_base64(300)
|
||||
|
||||
|
||||
queue[:receipt_handles][message_id] ||= {}
|
||||
queue[:receipt_handles][message_id][receipt_handle] = now
|
||||
|
||||
|
||||
m['Attributes'].tap do |attrs|
|
||||
attrs['ApproximateFirstReceiveTimestamp'] ||= now
|
||||
attrs['ApproximateReceiveCount'] = (attrs['ApproximateReceiveCount'] || 0) + 1
|
||||
end
|
||||
|
||||
m.merge({
|
||||
|
||||
messages << m.merge({
|
||||
'ReceiptHandle' => receipt_handle
|
||||
})
|
||||
break if messages.size >= max_number_of_messages
|
||||
end
|
||||
end.compact
|
||||
|
||||
end
|
||||
|
||||
response.body = {
|
||||
'ResponseMetadata' => {
|
||||
'RequestId' => Fog::AWS::Mock.request_id
|
||||
|
|
|
|||
|
|
@ -2,6 +2,8 @@ module Fog
|
|||
module Storage
|
||||
class AWS
|
||||
|
||||
require 'fog/aws/parsers/storage/access_control_list'
|
||||
|
||||
private
|
||||
def self.hash_to_acl(acl)
|
||||
data = "<AccessControlPolicy>\n"
|
||||
|
|
@ -49,6 +51,12 @@ module Fog
|
|||
data
|
||||
end
|
||||
|
||||
def self.acl_to_hash(acl_xml)
|
||||
parser = Fog::Parsers::Storage::AWS::AccessControlList.new
|
||||
Nokogiri::XML::SAX::Parser.new(parser).parse(acl_xml)
|
||||
parser.response
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
@ -51,6 +51,14 @@ module Fog
|
|||
source_object = source_bucket && source_bucket[:objects][source_object_name]
|
||||
target_bucket = self.data[:buckets][target_bucket_name]
|
||||
|
||||
acl = options['x-amz-acl'] || 'private'
|
||||
if !['private', 'public-read', 'public-read-write', 'authenticated-read'].include?(acl)
|
||||
raise Excon::Errors::BadRequest.new('invalid x-amz-acl')
|
||||
else
|
||||
self.data[:acls][:object][target_bucket_name] ||= {}
|
||||
self.data[:acls][:object][target_bucket_name][target_object_name] = self.class.acls(acl)
|
||||
end
|
||||
|
||||
if source_object && target_bucket
|
||||
response.status = 200
|
||||
target_object = source_object.dup
|
||||
|
|
|
|||
|
|
@ -48,11 +48,17 @@ module Fog
|
|||
|
||||
class Mock # :nodoc:all
|
||||
|
||||
require 'fog/aws/requests/storage/acl_utils'
|
||||
|
||||
def get_bucket_acl(bucket_name)
|
||||
response = Excon::Response.new
|
||||
if acl = self.data[:acls][:bucket][bucket_name]
|
||||
response.status = 200
|
||||
response.body = acl
|
||||
if acl.is_a?(String)
|
||||
response.body = Fog::Storage::AWS.acl_to_hash(acl)
|
||||
else
|
||||
response.body = acl
|
||||
end
|
||||
else
|
||||
response.status = 404
|
||||
raise(Excon::Errors.status_error({:expects => 200}, response))
|
||||
|
|
|
|||
|
|
@ -59,11 +59,17 @@ module Fog
|
|||
|
||||
class Mock # :nodoc:all
|
||||
|
||||
require 'fog/aws/requests/storage/acl_utils'
|
||||
|
||||
def get_object_acl(bucket_name, object_name, options = {})
|
||||
response = Excon::Response.new
|
||||
if acl = self.data[:acls][:object][bucket_name] && self.data[:acls][:object][bucket_name][object_name]
|
||||
response.status = 200
|
||||
response.body = acl
|
||||
if acl.is_a?(String)
|
||||
response.body = Fog::Storage::AWS.acl_to_hash(acl)
|
||||
else
|
||||
response.body = acl
|
||||
end
|
||||
else
|
||||
response.status = 404
|
||||
raise(Excon::Errors.status_error({:expects => 200}, response))
|
||||
|
|
|
|||
|
|
@ -4,7 +4,7 @@ module Fog
|
|||
|
||||
module GetObjectHttpUrl
|
||||
|
||||
def get_object_http_url(bucket_name, object_name, expires)
|
||||
def get_object_http_url(bucket_name, object_name, expires, options = {})
|
||||
unless bucket_name
|
||||
raise ArgumentError.new('bucket_name is required')
|
||||
end
|
||||
|
|
@ -15,7 +15,8 @@ module Fog
|
|||
:headers => {},
|
||||
:host => @host,
|
||||
:method => 'GET',
|
||||
:path => "#{bucket_name}/#{object_name}"
|
||||
:path => "#{bucket_name}/#{object_name}",
|
||||
:query => options[:query]
|
||||
}, expires)
|
||||
end
|
||||
|
||||
|
|
@ -48,4 +49,4 @@ module Fog
|
|||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
|
|||
|
|
@ -4,7 +4,7 @@ module Fog
|
|||
|
||||
module GetObjectHttpsUrl
|
||||
|
||||
def get_object_https_url(bucket_name, object_name, expires)
|
||||
def get_object_https_url(bucket_name, object_name, expires, options = {})
|
||||
unless bucket_name
|
||||
raise ArgumentError.new('bucket_name is required')
|
||||
end
|
||||
|
|
@ -15,7 +15,8 @@ module Fog
|
|||
:headers => {},
|
||||
:host => @host,
|
||||
:method => 'GET',
|
||||
:path => "#{bucket_name}/#{object_name}"
|
||||
:path => "#{bucket_name}/#{object_name}",
|
||||
:query => options[:query]
|
||||
}, expires)
|
||||
end
|
||||
|
||||
|
|
@ -48,4 +49,4 @@ module Fog
|
|||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
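The new options hash ends up as the query string of the signed URL (see :query => options[:query] above). A small sketch, not part of the commit; directory is assumed to be an existing Fog::Storage AWS directory, and the response override below is only an illustration — whether a given override is honoured depends on S3's signing rules:

    file = directory.files.get('report.pdf')
    file.url(Time.now + 60,
             :query => { 'response-content-disposition' => 'attachment; filename=report.pdf' })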
|
|||
|
|
@ -3,8 +3,8 @@ module Fog
|
|||
class AWS
|
||||
class Real
|
||||
|
||||
require 'fog/aws/requests/storage/hash_to_acl'
|
||||
|
||||
require 'fog/aws/requests/storage/acl_utils'
|
||||
|
||||
# Change access control list for an S3 bucket
|
||||
#
|
||||
# ==== Parameters
|
||||
|
|
|
|||
|
|
@ -44,6 +44,23 @@ DATA
|
|||
end
|
||||
|
||||
end
|
||||
|
||||
class Mock # :nodoc:all
|
||||
|
||||
def put_bucket_website(bucket_name, suffix, options = {})
|
||||
response = Excon::Response.new
|
||||
if self.data[:buckets][bucket_name]
|
||||
response.status = 200
|
||||
else
|
||||
response.status = 403
|
||||
raise(Excon::Errors.status_error({:expects => 200}, response))
|
||||
end
|
||||
|
||||
response
|
||||
end
|
||||
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ module Fog
|
|||
class AWS
|
||||
class Real
|
||||
|
||||
require 'fog/aws/requests/storage/hash_to_acl'
|
||||
require 'fog/aws/requests/storage/acl_utils'
|
||||
|
||||
# Change access control list for an S3 object
|
||||
#
|
||||
|
|
|
|||
20  lib/fog/aws/requests/sts/get_federation_token.rb  Normal file
@@ -0,0 +1,20 @@
module Fog
  module AWS
    class STS
      class Real

        require 'fog/aws/parsers/sts/get_session_token'

        def get_federation_token(name, policy, duration=43200)
          request({
            'Action' => 'GetFederationToken',
            'Name' => name,
            'Policy' => MultiJson.encode(policy),
            'DurationSeconds' => duration,
            :parser => Fog::Parsers::AWS::STS::GetSessionToken.new
          })
        end
      end
    end
  end
end
18  lib/fog/aws/requests/sts/get_session_token.rb  Normal file
@@ -0,0 +1,18 @@
module Fog
  module AWS
    class STS
      class Real

        require 'fog/aws/parsers/sts/get_session_token'

        def get_session_token(duration=43200)
          request({
            'Action' => 'GetSessionToken',
            'DurationSeconds' => duration,
            :parser => Fog::Parsers::AWS::STS::GetSessionToken.new
          })
        end
      end
    end
  end
end
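A minimal usage sketch for the new STS requests (not part of the commit); the credentials are placeholders. The temporary credentials can be handed to the services that now recognize :aws_session_token, such as SimpleDB and SQS (extended further down in this commit):

    sts = Fog::AWS::STS.new(
      :aws_access_key_id     => 'KEY',
      :aws_secret_access_key => 'SECRET'
    )

    session = sts.get_session_token(3600).body
    # session => { 'AccessKeyId' => ..., 'SecretAccessKey' => ..., 'SessionToken' => ..., 'Expiration' => ... }

    sdb = Fog::AWS::SimpleDB.new(
      :aws_access_key_id     => session['AccessKeyId'],
      :aws_secret_access_key => session['SecretAccessKey'],
      :aws_session_token     => session['SessionToken']
    )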
|
|
@ -5,7 +5,7 @@ module Fog
|
|||
class SimpleDB < Fog::Service
|
||||
|
||||
requires :aws_access_key_id, :aws_secret_access_key
|
||||
recognizes :host, :nil_string, :path, :port, :scheme, :persistent, :region
|
||||
recognizes :host, :nil_string, :path, :port, :scheme, :persistent, :region, :aws_session_token
|
||||
|
||||
request_path 'fog/aws/requests/simpledb'
|
||||
request :batch_put_attributes
|
||||
|
|
@ -70,6 +70,7 @@ module Fog
|
|||
|
||||
@aws_access_key_id = options[:aws_access_key_id]
|
||||
@aws_secret_access_key = options[:aws_secret_access_key]
|
||||
@aws_session_token = options[:aws_session_token]
|
||||
@connection_options = options[:connection_options] || {}
|
||||
@hmac = Fog::HMAC.new('sha256', @aws_secret_access_key)
|
||||
@nil_string = options[:nil_string]|| 'nil'
|
||||
|
|
@ -88,6 +89,8 @@ module Fog
|
|||
'sdb.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'sdb.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'sdb.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -165,6 +168,7 @@ module Fog
|
|||
params,
|
||||
{
|
||||
:aws_access_key_id => @aws_access_key_id,
|
||||
:aws_session_token => @aws_session_token,
|
||||
:hmac => @hmac,
|
||||
:host => @host,
|
||||
:path => @path,
|
||||
|
|
|
|||
|
|
@ -67,6 +67,8 @@ module Fog
|
|||
'sns.us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'sns.us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'sns.sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
|
|||
|
|
@ -5,7 +5,7 @@ module Fog
|
|||
class SQS < Fog::Service
|
||||
|
||||
requires :aws_access_key_id, :aws_secret_access_key
|
||||
recognizes :region, :host, :path, :port, :scheme, :persistent
|
||||
recognizes :region, :host, :path, :port, :scheme, :persistent, :aws_session_token
|
||||
|
||||
request_path 'fog/aws/requests/sqs'
|
||||
request :change_message_visibility
|
||||
|
|
@ -78,6 +78,7 @@ module Fog
|
|||
def initialize(options={})
|
||||
@aws_access_key_id = options[:aws_access_key_id]
|
||||
@aws_secret_access_key = options[:aws_secret_access_key]
|
||||
@aws_session_token = options[:aws_session_token]
|
||||
@connection_options = options[:connection_options] || {}
|
||||
@hmac = Fog::HMAC.new('sha256', @aws_secret_access_key)
|
||||
options[:region] ||= 'us-east-1'
|
||||
|
|
@ -92,6 +93,8 @@ module Fog
|
|||
'us-west-1.queue.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
'us-west-2.queue.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
'sa-east-1.queue.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -121,6 +124,7 @@ module Fog
|
|||
params,
|
||||
{
|
||||
:aws_access_key_id => @aws_access_key_id,
|
||||
:aws_session_token => @aws_session_token,
|
||||
:hmac => @hmac,
|
||||
:host => @host,
|
||||
:path => path || @path,
|
||||
|
|
|
|||
|
|
@ -197,10 +197,14 @@ module Fog
|
|||
's3-eu-west-1.amazonaws.com'
|
||||
when 'us-east-1'
|
||||
's3.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
's3-sa-east-1.amazonaws.com'
|
||||
when 'us-west-1'
|
||||
's3-us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
's3-us-west-2.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
's3-sa-east-1.amazonaws.com'
|
||||
else
|
||||
raise ArgumentError, "Unknown region: #{options[:region].inspect}"
|
||||
end
|
||||
|
|
@ -271,6 +275,8 @@ module Fog
|
|||
's3-eu-west-1.amazonaws.com'
|
||||
when 'us-east-1'
|
||||
's3.amazonaws.com'
|
||||
when 'sa-east-1'
|
||||
's3-sa-east-1.amazonaws.com'
|
||||
when 'us-west-1'
|
||||
's3-us-west-1.amazonaws.com'
|
||||
when 'us-west-2'
|
||||
|
|
|
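With the sa-east-1 endpoints added above, the new South America region can be selected like any other (sketch, not part of the commit; credentials are placeholders):

    storage = Fog::Storage.new(
      :provider              => 'AWS',
      :aws_access_key_id     => 'KEY',
      :aws_secret_access_key => 'SECRET',
      :region                => 'sa-east-1'
    )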
|||
137
lib/fog/aws/sts.rb
Normal file
137
lib/fog/aws/sts.rb
Normal file
|
|
@ -0,0 +1,137 @@
|
|||
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'aws'))
|
||||
|
||||
module Fog
|
||||
module AWS
|
||||
class STS < Fog::Service
|
||||
|
||||
class EntityAlreadyExists < Fog::AWS::STS::Error; end
|
||||
class ValidationError < Fog::AWS::STS::Error; end
|
||||
|
||||
requires :aws_access_key_id, :aws_secret_access_key
|
||||
recognizes :host, :path, :port, :scheme, :persistent
|
||||
|
||||
request_path 'fog/aws/requests/sts'
|
||||
request :get_federation_token
|
||||
request :get_session_token
|
||||
|
||||
class Mock
|
||||
def self.data
|
||||
@data ||= Hash.new do |hash, key|
|
||||
hash[key] = {
|
||||
:owner_id => Fog::AWS::Mock.owner_id,
|
||||
:server_certificates => {}
|
||||
}
|
||||
end
|
||||
end
|
||||
|
||||
def self.reset
|
||||
@data = nil
|
||||
end
|
||||
|
||||
def self.server_certificate_id
|
||||
Fog::Mock.random_hex(16)
|
||||
end
|
||||
|
||||
def initialize(options={})
|
||||
@aws_access_key_id = options[:aws_access_key_id]
|
||||
end
|
||||
|
||||
def data
|
||||
self.class.data[@aws_access_key_id]
|
||||
end
|
||||
|
||||
def reset_data
|
||||
self.class.data.delete(@aws_access_key_id)
|
||||
end
|
||||
end
|
||||
|
||||
class Real
|
||||
|
||||
# Initialize connection to STS
|
||||
#
|
||||
# ==== Notes
|
||||
# options parameter must include values for :aws_access_key_id and
|
||||
# :aws_secret_access_key in order to create a connection
|
||||
#
|
||||
# ==== Examples
|
||||
# iam = STS.new(
|
||||
# :aws_access_key_id => your_aws_access_key_id,
|
||||
# :aws_secret_access_key => your_aws_secret_access_key
|
||||
# )
|
||||
#
|
||||
# ==== Parameters
|
||||
# * options<~Hash> - config arguments for connection. Defaults to {}.
|
||||
#
|
||||
# ==== Returns
|
||||
# * STS object with connection to AWS.
|
||||
def initialize(options={})
|
||||
require 'fog/core/parser'
|
||||
require 'multi_json'
|
||||
|
||||
@aws_access_key_id = options[:aws_access_key_id]
|
||||
@aws_secret_access_key = options[:aws_secret_access_key]
|
||||
@connection_options = options[:connection_options] || {}
|
||||
@hmac = Fog::HMAC.new('sha256', @aws_secret_access_key)
|
||||
@host = options[:host] || 'sts.amazonaws.com'
|
||||
@path = options[:path] || '/'
|
||||
@persistent = options[:persistent] || false
|
||||
@port = options[:port] || 443
|
||||
@scheme = options[:scheme] || 'https'
|
||||
@connection = Fog::Connection.new("#{@scheme}://#{@host}:#{@port}#{@path}", @persistent, @connection_options)
|
||||
end
|
||||
|
||||
def reload
|
||||
@connection.reset
|
||||
end
|
||||
|
||||
private
|
||||
|
||||
def request(params)
|
||||
idempotent = params.delete(:idempotent)
|
||||
parser = params.delete(:parser)
|
||||
|
||||
body = Fog::AWS.signed_params(
|
||||
params,
|
||||
{
|
||||
:aws_access_key_id => @aws_access_key_id,
|
||||
:hmac => @hmac,
|
||||
:host => @host,
|
||||
:path => @path,
|
||||
:port => @port,
|
||||
:version => '2011-06-15'
|
||||
}
|
||||
)
|
||||
|
||||
begin
|
||||
response = @connection.request({
|
||||
:body => body,
|
||||
:expects => 200,
|
||||
:idempotent => idempotent,
|
||||
:headers => { 'Content-Type' => 'application/x-www-form-urlencoded' },
|
||||
:host => @host,
|
||||
:method => 'POST',
|
||||
:parser => parser
|
||||
})
|
||||
|
||||
response
|
||||
rescue Excon::Errors::HTTPStatusError => error
|
||||
if match = error.message.match(/<Code>(.*)<\/Code>(?:.*<Message>(.*)<\/Message>)?/m)
|
||||
case match[1]
|
||||
when 'EntityAlreadyExists', 'KeyPairMismatch', 'LimitExceeded', 'MalformedCertificate', 'ValidationError'
|
||||
raise Fog::AWS::STS.const_get(match[1]).slurp(error, match[2])
|
||||
else
|
||||
raise Fog::AWS::STS::Error.slurp(error, "#{match[1]} => #{match[2]}") if match[1]
|
||||
raise
|
||||
end
|
||||
else
|
||||
raise
|
||||
end
|
||||
end
|
||||
|
||||
|
||||
end
|
||||
|
||||
end
|
||||
end
|
||||
end
|
||||
end
|
||||
|
|
@ -56,6 +56,7 @@ require 'fog/bin/aws'
|
|||
require 'fog/bin/bluebox'
|
||||
require 'fog/bin/brightbox'
|
||||
require 'fog/bin/cloudstack'
|
||||
require 'fog/bin/clodo'
|
||||
require 'fog/bin/dnsimple'
|
||||
require 'fog/bin/dnsmadeeasy'
|
||||
require 'fog/bin/dynect'
|
||||
|
|
|
|||
|
|
@@ -35,6 +35,8 @@ class AWS < Fog::Bin
        Fog::AWS::RDS
      when :sns
        Fog::AWS::SNS
      when :sts
        Fog::AWS::STS
      else
        # @todo Replace most instances of ArgumentError with NotImplementedError
        # @todo For a list of widely supported Exceptions, see:
lib/fog/bin/clodo.rb (new file, 31 lines)
@@ -0,0 +1,31 @@
class Clodo < Fog::Bin
  class << self

    def class_for(key)
      case key
      when :compute
        Fog::Compute::Clodo
      else
        raise ArgumentError, "Unrecognized service: #{key}"
      end
    end

    def [](service)
      @@connections ||= Hash.new do |hash, key|
        hash[key] = case key
        when :compute
          Formatador.display_line("[yellow][WARN] Clodo[:compute] is deprecated, use Compute[:clodo] instead[/]")
          Fog::Compute.new(:provider => 'Clodo')
        else
          raise ArgumentError, "Unrecognized service: #{key.inspect}"
        end
      end
      @@connections[service]
    end

    def services
      Fog::Clodo.services
    end

  end
end
@@ -16,6 +16,10 @@ module Fog
      model :server
      collection :server_groups
      model :server_group
      collection :firewall_policies
      model :firewall_policy
      collection :firewall_rules
      model :firewall_rule
      collection :flavors
      model :flavor
      collection :images

@@ -35,6 +39,7 @@ module Fog
      request :add_nodes_load_balancer
      request :add_servers_server_group
      request :apply_to_firewall_policy
      request :remove_firewall_policy
      request :create_api_client
      request :create_cloud_ip
      request :create_firewall_policy

@@ -80,6 +85,7 @@ module Fog
      request :remove_nodes_load_balancer
      request :remove_servers_server_group
      request :reset_ftp_password_account
      request :reset_secret_api_client
      request :resize_server
      request :shutdown_server
      request :snapshot_server

@@ -88,6 +94,8 @@ module Fog
      request :unmap_cloud_ip
      request :update_account
      request :update_api_client
      request :update_cloud_ip
      request :update_firewall_rule
      request :update_image
      request :update_load_balancer
      request :update_server
@@ -23,9 +23,17 @@ module Fog
        attribute :server_id, :aliases => "server", :squash => "id"
        attribute :load_balancer, :alias => "load_balancer", :squash => "id"

        def map(interface_to_map)
        def map(destination)
          requires :identity
          connection.map_cloud_ip(identity, :interface => interface_to_map)
          case destination
          when Fog::Compute::Brightbox::Server
            final_destination = destination.interfaces.first["id"]
          when Fog::Compute::Brightbox::LoadBalancer
            final_destination = destination.id
          else
            final_destination = destination
          end
          connection.map_cloud_ip(identity, :destination => final_destination)
        end

        def mapped?
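For reference, a sketch of driving the reworked map above (identifiers are placeholders; compute is an existing Brightbox compute connection):

  cip = compute.cloud_ips.get("cip-12345")
  cip.map(server)          # Server record: maps to its first interface id
  cip.map(load_balancer)   # LoadBalancer record: maps to the balancer id
  cip.map("int-12345")     # or pass a destination identifier directly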
lib/fog/brightbox/models/compute/firewall_policies.rb (new file, 29 lines)
@@ -0,0 +1,29 @@
require 'fog/core/collection'
require 'fog/brightbox/models/compute/firewall_policy'

module Fog
  module Compute
    class Brightbox

      class FirewallPolicies < Fog::Collection

        model Fog::Compute::Brightbox::FirewallPolicy

        def all
          data = connection.list_firewall_policies
          load(data)
        end

        def get(identifier)
          return nil if identifier.nil? || identifier == ""
          data = connection.get_firewall_policy(identifier)
          new(data)
        rescue Excon::Errors::NotFound
          nil
        end

      end

    end
  end
end
lib/fog/brightbox/models/compute/firewall_policy.rb (new file, 65 lines)
@@ -0,0 +1,65 @@
require 'fog/core/model'

module Fog
  module Compute
    class Brightbox

      class FirewallPolicy < Fog::Model

        identity :id
        attribute :url
        attribute :resource_type

        attribute :name
        attribute :description

        attribute :default

        attribute :server_group_id, :aliases => "server_group", :squash => "id"
        attribute :created_at, :type => :time
        attribute :rules

        # Sticking with existing Fog behaviour, save does not update but creates a new resource
        def save
          raise Fog::Errors::Error.new('Resaving an existing object may create a duplicate') if identity
          options = {
            :server_group => server_group_id,
            :name => name,
            :description => description
          }.delete_if {|k,v| v.nil? || v == "" }
          data = connection.create_firewall_policy(options)
          merge_attributes(data)
          true
        end

        def apply_to(server_group_id)
          requires :identity
          options = {
            :server_group => server_group_id
          }
          data = connection.apply_to_firewall_policy(identity, options)
          merge_attributes(data)
          true
        end

        def remove(server_group_id)
          requires :identity
          options = {
            :server_group => server_group_id
          }
          data = connection.remove_firewall_policy(identity, options)
          merge_attributes(data)
          true
        end

        def destroy
          requires :identity
          data = connection.destroy_firewall_policy(identity)
          true
        end

      end

    end
  end
end
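For reference, a sketch of creating and applying a policy with the model above (identifiers are placeholders):

  policy = compute.firewall_policies.create(
    :name            => "web servers",
    :server_group_id => "grp-12345"
  )
  policy.apply_to("grp-67890")   # attach to a further server group
  policy.remove("grp-67890")     # and detach it again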
lib/fog/brightbox/models/compute/firewall_rule.rb (new file, 54 lines)
@@ -0,0 +1,54 @@
require 'fog/core/model'

module Fog
  module Compute
    class Brightbox

      class FirewallRule < Fog::Model

        identity :id
        attribute :url
        attribute :resource_type

        attribute :description

        attribute :source
        attribute :source_port
        attribute :destination
        attribute :destination_port
        attribute :protocol
        attribute :icmp_type_name
        attribute :created_at, :type => :time

        attribute :firewall_policy_id, :aliases => "firewall_policy", :squash => "id"

        # Sticking with existing Fog behaviour, save does not update but creates a new resource
        def save
          raise Fog::Errors::Error.new('Resaving an existing object may create a duplicate') if identity
          requires :firewall_policy_id
          options = {
            :firewall_policy => firewall_policy_id,
            :protocol => protocol,
            :description => description,
            :source => source,
            :source_port => source_port,
            :destination => destination,
            :destination_port => destination_port,
            :icmp_type_name => icmp_type_name
          }.delete_if {|k,v| v.nil? || v == "" }
          data = connection.create_firewall_rule(options)
          merge_attributes(data)
          true
        end

        def destroy
          requires :identity
          connection.destroy_firewall_rule(identity)
          true
        end

      end

    end
  end
end
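For reference, a sketch of attaching a rule to an existing policy (all values are placeholders):

  compute.firewall_rules.create(
    :firewall_policy_id => policy.id,
    :protocol           => "tcp",
    :destination_port   => "22"
  )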
lib/fog/brightbox/models/compute/firewall_rules.rb (new file, 24 lines)
@@ -0,0 +1,24 @@
require 'fog/core/collection'
require 'fog/brightbox/models/compute/firewall_rule'

module Fog
  module Compute
    class Brightbox

      class FirewallRules < Fog::Collection

        model Fog::Compute::Brightbox::FirewallRule

        def get(identifier)
          return nil if identifier.nil? || identifier == ""
          data = connection.get_firewall_rule(identifier)
          new(data)
        rescue Excon::Errors::NotFound
          nil
        end

      end

    end
  end
end
@@ -11,6 +11,7 @@ module Fog
        attribute :resource_type

        attribute :name
        attribute :username
        attribute :status
        attribute :description

@@ -43,6 +44,7 @@ module Fog
            :source => source,
            :arch => arch,
            :name => name,
            :username => username,
            :description => description
          }.delete_if {|k,v| v.nil? || v == "" }
          data = connection.create_image(options)
@@ -27,17 +27,43 @@ module Fog
        # Links - to be replaced
        attribute :account_id, :aliases => "account", :squash => "id"
        attribute :image_id, :aliases => "image", :squash => "id"
        attribute :flavor_id, :aliases => "server_type", :squash => "id"
        attribute :zone_id, :aliases => "zone", :squash => "id"

        attribute :snapshots
        attribute :cloud_ips
        attribute :interfaces
        attribute :server_groups
        attribute :zone
        attribute :server_type

        def initialize(attributes={})
          self.image_id ||= 'img-2ab98' # Ubuntu Lucid 10.04 server (i686)
          self.image_id ||= 'img-4gqhs' # Ubuntu Lucid 10.04 server (i686)
          super
        end

        def zone_id
          if t_zone_id = attributes[:zone_id]
            t_zone_id
          elsif zone
            zone[:id] || zone['id']
          end
        end

        def flavor_id
          if t_flavour_id = attributes[:flavor_id]
            t_flavour_id
          elsif server_type
            server_type[:id] || server_type['id']
          end
        end

        def zone_id=(incoming_zone_id)
          attributes[:zone_id] = incoming_zone_id
        end

        def flavor_id=(incoming_flavour_id)
          attributes[:flavor_id] = incoming_flavour_id
        end

        def snapshot
          requires :identity
          connection.snapshot_server(identity)

@@ -82,11 +108,19 @@ module Fog
        end

        def private_ip_address
          interfaces.first
          unless interfaces.empty?
            interfaces.first["ipv4_address"]
          else
            nil
          end
        end

        def public_ip_address
          cloud_ips.first
          unless cloud_ips.empty?
            cloud_ips.first["public_ip"]
          else
            nil
          end
        end

        def ready?

@@ -106,7 +140,8 @@ module Fog
            :image => image_id,
            :name => name,
            :zone => zone_id,
            :user_data => user_data
            :user_data => user_data,
            :server_groups => server_groups
          }.delete_if {|k,v| v.nil? || v == "" }
          unless flavor_id.nil? || flavor_id == ""
            options.merge!(:server_type => flavor_id)
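For reference, an illustrative sketch (not part of the diff) of how the reworked accessors behave:

  server = compute.servers.new(:flavor_id => "typ-12345")
  server.flavor_id            # => "typ-12345", the explicitly set value
  server = compute.servers.get("srv-12345")
  server.flavor_id            # falls back to the id inside the server_type payload
  server.private_ip_address   # first interface's "ipv4_address", or nil if none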
@@ -16,9 +16,11 @@ module Fog
        attribute :name
        attribute :description
        attribute :default
        attribute :created_at, :type => :time

        attribute :server_ids, :aliases => "servers"

        def save
          requires :name
          options = {
            :name => name,
            :description => description

@@ -28,30 +30,67 @@ module Fog
          true
        end

        # Add a server to the server group
        def servers
          srv_ids = server_ids.collect {|srv| srv["id"]}
          srv_ids.collect do |srv_id|
            connection.servers.get(srv_id)
          end
        end

        # Adds specified servers to this server group
        #
        # == Parameters:
        # identifiers::
        #   An array of identifiers for the servers to add to the group
        #
        # == Returns:
        #
        # An excon response object representing the result
        #
        # <Excon::Response: ...
        #
        def add_servers(server_identifiers)
        # @param [Array] identifiers array of server identifier strings to add
        # @return [Fog::Compute::ServerGroup]
        def add_servers identifiers
          requires :identity
          server_references = server_identifiers.map {|ident| {"server" => ident} }
          options = {
            :servers => server_references
            :servers => server_references(identifiers)
          }
          data = connection.add_servers_server_group(identity, options)
          merge_attributes(data)
          data = connection.add_servers_server_group identity, options
          merge_attributes data
        end

        # Removes specified servers from this server group
        #
        # @param [Array] identifiers array of server identifier strings to remove
        # @return [Fog::Compute::ServerGroup]
        def remove_servers identifiers
          requires :identity
          options = {
            :servers => server_references(identifiers)
          }
          data = connection.remove_servers_server_group identity, options
          merge_attributes data
        end

        # Moves specified servers from this server group to the specified destination server group
        #
        # @param [Array] identifiers array of server identifier strings to move
        # @param [String] destination_group_id destination server group identifier
        # @return [Fog::Compute::ServerGroup]
        def move_servers identifiers, destination_group_id
          requires :identity
          options = {
            :servers => server_references(identifiers),
            :destination => destination_group_id
          }
          data = connection.move_servers_server_group identity, options
          merge_attributes data
        end

        def destroy
          requires :identity
          connection.destroy_server_group(identity)
          true
        end

        protected

        def server_references identifiers
          identifiers.map {|id| {"server" => id} }
        end

      end

    end
  end
end
end
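For reference, a sketch of driving the new group operations above (identifiers are placeholders):

  group = compute.server_groups.get("grp-12345")
  group.add_servers(["srv-11111", "srv-22222"])
  group.move_servers(["srv-11111"], "grp-67890")
  group.remove_servers(["srv-22222"])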
lib/fog/brightbox/requests/compute/remove_firewall_policy.rb (new file, 14 lines)
@@ -0,0 +1,14 @@
module Fog
  module Compute
    class Brightbox
      class Real

        def remove_firewall_policy(identifier, options)
          return nil if identifier.nil? || identifier == ""
          request("post", "/1.0/firewall_policies/#{identifier}/remove", [202], options)
        end

      end
    end
  end
end
lib/fog/brightbox/requests/compute/update_firewall_rule.rb (new file, 13 lines)
@@ -0,0 +1,13 @@
module Fog
  module Compute
    class Brightbox
      class Real

        def update_firewall_rule(id, options)
          request("put", "/1.0/firewall_rules/#{id}", [202], options)
        end

      end
    end
  end
end
lib/fog/clodo.rb (new file, 34 lines)
@@ -0,0 +1,34 @@
require 'fog/core'

module Fog
  module Clodo

    extend Fog::Provider

    service(:compute, 'clodo/compute', 'Compute')

    def self.authenticate(options)
      clodo_auth_url = options[:clodo_auth_url] || "api.clodo.ru"
      url = clodo_auth_url.match(/^https?:/) ? \
        clodo_auth_url : 'https://' + clodo_auth_url
      uri = URI.parse(url)
      connection = Fog::Connection.new(url)
      @clodo_api_key = options[:clodo_api_key]
      @clodo_username = options[:clodo_username]
      response = connection.request({
        :expects => [200, 204],
        :headers => {
          'X-Auth-Key' => @clodo_api_key,
          'X-Auth-User' => @clodo_username
        },
        :host => uri.host,
        :method => 'GET',
        :path => (uri.path and not uri.path.empty?) ? uri.path : 'v1.0'
      })
      response.headers.reject do |key, value|
        !['X-Server-Management-Url', 'X-Storage-Url', 'X-CDN-Management-Url', 'X-Auth-Token'].include?(key)
      end

    end # authenticate
  end # module Clodo
end # module Fog
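For reference, authenticate returns the response headers filtered down to the auth-related keys, so the result looks roughly like this (values are placeholders):

  {
    'X-Auth-Token'            => 'abcdef0123456789',
    'X-Server-Management-Url' => 'https://api.clodo.ru/v1.0/12345'
  }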
lib/fog/clodo/compute.rb (new file, 152 lines)
@@ -0,0 +1,152 @@
module Fog
  module Compute
    class Clodo < Fog::Service

      requires :clodo_api_key, :clodo_username
      recognizes :clodo_auth_url, :persistent
      recognizes :clodo_auth_token, :clodo_management_url

      model_path 'fog/clodo/models/compute'
      model :image
      collection :images
      model :server
      collection :servers

      request_path 'fog/clodo/requests/compute'
      request :create_server
      request :delete_server
      request :get_image_details # Not supported by API
      request :list_images
      request :list_images_detail
      request :list_servers
      request :list_servers_detail
      request :get_server_details
      request :server_action
      request :start_server
      request :stop_server
      request :reboot_server
      request :rebuild_server
      request :add_ip_address
      request :delete_ip_address
      request :move_ip_address
      # request :list_addresses
      # request :list_private_addresses
      # request :list_public_addresses
      # request :confirm_resized_server
      # request :revert_resized_server
      # request :resize_server
      # request :update_server

      class Mock

        def self.data
          @data ||= Hash.new do |hash, key|
            hash[key] = {
              :last_modified => {
                :images => {},
                :servers => {}
              },
              :images => {},
              :servers => {}
            }
          end
        end

        def self.reset
          @data = nil
        end

        def initialize(options={})
          require 'multi_json'
          @clodo_username = options[:clodo_username]
        end

        def data
          self.class.data[@clodo_username]
        end

        def reset_data
          self.class.data.delete(@clodo_username)
        end

      end

      class Real

        def initialize(options={})
          require 'multi_json'
          @clodo_api_key = options[:clodo_api_key]
          @clodo_username = options[:clodo_username]
          @clodo_auth_url = options[:clodo_auth_url]
          @clodo_servicenet = options[:clodo_servicenet]
          @clodo_auth_token = options[:clodo_auth_token]
          @clodo_management_url = options[:clodo_management_url]
          @clodo_must_reauthenticate = false
          authenticate
          Excon.ssl_verify_peer = false if options[:clodo_servicenet] == true
          @connection = Fog::Connection.new("#{@scheme}://#{@host}:#{@port}", options[:persistent])
        end

        def reload
          @connection.reset
        end

        def request(params)
          begin
            response = @connection.request(params.merge({
              :headers => {
                'Content-Type' => 'application/json',
                'Accept' => 'application/json',
                'X-Auth-Token' => @auth_token
              }.merge!(params[:headers] || {}),
              :host => @host,
              :path => "#{@path}/#{params[:path]}"
            }))
          rescue Excon::Errors::Unauthorized => error
            if error.response.body != 'Bad username or password' # token expiration
              @clodo_must_reauthenticate = true
              authenticate
              retry
            else # bad credentials
              raise error
            end
          rescue Excon::Errors::HTTPStatusError => error
            raise case error
            when Excon::Errors::NotFound
              Fog::Compute::Clodo::NotFound.slurp(error)
            else
              error
            end
          end
          unless response.body.empty?
            response.body = MultiJson.decode(response.body)
          end
          response
        end

        private

        def authenticate
          if @clodo_must_reauthenticate || @clodo_auth_token.nil?
            options = {
              :clodo_api_key => @clodo_api_key,
              :clodo_username => @clodo_username,
              :clodo_auth_url => @clodo_auth_url
            }
            credentials = Fog::Clodo.authenticate(options)
            @auth_token = credentials['X-Auth-Token']
            uri = URI.parse(credentials['X-Server-Management-Url'])
          else
            @auth_token = @clodo_auth_token
            uri = URI.parse(@clodo_management_url)
          end
          @host = @clodo_servicenet == true ? "snet-#{uri.host}" : uri.host
          @path = uri.path
          @port = uri.port
          @scheme = uri.scheme
        end

      end
    end
  end
end
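For reference, a sketch of opening a Clodo connection through the generic interface (credentials are placeholders):

  compute = Fog::Compute.new(
    :provider       => 'Clodo',
    :clodo_username => 'user@example.com',
    :clodo_api_key  => 'YOUR_API_KEY'
  )
  compute.servers.all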
lib/fog/clodo/models/compute/image.rb (new file, 31 lines)
@@ -0,0 +1,31 @@
require 'fog/core/model'

module Fog
  module Compute
    class Clodo

      class Image < Fog::Model

        identity :id

        attribute :name
        attribute :vps_type
        attribute :status
        attribute :os_type
        attribute :os_bits
        attribute :os_hvm

        def initialize(new_attributes)
          super(new_attributes)
          merge_attributes(new_attributes['_attr']) if new_attributes['_attr']
        end

        def ready?
          status == 'ACTIVE'
        end

      end

    end
  end
end
lib/fog/clodo/models/compute/images.rb (new file, 28 lines)
@@ -0,0 +1,28 @@
require 'fog/core/collection'
require 'fog/clodo/models/compute/image'

module Fog
  module Compute
    class Clodo

      class Images < Fog::Collection

        model Fog::Compute::Clodo::Image

        def all
          data = connection.list_images_detail.body['images']
          load(data)
        end

        def get(image_id)
          image = connection.get_image_details(image_id).body['image']
          new(image) if image
        rescue Fog::Compute::Clodo::NotFound
          nil
        end

      end

    end
  end
end
lib/fog/clodo/models/compute/server.rb (new file, 163 lines)
@@ -0,0 +1,163 @@
require 'fog/core/model'

module Fog
  module Compute
    class Clodo

      class Server < Fog::Model

        identity :id

        attribute :addresses
        attribute :name
        attribute :image_id, :aliases => 'imageId'
        attribute :type
        attribute :state, :aliases => 'status'
        attribute :type
        attribute :vps_memory
        attribute :vps_memory_max
        attribute :vps_os_title
        attribute :vps_os_bits
        attribute :vps_os_type
        attribute :vps_vnc
        attribute :vps_cpu_load
        attribute :vps_cpu_max
        attribute :vps_cpu_1h_min
        attribute :vps_cpu_1h_max
        attribute :vps_mem_load
        attribute :vps_mem_max
        attribute :vps_mem_1h_min
        attribute :vps_mem_1h_max
        attribute :vps_hdd_load
        attribute :vps_hdd_max
        attribute :vps_traf_rx
        attribute :vps_traf_tx
        attribute :vps_createdate
        attribute :vps_billingdate
        attribute :vps_update
        attribute :vps_update_days
        attribute :vps_root_pass, :aliases => ['adminPass','password']
        attribute :vps_user_pass
        attribute :vps_vnc_pass

        attr_writer :private_key, :private_key_path, :public_key, :public_key_path, :username

        def initialize(attributes={})
          self.image_id ||= attributes[:vps_os] ? attributes[:vps_os] : 666
          super attributes
        end

        def destroy
          requires :id
          connection.delete_server(id)
          true
        end

        def image
          requires :image_id
          image_id # API does not support image details request. :-(
        end

        def private_ip_address
          nil
        end

        def private_key_path
          @private_key_path ||= Fog.credentials[:private_key_path]
          @private_key_path &&= File.expand_path(@private_key_path)
        end

        def private_key
          @private_key ||= private_key_path && File.read(private_key_path)
        end

        def public_ip_address
          pubaddrs = addresses && addresses['public'] ? addresses['public'].select {|ip| ip['primary_ip']} : nil
          pubaddrs && !pubaddrs.empty? ? pubaddrs.first['ip'] : nil
        end

        def add_ip_address
          connection.add_ip_address(id)
        end

        def move_ip_address(ip_address)
          connection.move_ip_address(id, ip_address)
        end

        def delete_ip_address(ip_address)
          connection.delete_ip_address(id, ip_address)
        end

        def public_key_path
          @public_key_path ||= Fog.credentials[:public_key_path]
          @public_key_path &&= File.expand_path(@public_key_path)
        end

        def public_key
          @public_key ||= public_key_path && File.read(public_key_path)
        end

        def ready?
          self.state == 'is_running'
        end

        def reboot(type = 'SOFT')
          requires :id
          connection.reboot_server(id, type)
          true
        end

        def save
          raise Fog::Errors::Error.new('Resaving an existing object may create a duplicate') if identity
          requires :image_id
          data = connection.create_server(image_id, attributes)
          merge_attributes(data.body['server'])
          true
        end

        def setup(credentials = {})
          requires :public_ip_address, :identity, :public_key, :username
          Fog::SSH.new(public_ip_address, username, credentials).run([
            %{mkdir .ssh},
            %{echo "#{public_key}" >> ~/.ssh/authorized_keys},
            %{passwd -l #{username}},
            %{echo "#{MultiJson.encode(attributes)}" >> ~/attributes.json},
          ])
        rescue Errno::ECONNREFUSED
          sleep(1)
          retry
        end

        def ssh(commands)
          requires :public_ip_address, :identity, :username

          options = {}
          options[:key_data] = [private_key] if private_key
          options[:password] = password if password
          Fog::SSH.new(public_ip_address, username, options).run(commands)
        end

        def scp(local_path, remote_path, upload_options = {})
          requires :public_ip_address, :username

          scp_options = {}
          scp_options[:key_data] = [private_key] if private_key
          Fog::SCP.new(public_ip_address, username, scp_options).upload(local_path, remote_path, upload_options)
        end

        def username
          @username ||= 'root'
        end

        def password
          vps_root_pass
        end

        private

      end

    end
  end

end
lib/fog/clodo/models/compute/servers.rb (new file, 36 lines)
@@ -0,0 +1,36 @@
require 'fog/core/collection'
require 'fog/clodo/models/compute/server'

module Fog
  module Compute
    class Clodo

      class Servers < Fog::Collection

        model Fog::Compute::Clodo::Server

        def all
          data = connection.list_servers_detail.body['servers']
          load(data)
        end

        def bootstrap(new_attributes = {})
          server = create(new_attributes)
          server.wait_for { ready? }
          server.setup(:password => server.password)
          server
        end

        def get(server_id)
          if server = connection.get_server_details(server_id).body['server']
            new(server)
          end
        rescue Fog::Compute::Clodo::NotFound
          nil
        end

      end

    end
  end
end
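For reference, a sketch of the bootstrap flow above (attribute values are placeholders):

  server = compute.servers.bootstrap(:name => 'wiki', :image_id => 541)
  server.ssh(['uptime'])   # authenticates with the root password returned at create time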
Some files were not shown because too many files have changed in this diff.