This blog post discusses fog, its history here at Brightbox, some of its problems and how they could be addressed.
Since we use it so much in anger we’ve hit pretty much every core issue with fog, and there’s a good chance we’ve filed an issue for it. We’ve also fixed a number of them ourselves.
Fog was intended to be a cloud services toolkit that abstracts away API differences between service providers. This allows you to write code that manages servers, load balancers or cloud storage that should work with little more than a configuration change.
In reality the project has grown into a big collection of libraries under one banner. There’s enough difference and ambiguity in them that it feels like you can’t just change the configuration settings and off you go.
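The original goal can be sketched with a tiny, hypothetical adapter registry. This is plain Ruby, not fog’s actual internals — the class and method names are made up — but it shows the idea: application code stays the same and only a configuration key changes.

```ruby
# Illustrative stand-ins for two provider backends (NOT fog's real API).
class BrightboxCompute
  def create_server(name)
    "brightbox server: #{name}"
  end
end

class AwsCompute
  def create_server(name)
    "aws server: #{name}"
  end
end

# The provider is selected purely by configuration.
PROVIDERS = {
  "Brightbox" => BrightboxCompute,
  "AWS"       => AwsCompute,
}

def compute_for(config)
  PROVIDERS.fetch(config[:provider]).new
end

# Application code only changes its configuration, not its logic:
compute = compute_for(provider: "Brightbox")
compute.create_server("web1")  # => "brightbox server: web1"
```

In practice, as the rest of this post explains, the abstraction leaks: each provider exposes enough of its own vocabulary that switching is rarely just a one-line config change.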
When we were developing Brightbox Cloud we knew we wanted an SDK for customers to use that we could also use ourselves.
We had a number of options available at the time:
Creating a Brightbox SDK would have been fairly simple given our API but it would have meant less exposure and discoverability for us. We also would have to do extra work on plugins since it would have been unlikely anyone else would be integrating our SDK with other tools.
Deltacloud abstracts between providers at the server level and then you communicate with a deltacloud client. That limits clients to features supported by deltacloud.
Most of the existing tools did not allow you to work around their abstractions so we would have suffered from supporting the lowest common denominator.
Brightbox Cloud has many more features than just provisioning servers, such as console access, Cloud IPs, IPv6 support, load balancing and firewalls. We knew that a CIMI interface was not going to cut it.
Fog allows using requests directly which should map to the API so that if something is not available in the higher level abstraction you can still use a lower level call.
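That two-layer split can be sketched like this — a simplified, self-contained illustration of the pattern, not fog’s real code. Low-level “requests” map one-to-one onto the provider API and return raw data; high-level “models” wrap them in a friendlier interface.

```ruby
require "ostruct"

# Illustrative sketch of fog's layering (class and data are made up).
class Client
  # Request layer: mirrors the provider API directly, returns raw data.
  def list_servers
    [{ "id" => "srv-1", "status" => "active" }]
  end

  # Model layer: friendly objects built on top of the requests.
  def servers
    list_servers.map { |attrs| OpenStruct.new(attrs) }
  end
end

client = Client.new
client.servers.first.status      # high-level model access
client.list_servers.first["id"]  # raw request access, for anything the models miss
```

The point is that when the model layer lacks a feature, you can drop down to the request layer rather than being stuck behind the abstraction.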
Adding support to fog gave us a head start since we could work within its framework. It also allowed us to contribute back to the open source community.
The problem with something like fog is that it can grow to be a victim of its own success.
With only adequate documentation, and using the existing code as examples, providers added support for new features and used their own branding before standard interfaces were discussed.
It is flexible enough that local virtualization platforms have been added. These often rely on their own native gems to interact with non-HTTP APIs. The local filesystem is even a supported provider for storage.
It has become a collection of SDKs amongst a framework. Some parts are still interchangeable and others aren’t.
Well the good news is that fog is going to get better! It’s been on a road to recovery for a while now and it’s about to step up a notch.
Eight of the most prolific fog committers are meeting up in Atlanta next week to discuss the big issues, make some decisions and write it all up.
That’s representatives from Brightbox, Google, HP and Rackspace to look at the big picture and prioritise what changes are needed.
We’ve got a lot of ground to cover in two days.
Fog is primarily a library for use by other Ruby applications, however some of its features were designed for quick prototyping from a console. Grabbing the ~/.fog file into a global Hash meant you could start the console and be away, but it forced applications to move their configurations away from their own conventions and into fog’s global state.
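For context, ~/.fog is a YAML file of credential profiles keyed by name. The sketch below shows roughly what a Brightbox entry looks like and how it ends up as a Hash — the key names and values here are illustrative placeholders.

```ruby
require "yaml"

# Roughly what a ~/.fog credentials file contains: profiles keyed by name,
# each holding provider-specific keys (values here are placeholders).
fog_file = <<~YAML
  default:
    brightbox_client_id: cli-xxxxx
    brightbox_secret: mysecret
YAML

credentials = YAML.safe_load(fog_file)["default"]
credentials["brightbox_client_id"]  # => "cli-xxxxx"
```

Convenient from a console — but it is exactly this global, file-based lookup that pushes applications towards fog’s conventions instead of their own.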
Whilst fog offers a framework, the majority of the current codebase is not reusable so there is a lot of repetition between providers or within providers. We recently broke out fog-core so that the shared framework for providers is its own thing. Now the parts of the code that are shared are clearer.
Originally fog supported AWS and added a few more providers. Then others added new services. There wasn’t too much coordination about what to call them.
That’s going to change following the summit. The plan is to standardise on resource names so you won’t have to worry about “buckets”, “containers” or “directories”. Same with “elastic”, “floating” or “cloud” IPs.
Then the new names will be set up in the core framework and tested. When you get back a Fog::Server, it won’t matter if it is called an “instance” by the underlying provider.
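The kind of mapping being discussed could look something like this — a hypothetical sketch, not the agreed design, with provider names and vocabulary filled in for illustration.

```ruby
# Hypothetical sketch of name standardisation: each provider declares what
# it calls the shared concept, and the framework exposes one canonical name.
NAME_MAP = {
  "aws"       => { directory: "bucket",    ip: "elastic IP"  },
  "openstack" => { directory: "container", ip: "floating IP" },
  "brightbox" => { directory: "container", ip: "cloud IP"    },
}

# Application code asks for the canonical :directory or :ip resource and
# never needs to know the provider's own branding.
def native_name(provider, resource)
  NAME_MAP.fetch(provider).fetch(resource)
end

native_name("aws", :directory)  # => "bucket"
```

The real work is the reverse of this table: picking one canonical name per concept so application code never sees the provider-specific term at all.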
In its current form, fog is one monolithic gem. Within that gem are several thousand Ruby files that allow the library to talk to 40 different cloud service providers (and a few other things, like the local file system).
That’s 5000 files being downloaded when installing and a large number being required when starting up applications.
It’s not ideal. Disk is cheap but a CI run pulling down a 1.5MB gem takes time and bandwidth. You don’t want all of that code in memory when you are using one provider or just one service.
So what is already happening is that fog is going modular. This started on a provider-by-provider basis, and at the summit we are going to discuss service-by-service options.
Brightbox customers are already benefiting because we are leading the clean up operation. The Brightbox provider was extracted from the main fog gem and repository and is now a standalone fog-brightbox gem.
It is still an officially supported part of fog but now we install just enough of the library for using Brightbox when you install our CLI tools.
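If you want just the Brightbox pieces in your own project, the standalone gem can be pulled in on its own (assuming a standard Bundler setup):

```ruby
# Gemfile: pull in only the Brightbox provider rather than the full fog gem
gem "fog-brightbox"
```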
In restructuring this we also managed to eliminate the dependency on libxml2, which was a big waste of everyone’s resources.
By originally targeting AWS, fog had XML support baked right in. For speed, Nokogiri was used, which demands that libxml2 be installed.
Since we offer a JSON-based API to our services, it was a burden making users install development tools to build gems, or install XML packages, just to install our CLI — all for something they would never use.
We have broken out the JSON parts of fog and can depend on them separately. The XML dependencies are next up. What that means for us is that fog-brightbox depends on the core framework and just the JSON parts. So Nokogiri is only required when installing the full fog gem.
Fog is “the Ruby cloud services library”, so you would expect a good deal of HTTP knowledge to be going on in there. To enable reuse, the Excon HTTP client was extracted to be its own project.
The legacy of that, however, is that when you are using fog and you get an error, there is an excellent chance you’ll see an Excon error. If you’ve been trying to rescue Fog::Errors then you get caught out.
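The trap can be demonstrated with simplified stand-ins for the two hierarchies. These classes only mimic the shape of Excon’s and fog’s error trees — the point is that they do not share an ancestor below StandardError, so rescuing one never catches the other.

```ruby
# Simplified stand-ins for the two separate exception hierarchies:
# Excon raises its own errors, which do NOT inherit from Fog::Errors::Error.
module Excon
  module Errors
    class NotFound < StandardError; end
  end
end

module Fog
  module Errors
    class Error < StandardError; end
  end
end

# A failing API call surfaces the HTTP library's error, not fog's.
def fetch_server
  raise Excon::Errors::NotFound, "404"
end

result =
  begin
    fetch_server
  rescue Fog::Errors::Error
    :caught_fog_error    # never reached: the hierarchies are unrelated
  rescue Excon::Errors::NotFound
    :caught_excon_error  # this is what actually fires
  end

result  # => :caught_excon_error
```

So code written against fog’s documented errors quietly misses the errors that actually get raised.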
Since Excon is part of the core framework, you have a curious problem in making local virtualization hypervisors use the same errors as everything else. Mocking an HTTP 404 response just to return a consistent error makes no sense.
So during the work, a proper isolation layer will be put between the requests to, and responses from, API calls.
When fog is whipped into shape, it is going to be easier to use, more consistent and an excellent base to integrate with cloud services from your own applications and scripts.
Then we can get existing open source projects that use fog into shape so that you can benefit when Brightbox is transparently supported.
You can still benefit from using fog to insulate your server deployments from your current cloud provider, making it easier to migrate if pricing or customer data becomes an issue for you.