Fabric is pretty awesome, and last time I discussed how I use Fabric to make one-command deployments easy. One thing I didn’t cover was how I use boto, a great Python package for working with AWS, to make sure I don’t bring down my site while I’m deploying.
The basic problem is this: if you have a few web servers in an ELB or other load balancer and you start deploying to each one in turn, when you restart the service responsible for your app, there will be a few seconds where that server is unavailable. And if you have your load balancer sending traffic to it, you’re going to end up with people getting 502s and the like. Not good.
Conceptually, what we want to do is remove each server from the load balancer, push code to it, restart the service, when the service is back up, add that server back to the load balancer. In fact, you can imagine a number of tasks that might require us to remove a server from a load balancer, complete that task, and then add the server back. So let’s create Fabric tasks that’ll add and remove servers from ELBs as well as a decorator we can use to wrap other tasks so they are automatically “managed.”
Here’s the code I added to the servers.py file we created in the first post:
```python
from boto.ec2.elb import ELBConnection
from fabric.api import env


def elb_operation(operation, instance_id, lbs):
    conn = ELBConnection(env.aws_key, env.aws_secret)
    for lb in conn.get_all_load_balancers():
        if lb.name in lbs:
            getattr(lb, operation)(instance_id)


def remove_from_elbs():
    # `db` is the HostManager instance we created in the first post.
    host = db.get_hosts_by('host', env.host)
    instance_id, elbs = host.instance_id, host.elbs
    elb_operation('deregister_instances', instance_id, elbs)


def add_to_elbs():
    host = db.get_hosts_by('host', env.host)
    instance_id, elbs = host.instance_id, host.elbs
    elb_operation('register_instances', instance_id, elbs)
```
The code is pretty simple. When we deploy to each server, the env.host variable is set to that server’s host name, so we can use the HostManager object we set up before to look up which ELBs that host belongs to. Then we iterate over those ELBs and pull the server out of each one. I also keep my AWS keys in a Fabric settings file, so those are available as well.
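For context, here's a minimal stand-in for the kind of lookup the HostManager does. The method and field names (get_hosts_by, instance_id, elbs) come from the snippet above, but the list-backed storage and example hosts are hypothetical simplifications; the real object is backed by your own host inventory:

```python
from collections import namedtuple

# A host record: its hostname, EC2 instance id, and the ELBs it belongs to.
Host = namedtuple('Host', ['host', 'instance_id', 'elbs'])


class HostManager(object):
    """Minimal sketch of the HostManager from the first post."""

    def __init__(self, hosts):
        self.hosts = hosts

    def get_hosts_by(self, field, value):
        # Return the first host record whose attribute `field` matches `value`.
        for host in self.hosts:
            if getattr(host, field) == value:
                return host
        return None


# Hypothetical inventory; in practice this comes from your own config.
db = HostManager([
    Host(host='web1.example.com', instance_id='i-12345678',
         elbs=['production-lb']),
])
```

With that in place, `db.get_hosts_by('host', 'web1.example.com').elbs` returns `['production-lb']`.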
Here’s a decorator that we can wrap tasks with to automatically manage the ELBs:
```python
from functools import wraps


def elb_managed(func):
    @wraps(func)
    def decorated(*args, **kwargs):
        remove_from_elbs()
        func(*args, **kwargs)
        add_to_elbs()
    return decorated
```
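The conceptual plan above said to re-add the server once the service is back up, but the decorator re-registers immediately after the task returns. If you want to poll until the app actually answers before re-registering, a small helper could be called between func(*args, **kwargs) and add_to_elbs(). This is just a sketch; the check callable and the timings are assumptions, not part of the original code:

```python
import time


def wait_until_up(check, timeout=60, interval=2):
    """Poll `check` (a zero-argument callable returning True when the
    service is healthy) until it succeeds or `timeout` seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise RuntimeError('service did not come back up in %s seconds' % timeout)
```

For a web app, the check might be an HTTP GET against the server's health endpoint that returns True on a 200 response.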
and here’s an example of using the decorator:
```python
from deploy.servers import elb_managed


@elb_managed
def deploy():
    git_pull()
    buildout(False)
    restart()
```
and now we don’t have to worry about our ELBs sending traffic to servers that shouldn’t be available. This also works well when a deployment fails: the server is taken out of the ELB, the deployment errors out, and the server isn’t added back, so I don’t have to worry about anyone hitting it. Instead, I can figure out what the problem is and just redeploy, which will automatically add that server back to the ELB once the deployment succeeds.
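Since a failed deploy intentionally leaves a server out of its ELBs, it can be handy to see at a glance which hosts are missing. Here's a small pure-Python helper that compares expected membership against what's actually registered; the function name and data shapes are my own, and with boto you'd fill the registered mapping from each load balancer's instances list:

```python
def missing_from_elbs(hosts, registered):
    """Find hosts that should be in an ELB but aren't.

    hosts: iterable of (instance_id, [elb names]) pairs -- the expected state.
    registered: dict mapping ELB name -> set of currently registered ids.
    Returns a list of (instance_id, elb) pairs needing re-registration.
    """
    missing = []
    for instance_id, elbs in hosts:
        for elb in elbs:
            if instance_id not in registered.get(elb, set()):
                missing.append((instance_id, elb))
    return missing
```

Running this after a failed deploy tells you exactly which register_instances calls a redeploy will end up making.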