Simple Tips to Build Scalable Websites

Published on 01 July 2009

A few days ago I was invited to a launch party for a web product in Paris. While the product was nice and polished, it seemed like the developers didn’t understand anything about scalability. They didn’t even understand my question when I asked whether the product could scale.

It’s probably not a big deal for them: they were presenting a CMS, so most of the time it will be installed for a limited user base. I guess most people will be happy to use it on a single server, so it’s probably OK for them not to be able to scale. However, I noticed that while scalability is now a fairly well-solved problem, there are not that many articles explaining how to prepare for scalability on the web. So here I go. I will not try to replace a good book, just to give the very basics.

What is scalability?

It’s important to get that out of the way. Scalability is not performance: it’s not about making good use of CPU and bandwidth, and it’s not about having the page load quickly in the user’s browser. It’s about being able to balance the load between several servers, so that when the load increases (more users creating accounts, more visitors, more page views) you can add servers to handle it. You can’t just throw in a server, though: you need to design your software to work on a cluster of servers.

Another point is that you will rarely create a cluster of machines from scratch: when you launch a new website you will have few users and therefore few machines (one or two), and as your load increases you will add servers. You will have to scale the different parts of your system one after the other.

#1: the web front-end

Most of the time you start with a front-end (PHP, Python, Ruby, Java…) and a data layer (MySQL, PostgreSQL, CouchDB…). As your load increases, the front-end will be the first to break. Of course server-side caching will help, but at some point you will need several front-end servers.
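
To make the caching point concrete, here is a minimal sketch of server-side page caching in Python. It assumes a memcached daemon on 127.0.0.1:11211 and the python-memcached client, neither of which this article prescribes; any shared cache works the same way:

    import memcache

    # Assumed setup: a memcached daemon running locally.
    mc = memcache.Client(["127.0.0.1:11211"])

    def homepage():
        html = mc.get("homepage")
        if html is not None:
            return html                    # serve the cached copy
        html = render_homepage()           # hit the database, build the page...
        mc.set("homepage", html, time=60)  # ...and keep the result for 60 seconds
        return html

    def render_homepage():
        # placeholder for the real template / database work
        return "<html>...</html>"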

The key is to ensure you don’t store any data on the front-end. The problem sometimes arises with sessions: a lot of PHP libraries store session information locally on the server, and that prevents you from balancing the load. The idea is that during a session a user may hit one server for a given page, then another for the next page. If the session is only accessible to the first server, you’re screwed. You want it to live somewhere else: in the data layer, or on a dedicated session server. If you write a Facebook app you don’t need to care, because Facebook takes care of the session for you.
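
Here is a minimal sketch of what “somewhere else” can look like: a session store shared by all front-ends instead of local files. The memcached hosts and the python-memcached client are assumptions for the example; a database table works just as well:

    import json
    import uuid
    import memcache

    # Assumed shared session servers reachable from every front-end.
    sessions = memcache.Client(["10.0.0.5:11211", "10.0.0.6:11211"])

    SESSION_TTL = 30 * 60  # keep sessions for 30 minutes

    def create_session(user_id):
        session_id = uuid.uuid4().hex
        sessions.set("session:" + session_id,
                     json.dumps({"user_id": user_id}), time=SESSION_TTL)
        return session_id  # sent back to the browser as a cookie

    def load_session(session_id):
        raw = sessions.get("session:" + session_id)
        return json.loads(raw) if raw else None

Whichever front-end serves the next page can call load_session() and get the same data back.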

Now we can have as many front-ends as we want, but we still have a single database server.

#2: the read operations on the database

Most applications have many more reads than writes. For example, in blogging software each visitor triggers a read on the database (OK, not each visitor if there is a good cache), but writes only occur when the author publishes a new post or someone leaves a comment.

That’s good, because it’s much easier to scale reads than writes. Just make sure your code has separate connection settings for reads and writes. They can point to the same database at launch time, but when the time comes you can split them: writes will go to your “main” database, and reads will go to a copy. There are other approaches, but MySQL, for example, offers replication features; once set up, the slaves stay in sync with the master, and you can have as many slaves as you need.
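
Here is a minimal sketch of the “separate settings” idea with MySQL. The host names, credentials and schema are made up for the example; at launch READ_DB and WRITE_DB would simply point at the same box:

    import MySQLdb

    # Made-up settings: later the read host becomes a replication slave.
    WRITE_DB = {"host": "db-master", "user": "app", "passwd": "secret", "db": "blog"}
    READ_DB  = {"host": "db-slave1", "user": "app", "passwd": "secret", "db": "blog"}

    def get_connection(for_write=False):
        return MySQLdb.connect(**(WRITE_DB if for_write else READ_DB))

    def fetch_post(post_id):
        conn = get_connection(for_write=False)           # reads go to the slave
        cur = conn.cursor()
        cur.execute("SELECT title, body FROM posts WHERE id = %s", (post_id,))
        return cur.fetchone()

    def save_comment(post_id, author, body):
        conn = get_connection(for_write=True)            # writes go to the master
        cur = conn.cursor()
        cur.execute("INSERT INTO comments (post_id, author, body) VALUES (%s, %s, %s)",
                    (post_id, author, body))
        conn.commit()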

OK - several front-ends, several read-only databases, but still one master database for writes. If your application has few writes it may be fine with a single beefy database server (some major websites just have one master database), but if you have a lot of writes (highly social applications like Facebook or Twitter) you may want to continue the scaling process.

#3: the write database

Now we want several databases we can write to. Obviously, we have to be careful not to introduce inconsistencies in the process: having an old version of a blog post on one server and the new version on another is not great. What if some users see an old version of your post and others see the most recent one?

There are various strategies to divide data in a safe and consistent way, including:

  • Depending on the user id (or blog id, or whatever makes sense in your application), put the data on one server or another. For example, all users with an even id go to server1 and all users with an odd id go to server2. Hint: make sure your algorithm lets you add more servers later, which is not the case with my example where you will be stuck at 2 servers :) (see the sketch after this list)
  • Put some tables on one server and the others on another. It doesn’t help when a single table grows too much, but it can be combined with the previous point.
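
One common way around the “stuck at 2 servers” problem is to hash the id into a large, fixed number of buckets and keep a bucket-to-server map you can change later. This is only a sketch; the bucket count and server names are made up:

    NUM_BUCKETS = 1024  # chosen once, large enough to spread across future servers

    # Assumed bucket-to-server assignment: each entry is (first bucket, server).
    # Adding a server means reassigning some buckets and migrating their rows.
    SHARDS = [
        (0, "db-shard1"),    # buckets 0..511
        (512, "db-shard2"),  # buckets 512..1023
    ]

    def shard_for_user(user_id):
        bucket = user_id % NUM_BUCKETS
        server = None
        for first_bucket, host in SHARDS:  # SHARDS is sorted by first bucket
            if bucket >= first_bucket:
                server = host
        return server

For example, shard_for_user(3) lands in bucket 3 and therefore on db-shard1; when a third server arrives, you only have to move the rows of the buckets you reassign to it.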

Conclusion

Here you go: the basics for building a scalable website. That’s not all you have to do; if your website keeps growing you will face more problems, such as having to scale your network. I’m not talking about outgoing bandwidth but about communication between your servers (front-end and data layers). But if your code is efficient, these simple recommendations will get you to a setup that can handle a fairly big load. I really recommend Building Scalable Websites from O’Reilly if you want to know more.

FAQ

Q: Language X doesn’t scale, but language Y does!

A: Bullshit. It’s not the language that scales, it’s your code. Some languages may not perform as well as others, so you will have to add boxes more often, but the way you scale is still the same.

Q: What about cloud computing? Virtualization? All these fancy buzzwords?

A: Virtualization means you run on virtual machines rather than on physical ones. The benefit is that you can easily add or remove machines. For example, with Amazon EC2 you can add as many machines as you want in a few minutes, and remove them just as quickly. With a classical hosting company you need to make a phone call, ask for the machines, and get them in maybe a week. They’ll charge you for the set-up too, and if you no longer need a machine you may still have to pay for the full term. So cloud computing offers are generally more flexible.

Q: Does Google App Engine make it easier to scale?

A: In short, yes. By not letting you access the machines, Google App Engine constrains you into writing scalable code. You also don’t have to request new machines when you need them or release them when you don’t; you just pay for what you use, depending on the load of your application.

I am a big fan of Google App Engine, but be careful: since your application has to be written in a particular way, it’s not easy to move your project off the platform. You may feel locked in once your project has started.

TAGS: hacking  scalability  web