Heavy Traffic – Lessons Learned

In the past 15 or 16 years, I’ve worked on a number of websites with fairly significant traffic (mostly measured in unique daily visitors – there are many ways to measure traffic). In one specific case, the traffic on a well-known author’s website spiked significantly (several thousand unique visitors per day) after his appearance on a television talk show. The website, although database driven, consisted primarily of articles, along with a store – and even on shared hosting, this wasn’t a problem.

Recently, my company built an online “live auction” website for a customer, a project which posed a number of interesting challenges and learning experiences (the hard way, of course) regarding how to build a site that handles heavy traffic. In this case, the nature of the website requires that all users see information that is current and accurate – which means AJAX calls that run every second for every user. This is the first project I have worked on that required serious optimization work; typically, even the heaviest custom development my team does is focused primarily on business use cases rather than on speed or algorithm design. Not so here.

The “coming soon” page, long before the site was launched, already received several hundred unique visitors per day (based on Google Analytics). The site launched with more than 500 registered users (pre-registration via the coming soon page), and traffic spiked heavily following launch. The initial traffic spike actually forced the site to close for several days, in order for our team to rework code. The re-launch was preceded by several Beta tests that involved registered users. Bear in mind that a registered user on most sites isn’t individually responsible for much server load. On this particular site, each user is receiving at least one update per second, each of which may involve multiple database calls.

The following is a description of some of the issues we encountered, and how they were addressed or mitigated. In some cases, work is ongoing, in order to adapt to continued growth. In many cases, the challenges that we encountered forced me to revise some assumptions I had held about how to approach traffic. Hopefully the following lessons will save a few people the weeks of sleep deprivation that I went through in order to learn them.

Project Description:

  • Penny Auction website
  • Technology: PHP (Zend Framework), Javascript
  • Server: Various VPS packages (so far)
  • Description of traffic: All users receive one data update per second; there are additional data updates every 3 seconds, and once per minute.

1. Don’t Rely Too Much On Your Server

Many web developers build code that simply assumes the server will work properly. The problem is that under heavy load, it isn’t at all uncommon for servers to behave in unexpected ways. Examples include file handles being dropped, database calls failing – sometimes without intelligible error codes – and even the system time being unreliable. The following are a couple of specific examples we encountered:

a) PHP time() – When developing in PHP, it is very common to rely on function calls such as time() (which returns the system time as a UNIX timestamp) for algorithms to work properly. Our setup involved a VPS with multiple CPUs dedicated to our use, and the ability to “burst” to more CPUs as needed. As it turned out, whenever our server went into burst mode, the additional CPUs reported different system times than “our” CPUs did. This is probably an issue with the underlying VPS software, but we didn’t have the luxury of investigating fully. The result was that rows were frequently (about a quarter of the time) saved to the database in the wrong order, which is a serious problem for an auction website! When possible, generate the timestamp within the SQL itself (e.g. MySQL’s NOW() or CURRENT_TIMESTAMP) instead. Fixing the system time on the other VPS partitions wasn’t feasible, since they “belonged” to a different customer.
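To make the difference concrete, here is a minimal sketch; the table, columns, connection details, and example values are invented for the illustration:

    <?php
    // Placeholders: an existing MySQL connection and example values.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');
    $auctionId = 42;
    $userId = 7;

    // Fragile on our VPS: time() reflects whichever CPU ran the PHP process,
    // so rows could be stamped (and therefore ordered) incorrectly.
    $stmt = $pdo->prepare('INSERT INTO bids (auction_id, user_id, placed_at)
                           VALUES (?, ?, FROM_UNIXTIME(?))');
    $stmt->execute(array($auctionId, $userId, time()));

    // Safer: let MySQL assign the time, so every insert uses the same clock.
    $stmt = $pdo->prepare('INSERT INTO bids (auction_id, user_id, placed_at)
                           VALUES (?, ?, NOW())');
    $stmt->execute(array($auctionId, $userId));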

b) Not every database call will work. Under heavy load, it isn’t at all unusual for a SQL insert or update statement to be dropped. Unless your code checks for errors and handles retries properly, your site will not work.
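A rough sketch of the kind of retry wrapper this implies – the function name, retry count, and logging here are illustrative rather than our actual production code:

    <?php
    // Illustrative retry wrapper: attempt a statement a few times before
    // giving up, instead of assuming the first attempt succeeded.
    function executeWithRetry(PDO $pdo, $sql, array $params, $maxAttempts = 3)
    {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            try {
                $stmt = $pdo->prepare($sql);
                if ($stmt->execute($params)) {
                    return $stmt;
                }
            } catch (PDOException $e) {
                error_log('DB attempt ' . $attempt . ' failed: ' . $e->getMessage());
                if ($attempt === $maxAttempts) {
                    throw $e; // out of retries; let the caller deal with it
                }
            }
            usleep(50000); // wait 50 ms before trying again
        }
        return false; // every attempt returned false without throwing
    }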

2. Pick Your Hosting Company Wisely

We launched the project on one of our hosting company’s smaller VPS packages. We quickly went to one of the middle-range packages, discovered it was also insufficient, and then switched to the largest package that they offer.

In the process, we also filed a number of second-tier (and higher) support tickets, including for serious operating-system-level problems.

Luckily, we chose a hosting company that responds quickly to issues, and whose staff are familiar with the types of issues we encountered.

This isn’t something to take for granted. Not every hosting company has the ability to quickly and seamlessly transition a site through different packages on different servers, nor do they necessarily have tier 3 support staff who can address unusual support requests.

In this case, our conversations with the company suggest that they had never seen a new site with this level of load before; they still worked valiantly to assist us in keeping things running.

3. Shared Hosting, VPS, Dedicated, Cloud Hosting?

In our previous experience, when a hosting company sells somebody a dedicated server, the assumption is that the customer knows what they are doing and can handle most issues themselves. This holds even where an SLA (service level agreement) is in place, and it can seriously affect response times for trouble tickets.

As a result, our first inclination was to use a VPS service. Our decision was further supported by the level of backup provided by default with VPS packages at our chosen vendor. A similar backup service on a dedicated server of equivalent specifications appeared to be much more expensive.

One of the larger competitors of our customer’s site currently runs under a cloud hosting system. We are continuing to look at a variety of “grid” and cloud hosting options; the main issue is that it is extremely hard to estimate the monthly costs involved in cloud hosting, without having a good handle on how much traffic a site will receive. It isn’t unusual for hosting costs to scale in such a way as to make an otherwise profitable site lose money. That said, we will likely have to transition over to a cloud hosting service of some kind at some point in time.

4. Database Keys Are Your Friend

At one point, we reduced server load from over 100% down to around 20% simply by adding three keys (indexes) to the database. This is easy for many web developers to overlook (yes, I know: serious “desktop” application developers are used to thinking about this stuff).
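As a hypothetical illustration of the kind of change involved (the table, column, and index names below are made up, and in practice we ran the statements directly against MySQL rather than from PHP):

    <?php
    // Hypothetical names throughout.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');

    // Index the columns the per-second queries filter and sort on, so MySQL
    // can use index lookups instead of scanning the whole table.
    $pdo->exec('ALTER TABLE bids ADD INDEX idx_auction_time (auction_id, placed_at)');
    $pdo->exec('ALTER TABLE auctions ADD INDEX idx_status (status)');

    // Confirm with EXPLAIN that the hot query actually uses the new index.
    foreach ($pdo->query('EXPLAIN SELECT * FROM bids WHERE auction_id = 42
                          ORDER BY placed_at DESC LIMIT 10') as $row) {
        print_r($row);
    }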

5. Zend Framework is Good For Business Logic – But It Isn’t Fast

We initially built the entire code base using Zend Framework 1.10. Using Zend helped build the site in a lot less time than it would otherwise have taken, and it also allows for an extremely maintainable and robust code base. It isn’t particularly fast, however, since there’s significant overhead involved in everything it does.

After some experimentation, we removed any code that supported AJAX calls from Zend, and placed it into a set of “gateway” scripts that were optimized for speed. By building most of the application in Zend, and moving specific pieces of code that need to run quickly out of it, we found a compromise that appears to work – for now.
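In heavily simplified form, one of these gateway scripts looks something like the sketch below (the filename, table, and column names are invented for the example; the real scripts do considerably more validation and error handling):

    <?php
    // gateway/auction_status.php - simplified sketch of a lightweight AJAX
    // endpoint that skips the Zend Framework bootstrap and dispatch cycle.
    header('Content-Type: application/json');

    // Plain PDO connection; no autoloader, no MVC stack.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass', array(
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ));

    $auctionId = isset($_GET['auction_id']) ? (int) $_GET['auction_id'] : 0;

    $stmt = $pdo->prepare('SELECT id, status, current_price, ends_at
                             FROM auctions WHERE id = ?');
    $stmt->execute(array($auctionId));

    echo json_encode($stmt->fetch(PDO::FETCH_ASSOC));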

The next step appears to be to build some kind of compiled daemon to handle requests that need speed.

6. Javascript

Our mandate was to support several of the more common browsers currently in use (mid-2010), including Firefox, IE7-9, Opera, and – if feasible – Safari.

The site is extremely Javascript-intense in nature, although the scripting itself isn’t particularly complex.

We used jQuery as the basis for much of the coding, and then created custom code on top of it. Using a library – while not a magic solution in itself – makes cross-browser support much, much easier. We aren’t particularly picky about specific libraries, but we have used jQuery on a number of projects in the past couple of years, with generally good results.

Specific issues encountered included IE’s tendency to cache AJAX posts, which had to be resolved by tacking a randomized variable onto resources; this, unfortunately, doesn’t “play nice” with Google Page Speed (see below).
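On the server side, a complementary mitigation – not the randomized query-string trick itself, but something that can also be added to the PHP gateway scripts – is to send explicit no-cache headers with every AJAX response; a minimal sketch:

    <?php
    // Sketch: tell the browser (IE in particular) not to cache AJAX responses.
    // This complements, rather than replaces, the randomized query-string
    // parameter added on the client side.
    header('Cache-Control: no-cache, no-store, must-revalidate');
    header('Pragma: no-cache');
    header('Expires: 0');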

We also had a serious issue with scripts that performed animated transitions, which caused excessive client-side load (and thus poor perceived responsiveness), in addition to intermittently causing Javascript errors in IE.

Javascript debugging in IE isn’t easy at the best of times, and it is made more complex by our use of minify (see below) to compress script size. One tool that occasionally helped was Firebug Lite, which essentially simulates Firefox’s Firebug plugin in other browsers (but which can also sometimes change the behaviour of the scripts being debugged). The underlying issue is that IE does a poor job of pointing coders to exactly where a script crashed, and the error messages tend to be unhelpful. Debugging in IE basically boils down to a) downloading a copy of the minified resource in the form the browser sees it, b) using an editor with good row/column reporting (I often use Notepad++) to track down roughly where the error occurs, and c) putting in debug statements to try to isolate the problem. After working with Firebug for a while, this is an unpleasant chore.

7. Testing Server

Long before your site launches, set up a separate testing server with as close to a duplicate of the live environment as possible. Keep the code current (we usually try to use SVN along with some batch scripts to allow quick updating), and test EVERY change on the test site before pushing the code over to the live server. Simple, but frequently overlooked (I’m personally guilty on occasion).

8. CSS

Designers and web developers often think of CSS purely in terms of cross-browser compatibility. Building sites that actually work in major browsers goes without saying, and based on personal experience, CSS issues can lead to a lot of customer support calls (“help, the button is missing”) that could be easily avoided. In the case of this specific project, we actually had to remove or partially degrade some CSS-related features, in order to provide for a more uniform experience across browsers. Attempting to simulate CSS3 functionality using Javascript is not a solution for a heavy-traffic, speed-intensive site; we tried this, and in many cases had to remove the code due to poor performance.

An often overlooked CSS issue (which Google and Yahoo have started highlighting – see below) is render speed. A browser has to match every CSS selector against the document tree of elements, and writing selectors in an inefficient way can have a significant effect on the apparent page load time for users. It is well worth spending some time with Google Page Speed (or Yahoo’s YSlow) in order to optimize the CSS on your site for speed.

9. Why Caching Doesn’t Always Work

Caching technology can be a very useful way of obtaining additional performance. Unfortunately, it isn’t a magic bullet, and in some cases (our project specifically), it can not only hurt performance, but can actually make a site unreliable.

High traffic websites tend to fall into one of two categories:

On the one hand, there are sites such as Facebook, whose business model is largely based on advertising; what this means is that if user data isn’t completely, totally current and accurate, it is at most an annoyance (“where’s that photo I just uploaded?”). Facebook famously uses a modified version of memcached to handle much of their data, and this kind of caching is probably the only way they can (profitably) serve half a billion customers.

On the other hand, financial websites (think of your bank’s online portal, or a stock trading site) have business models that pertain directly to the user’s pocketbook. This means that – no matter how many users are online, or how large the volume of data – the information shown on the screen has to be both accurate and timely. You would not want to log in to your bank’s site and see an inaccurate account balance, right? In many cases, sites of this nature use a very different type of architecture from “social media” sites. Some banks actually run their websites on supercomputers in order to accommodate this.

Underlying the dichotomy above is the fundamental notion of what caching is all about: “write infrequently, view often”. Caches work best in situations where data is updated far less often than it is viewed.

The initial version of our code actually implemented memcached, in an attempt to reduce the number of (relatively expensive) database calls. The problem is that our underlying data changes so rapidly – many times per second, for a relatively small number of resources that are actively being viewed and changed by many users – that the cache had to be rewritten almost constantly. The result in practice was that some users were seeing out-of-date cached data at least some of the time. Abandoning caching in our specific case resolved these issues.
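For context, the read-through pattern we removed looked roughly like the sketch below (simplified, with an invented key name and placeholder connection details, using the PHP Memcached extension for illustration). With the auction row changing several times per second, even a one-second expiry meant some users were served stale state:

    <?php
    // Simplified version of the read-through caching we later removed.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211);
    $auctionId = 42;

    $key = 'auction_' . $auctionId;
    $auction = $memcached->get($key);

    if ($auction === false) {
        // Cache miss: read from the database and cache for one second.
        $stmt = $pdo->prepare('SELECT * FROM auctions WHERE id = ?');
        $stmt->execute(array($auctionId));
        $auction = $stmt->fetch(PDO::FETCH_ASSOC);
        $memcached->set($key, $auction, 1);
    }
    // With the row changing many times per second, anything read from the
    // cache during that one-second window may already be out of date.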

10. Speed Optimization

We used Google Page Speed in order to optimize our project. There is a similar competing tool from Yahoo (YSlow) as well. These tools provide a wealth of information about how to make websites load faster – in many cases significantly faster.

Among the many changes that we made to the site, based on the information from the tester, were the following:

a) Use minify to combine and compress Javascript and CSS files. No kidding – this works. Not only that, but if you load a large number of separate CSS files on each page, you can run into odd (and very hard to trace) problems in IE, which appears to only be able to handle roughly 30 external stylesheets per page. Compressing and combining these files using minify and/or the YUI Compressor can save you more than bandwidth.
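The gain is easy to see even with a hand-rolled version of the idea. The sketch below is not the Minify library’s actual API – it simply illustrates concatenating several (invented) CSS files and serving them as a single compressed response:

    <?php
    // combine_css.php - illustration only, not the Minify library's API.
    // The real tools also strip whitespace and rewrite relative URLs;
    // this just shows the basic win: one gzipped request instead of dozens.
    $files = array('css/reset.css', 'css/layout.css', 'css/auction.css');

    $combined = '';
    foreach ($files as $file) {
        $combined .= file_get_contents($file) . "\n";
    }

    header('Content-Type: text/css');
    ob_start('ob_gzhandler'); // gzip the output if the client supports it
    echo $combined;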

b) Use sprites to combine images into larger files. This does not work well in some cases (e.g. certain kinds of buttons), but the technique can save precious seconds of load time. We used a Firefox plugin called SpriteMe to automate this task, although we didn’t follow all of its suggestions.

c) Validate your HTML. Again, another “no brainer”. The load time saved by having valid HTML will surprise many readers. The process of validation is a nuisance, particularly if your site serves up dynamic, user-contributed content. Set aside a few hours for this process and just do it, though. It makes a difference.

11. Don’t Forget Algorithms 101

I took several courses on algorithm design at university, and then did nothing with that knowledge for more than a decade. Surprise, surprise – a complex, multi-user site actually needs proper thought in this regard.

One example from our experience – the data that tracks the status of an auction (whether it is currently running, paused, won, etc.) can be “touched” by 9 different pieces of code in the site, including the “gateway” code that responds to users, and background tasks.

It took significant effort to build a reliable algorithm that can determine when an auction has actually ended, and the task was complicated by the fact that some of the code runs relatively slowly, and it is quite possible for another operation to attempt to modify the underlying data while the first task is still operating. Furthermore, “locking” in this case may have negative ramifications for user experience, since we did not want to unduly reject or delay incoming “bids” from users.
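One building block that helps in situations like this (shown here in simplified form, with invented table, column, and status names) is making the state transition itself atomic: a single conditional UPDATE, so that only one of the competing pieces of code can actually be the one that closes the auction:

    <?php
    // Sketch of an atomic "close the auction" transition. Only one caller's
    // UPDATE will match the WHERE clause; every other process sees zero
    // affected rows and knows the auction was already closed elsewhere.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');
    $auctionId = 42;

    $stmt = $pdo->prepare("UPDATE auctions
                              SET status = 'won', closed_at = NOW()
                            WHERE id = ? AND status = 'running' AND ends_at <= NOW()");
    $stmt->execute(array($auctionId));

    if ($stmt->rowCount() === 1) {
        // This process won the race: safe to record the winner, notify users, etc.
    }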

Conclusions

  1. It is very hard to plan ahead of time for growth in a web environment. Sometimes steps taken specifically to address traffic (e.g. caching, in our case) can actually be detrimental. Adapting to growth can involve a surprising amount of trial-and-error experimentation.
  2. Using frameworks can be very helpful for writing maintainable code. Unfortunately, it’s sometimes necessary to work around them when specific optimization is needed. Proper documentation and comments can help – I try to write as if I’m explaining to somebody really dumb, years in the future, what is going on in my code – and then I’m often surprised when I need my own comments later on…
  3. Work with the right people. Not just your internal team, but also your hosting company etc. This can make a big difference when you are under pressure.
  4. Prepare yourself for periods of high stress. There’s not much you can do about this, unfortunately. In most cases, it is unlikely that you will actually have access to the resources you really need to get the job done. Make sure you schedule breaks too. It’s hard. Burnout is much harder though.