Category Archives: Software Development

Client-side game scoring with blockchains

If you’re reading this, there’s a good chance you (at least occasionally) play games online in your browser. You’ve probably noticed that client-side games, particularly multi-user ones, don’t perform as well as software installed on your computer (or on a dedicated gaming platform). This is at least partly because the scoring model for such games is typically hosted on the server, since JavaScript is too easy for users to access and modify themselves. The typical design for JavaScript (and also older Flash) games is to have them constantly communicate the player’s moves back to the server; the server determines scoring and other updates, and returns the results to the user. This introduces lots of opportunities for lag.

I haven’t had a chance to really think this out in detail, but what if a JavaScript-based game used a blockchain system instead?

Here’s what I’m thinking:

  • The server keeps track of a blockchain (and likely caches it, in order to reduce the size of transactions)
  • When the user loads the game, they receive a portion of the chain, along with the entire scoring model in JavaScript
  • As the user interacts with the game, additional entries are added to the blockchain to record those interactions
  • The user’s chain is periodically sent back to the server to check for cheating, and to keep all of the users in sync (a rough sketch of this verification step follows this list)
  • There would need to be some sort of mechanism for interchange of blockchain transactions between users, to keep the system honest
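
To make the verification step concrete, here is a minimal sketch in PHP of what the server-side check might look like. Everything here is an assumption for illustration – the block layout (move, time, previous hash, hash) and all function names are hypothetical, and this is a bare hash chain rather than a full blockchain implementation:

```php
<?php
// Sketch: verify that a client-submitted chain of moves is internally
// consistent. Each block is assumed to be an associative array:
// ['move' => ..., 'time' => ..., 'prev_hash' => ..., 'hash' => ...].

function blockHash(array $block)
{
    // Hash everything except the block's own hash field.
    return hash('sha256', json_encode([
        $block['move'],
        $block['time'],
        $block['prev_hash'],
    ]));
}

function verifyChain(array $chain, $trustedHash)
{
    // The first submitted block must link back to a hash the server
    // already trusts (e.g. the tail of its cached copy of the chain).
    $prev = $trustedHash;
    foreach ($chain as $block) {
        if ($block['prev_hash'] !== $prev) {
            return false; // broken link: possible tampering
        }
        if (blockHash($block) !== $block['hash']) {
            return false; // block contents were altered
        }
        $prev = $block['hash'];
    }
    return true;
}
```

Note that a well-formed chain only shows the move log wasn’t tampered with after the fact; the server (or peer clients) would still need to replay the moves against the scoring model to catch illegal moves – which is what the cross-checking between users in the last bullet is for.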

As I previously mentioned, I haven’t thought this out in great detail. Assuming the approach works, though, it could move a lot more of the code into the client and reduce client-server communication. That should speed things up significantly.

Firefox Toolbars – Some Tips

I’ve been working on a custom Firefox toolbar for a really cool project that I can’t talk about yet.

What I’ve been finding is that the developer documentation isn’t good: it’s inconsistent, frequently missing important details, and often hasn’t been updated to reflect changes.

I’m going to post a few things I’ve learned over the past few days that took far more time to figure out than they should have. Hopefully somebody else will benefit from my wasted time.

Working for Equity

Once in a while (actually every few weeks, give or take) somebody asks me if I’ll do a project for equity instead of cash.

My immediate response is “what’s your exit plan?”.

Usually this is met with a blank stare, which is when I follow my question up with “what I mean is, how do I sell those shares, and when?”. At this point, in 90% of cases, the other party is already starting to look panicky. I then usually politely excuse myself and leave.

It isn’t that I’m opposed in general to holding equity in a project that I’m working on. Far from it. The lack of an exit plan, however, implies a number of things about the person doing the asking:

  • No business plan (if they had one, they would probably have an exit plan)
  • No cash (this makes launching a business an uphill battle from the start – I know this from bitter personal experience)
  • No idea of valuation (and in turn probable lack of general business know-how)
  • Possible utter lack of respect for the developer (more on this below)

By asking a developer to work – possibly for months, or even years – without cash, the person isn’t paying much attention to how the developer will pay their bills in the interim, or how the developer will ever get paid for the project (i.e. by selling their shares).

What they’re saying is that they want the developer to assume all of the project risk in exchange – maybe – for some pieces of probably worthless paper. Even worse, if things go completely pear-shaped, the developer might wind up on the hook for company debts or legal issues. Even with paper in hand, the developer can still see their equity diluted or taken away outright – contracts can be tricky things.

Aside from all of the above, they also clearly haven’t thought through what happens when the developer runs out of cash (i.e. they leave, or they become unmotivated).

The converse to this situation is one in which a business has a clearly defined plan, cash on hand to pay contracts or salaries, and wishes to align staff with the overall goal – this is the only time when I would ever want to be holding equity in somebody else’s company.

Open peer-to-peer markets

The following is a crude first attempt to define a way for an online market to operate that is entirely decentralized (i.e. there is no central exchange).

In addition to describing some of the mechanisms that would allow such a market to operate, I am also calling for a) the establishment of a foundation or industry association to ensure that standards are created for the necessary systems, and b) the voluntary acceptance of some level of regulation (i.e. government) by the virtual market community. I’ll make cases for both below.

Currency as incentivization

I was going to write a short article on some of the challenges that virtual currencies face in obtaining main street acceptance, along with some possible solutions. Some of those solutions turned out to be interesting, highly “disruptive” business models, and it’s a bit premature to discuss them in an open forum.

The three phases of the internet

Forget what some people are calling Web3.0.

The first phase of the internet involved taking real world information, and moving it into a digital, connected format – i.e. making web pages.

The second phase of the internet involved taking that newly minted digital stuff, and bringing humanity into the picture (i.e. web pages that are “social”).

The third phase of the internet will involve taking “stuff” that was originally digital, and making it “live” in the real world. All that mobile phone geo-location stuff is just a tiny (and honestly, not very interesting) part of that.


The fourth phase of the internet is already upon us as well, and interestingly enough it’s as much about hardware as software. This phase involves breaking the physical constraints of the internet, allowing it to work seamlessly over ad-hoc, peer-to-peer wireless networks (i.e. no ISP and no phone company involved, except maybe for the long lines). It also involves replacing TCP/IP with DTN (delay-tolerant networking) – especially if humanity is going to do anything useful in the rest of the solar system.

The danger of lock-in

Lock-in refers to a situation where prior decisions make it very difficult to change things later on. Lock-in exists in many areas, but it is in the technical sphere where it is often felt hardest. A bad decision today can make life difficult for oneself – and many, many others – for a great many years.

Evaluating Project Risk

[Image: Risk Factory, by kyz – Creative Commons]

I’m interested to hear feedback regarding how other development companies measure project risk.

We currently track three general classes of risk (although in a very simplistic way) for a project:

1) Technical risk – how likely is it that we will run into something that we don’t know how to solve (or that can’t be solved as stated – or is generally insoluble).

2) Bottom line risk – how likely is it that the project will cost too much to build (i.e. it won’t be profitable). Note that even projects that are not fixed cost (i.e. are billed on an hourly or some other type of flexible basis) can run into issues if they start to cost more than some unstated budget on the customer’s end. This type of risk is frequently the largest concern on our end of things, because (like many service organizations) our largest expense is staffing.

3) Customer risk – I’ve had customers go out of business, vanish, fire us, etc. in the past. There are frequently warning signs from the start that a particular customer may be riskier than usual. We’ve started tracking issues in a database to try to become more adept at evaluating this sort of risk.

How does your company measure and evaluate risk? Are there relevant categories missing from my list? (VaR, Black-Scholes, etc. aren’t really relevant to software development – I think.)

Heavy Traffic – Lessons Learned

In the past 15 or 16 years, I’ve worked on a number of websites with fairly significant traffic (measured mostly in unique daily visitors – there are many ways to measure traffic). In one specific case, the traffic on a well-known author’s website spiked significantly (several thousand unique visitors per day) after his appearance on a television talk show. That website, although database-driven, consisted primarily of articles, along with a store – and even on shared hosting, this wasn’t a problem.

Recently, my company built an online “live auction” website for a customer, a project which posed a number of interesting challenges and learning experiences (the hard way, of course) regarding how to build a site that handles heavy traffic. The nature of the site requires that all users see information that is current and accurate – resulting in AJAX calls that run every second, for every user. This is the first project I have worked on that required serious optimization work; typically, even the heaviest custom development my team takes on is focused on business use cases rather than on speed or algorithm design. Not so here.

The “coming soon” page, long before the site launched, was already receiving several hundred unique visitors per day (based on Google Analytics). The site launched with more than 500 registered users (pre-registered via the coming soon page), and traffic spiked heavily following launch. The initial traffic spike actually forced us to close the site for several days so that our team could rework code. The re-launch was preceded by several beta tests involving registered users. Bear in mind that on most sites, an individual registered user isn’t responsible for much server load; on this particular site, each user receives at least one update per second, each of which may involve multiple database calls.

The following is a description of some of the issues we encountered, and how they were addressed or mitigated. In some cases, work is ongoing, in order to adapt to continued growth. In many cases, the challenges that we encountered forced me to revise some assumptions I had held about how to approach traffic. Hopefully the following lessons will save a few people the weeks of sleep deprivation that I went through in order to learn them.

Project Description:

  • Penny Auction website
  • Technology: PHP (Zend Framework), Javascript
  • Server: Various VPS packages (so far)
  • Description of traffic: All users receive one data update per second; there are additional data updates every 3 seconds, and once per minute.

1. Don’t Rely Too Much On Your Server

Many web developers write code that simply assumes the server will work properly. The problem is that under heavy load, it isn’t at all uncommon for servers to behave in unexpected ways. Examples include file resources dropping, database calls being dropped – sometimes without intelligible error codes – and even system time being unreliable. Here are a couple of specific examples we encountered:

a) PHP time() – When developing in PHP, it is very common to rely on calls such as time() (which returns the system time as a UNIX timestamp) for algorithms to work properly. Our setup involved a VPS with multiple CPUs dedicated to our use, and the ability to “burst” to more CPUs as needed. As it turned out, whenever our server went into burst mode, the additional CPUs reported different system times than “our” CPUs did. This is probably an issue with the underlying VPS software, but we didn’t have the luxury of investigating fully. It meant that rows were frequently (as in: about a quarter of the time) saved to the database in the wrong order, which is a serious issue for an auction website! When possible, generate the timestamp within the SQL itself (i.e. MySQL’s NOW() function) instead. Fixing the system time on the other VPS partitions wasn’t feasible, since they “belonged” to a different customer.
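
As a sketch of the workaround (the table, columns, and connection details here are hypothetical stand-ins), let the database assign the timestamp, so that row ordering depends on a single clock:

```php
<?php
// Example values; in real code these come from the request.
$userId = 42;
$amount = 1.00;

$pdo = new PDO('mysql:host=localhost;dbname=auction_demo', 'user', 'pass');

// Fragile under CPU bursting: time() may be answered by a CPU whose
// clock disagrees with the others.
// $stmt = $pdo->prepare(
//     'INSERT INTO bids (user_id, amount, created_at) VALUES (?, ?, ?)'
// );
// $stmt->execute([$userId, $amount, date('Y-m-d H:i:s', time())]);

// Safer: one clock - the database server's.
$stmt = $pdo->prepare(
    'INSERT INTO bids (user_id, amount, created_at) VALUES (?, ?, NOW())'
);
$stmt->execute([$userId, $amount]);
```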

b) Not every database call will succeed. Under heavy load, it isn’t at all unusual for a SQL insert or update statement to be dropped. Unless your code checks for errors and handles retries properly, your site will not work.
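
For example, a bounded retry wrapper – a sketch that assumes PDO is configured to throw exceptions (PDO::ERRMODE_EXCEPTION), not our actual production code – might look like this:

```php
<?php
// Sketch: retry a write a bounded number of times, with a short backoff,
// instead of assuming the first attempt succeeded.
function executeWithRetry(PDO $pdo, $sql, array $params, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return true;
        } catch (PDOException $e) {
            error_log("Attempt {$attempt} failed: " . $e->getMessage());
            if ($attempt === $maxAttempts) {
                throw $e; // let the caller handle a genuinely failed write
            }
            usleep(100000 * $attempt); // crude backoff: 0.1s, 0.2s, ...
        }
    }
    return false;
}
```

The important part isn’t the exact backoff; it’s that every write is checked, and a failure surfaces somewhere your code can react to it.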

2. Pick Your Hosting Company Wisely

We launched the project on one of our hosting company’s smaller VPS packages. We quickly went to one of the middle-range packages, discovered it was also insufficient, and then switched to the largest package that they offer.

In the process, we also filed a number of second-tier or higher support tickets, including some for serious operating-system-level problems.

Luckily, we chose a hosting company that responds quickly to issues, and whose staff are familiar with the types of issues we encountered.

This isn’t something to take for granted. Not every hosting company has the ability to quickly and seamlessly transition a site through different packages on different servers, nor do they necessarily have tier 3 support staff who can address unusual support requests.

In this case, our conversations with the company seem to indicate that they have never seen a new site with this level of load in the past; they still worked valiantly to assist us in keeping things running.

3. Shared Hosting, VPS, Dedicated, Cloud Hosting?

In our previous experience, when a hosting company sells somebody a dedicated server, the assumption is that the customer knows what they are doing and can handle most issues. This occurs even where an SLA (service level agreement) is in place, and it can seriously affect response times for trouble tickets.

As a result, our first inclination was to use a VPS service. Our decision was further supported by the level of backup provided by default with VPS packages at our chosen vendor. A similar backup service on a dedicated server of equivalent specifications appeared to be much more expensive.

One of the larger competitors of our customer’s site currently runs under a cloud hosting system. We are continuing to look at a variety of “grid” and cloud hosting options; the main issue is that it is extremely hard to estimate the monthly costs involved in cloud hosting, without having a good handle on how much traffic a site will receive. It isn’t unusual for hosting costs to scale in such a way as to make an otherwise profitable site lose money. That said, we will likely have to transition over to a cloud hosting service of some kind at some point in time.

4. Database Keys Are Your Friend

At one point, we reduced server load from over 100% down to around 20% by adding three keys (indexes) to the database. This is easy for many web developers to overlook (yes I know, serious “desktop” application developers are used to thinking about this stuff).
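
As an illustration (the table, columns, and query are hypothetical stand-ins for ours): add a composite index covering a hot WHERE clause, then confirm with EXPLAIN that MySQL actually uses it –

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=auction_demo', 'user', 'pass');

// Without an index, every per-second status poll scans the whole table.
$pdo->exec('CREATE INDEX idx_status_ends ON auctions (status, ends_at)');

// EXPLAIN should now report the new key rather than a full scan (type=ALL).
$rows = $pdo->query(
    "EXPLAIN SELECT id, ends_at
       FROM auctions
      WHERE status = 'running' AND ends_at > NOW()"
);
foreach ($rows as $row) {
    print_r($row);
}
```

The EXPLAIN step matters: an index the optimizer ignores costs you write performance and buys you nothing.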

5. Zend Framework is Good For Business Logic – But It Isn’t Fast

We initially built the entire code base using Zend Framework 1.10. Using Zend helped build the site in a lot less time than it would otherwise have taken, and it also allows for an extremely maintainable and robust code base. It isn’t particularly fast, however, since there’s significant overhead involved in everything it does.

After some experimentation, we removed any code that supported AJAX calls from Zend, and placed it into a set of “gateway” scripts that were optimized for speed. By building most of the application in Zend, and moving specific pieces of code that need to run quickly out of it, we found a compromise that appears to work – for now.
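
A “gateway” script in this sense is just a thin PHP file that skips the framework bootstrap entirely. A minimal sketch (the endpoint, table, and connection details are hypothetical):

```php
<?php
// gateway/status.php - framework-free AJAX endpoint: connect, query,
// emit JSON. No Zend bootstrap, no routing, no view layer.
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=auction_demo', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->query(
    "SELECT id, current_price, ends_at
       FROM auctions
      WHERE status = 'running'"
);

echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
```

The trade-off is deliberate: business logic stays inside Zend, where maintainability matters, while the per-second polling endpoints avoid the framework’s dispatch overhead.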

The next step appears to be to build some kind of compiled daemon to handle requests that need speed.

6. Javascript

Our mandate was to support several of the more common browsers currently in use (mid-2010), including Firefox, IE7-9, Opera, and – if feasible – Safari.

The site is extremely Javascript-intense in nature, although the scripting itself isn’t particularly complex.

We used jQuery as the basis for much of the coding, and then created custom code on top of it. Using a library – while not a magic solution in itself – makes cross-browser support much, much easier. We’re not particularly picky about specific libraries, but we have used jQuery on a number of projects over the past couple of years, with generally good results.

Specific issues encountered included IE’s tendency to cache AJAX posts, which had to be resolved by tacking a randomized variable onto resources; this, unfortunately, doesn’t “play nice” with Google Page Speed (see below).

We also had a serious issue with scripts that perform animated transitions, which caused excessive client-side load (and thus poor perceived responsiveness), in addition to intermittently causing Javascript errors in IE.

Javascript debugging in IE isn’t easy at the best of times, and it is made more complex by our usage of minify (see below) to compress script size. One tool that occasionally helped was Firebug Lite, which essentially simulates Firefox’s Firebug plugin in other browsers (but which can also sometimes change the behaviour of the scripts being debugged). The underlying issue is that IE does a poor job of pointing coders to exactly where a script crashed, and the error messages tend to be unhelpful. Debugging in IE basically boils down to a) downloading a copy of the minified resource in the form the browser sees it, b) using an editor with good row/column reporting (I often use Notepad++) to track down roughly where the error occurs, and c) putting in debug statements to try and isolate the problem. After working with Firebug for a while, this is an unpleasant chore.

7. Testing Server

Long before your site launches, set up a separate testing server with as close to a duplicate of the live environment as possible. Keep the code current (we usually try to use SVN along with some batch scripts to allow quick updating), and test EVERY change on the test site before pushing the code over to the live server. Simple, but frequently overlooked (I’m personally guilty on occasion).

8. CSS

Designers and web developers often think of CSS purely in terms of cross-browser compatibility. Building sites that actually work in major browsers goes without saying, and based on personal experience, CSS issues can lead to a lot of customer support calls (“help, the button is missing”) that could be easily avoided. In the case of this specific project, we actually had to remove or partially degrade some CSS-related features, in order to provide for a more uniform experience across browsers. Attempting to simulate CSS3 functionality using Javascript is not a solution for a heavy-traffic, speed-intensive site; we tried this, and in many cases had to remove the code due to poor performance.

An often overlooked CSS issue (which Google and Yahoo have started plugging – see below) has to do with render speed. Browsers essentially treat a document as a tree of elements, and specifying selectors inefficiently can have a significant effect on the apparent page load time for users. It is well worth your while to spend some time with Google Page Speed (or Yahoo’s YSlow) to optimize the CSS on your site for speed.

9. Why Caching Doesn’t Always Work

Caching technology can be a very useful way of obtaining additional performance. Unfortunately, it isn’t a magic bullet, and in some cases (ours specifically), it can not only hurt performance, but actually make a site unreliable.

High traffic websites tend to fall into one of two categories:

On the one hand, there are sites such as Facebook, whose business model is largely based on advertising; what this means is that if user data isn’t completely, totally current and accurate, it is at most an annoyance (“where’s that photo I just uploaded?”). Facebook famously uses a modified version of memcached to handle much of their data, and this kind of caching is probably the only way they can (profitably) serve half a billion customers.

On the other hand, financial types of websites (think of your bank’s online portal, or a stock trading site) have business models that pertain directly to the user’s pocketbook. This means that – no matter how many users are active, or how large the volume of data – the information shown on the screen has to be both accurate and timely. You would not want to log in to your bank’s site and see an inaccurate account balance, right? In many cases, sites of this nature use a very different type of architecture from “social media” sites. Some banks actually run their websites on supercomputers to accommodate this.

Underlying the dichotomy above is the fundamental notion of what caching is all about: “write infrequently, view often”. Caches work best in situations where there are far fewer updates to data than views.

The initial version of our code actually implemented memcached, in an attempt to reduce the number of (relatively expensive) database calls. The problem is that our underlying data changes so rapidly (many times per second, for a relatively small number of resources that are actively being viewed and changed by many users) that cache entries were being invalidated and rewritten extremely frequently. In practice, some users were seeing out-of-date cached data at least some of the time. Abandoning caching in our specific case resolved these issues.
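
For reference, the read-through pattern we had been using looks roughly like this (a sketch with hypothetical names, not our production code):

```php
<?php
// Sketch: classic read-through caching. This works well when reads vastly
// outnumber writes; it breaks down when a row changes many times per
// second, because readers keep being handed a just-stale copy.
function getAuction(Memcached $mc, PDO $pdo, $id)
{
    $key = "auction:{$id}";

    $cached = $mc->get($key);
    if ($cached !== false) {
        return $cached; // may already be stale under heavy write load
    }

    $stmt = $pdo->prepare('SELECT * FROM auctions WHERE id = ?');
    $stmt->execute([$id]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    $mc->set($key, $row, 1); // even a 1-second TTL was too long for us
    return $row;
}
```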

10. Speed Optimization

We used Google Page Speed to optimize our project (Yahoo offers a similar competing product, YSlow). These tools provide a wealth of information about how to make websites load faster – in many cases significantly faster.

Among the many changes that we made to the site, based on the information from the tester, were the following:

a) Use minify to combine and compress Javascript and CSS files. No kidding – this works. Not only that, but if you load a large number of CSS files on each page, you can run into odd (and very hard to trace) problems in IE, which appears to be able to handle only around 30 external CSS files on a page. Compressing and combining these files using minify and/or the YUI Compressor can save you more than bandwidth.

b) Use sprites to combine images into larger files. This does not work well in some cases (i.e. some kinds of buttons), but the technique can save precious seconds of load time. We used a Firefox plugin called SpriteMe to automate this task, although we didn’t follow all of its suggestions.

c) Validate your HTML. Again, another “no brainer”. The load time saved by having valid HTML will actually surprise many readers. The process of validation is a nuisance, particularly if your site serves up dynamic, user-contributed content. Set aside a few hours for the process and just do it. It makes a difference.

11. Don’t Forget Algorithms 101

I took several courses on algorithm design at university, and then did nothing with that knowledge for more than a decade. Surprise, surprise – a complex, multi-user site actually needs proper thought in this regard.

One example from our experience – the data that tracks the status of an auction (i.e. whether it is currently running, paused, won, etc.) can be “touched” by nine different pieces of code in the site, including “gateway” code that responds to users, and background tasks.

It took significant effort to build a reliable algorithm to determine when an auction has actually ended. The task was complicated by the fact that some of the code runs relatively slowly, and it is quite possible for another operation to attempt to modify the underlying data while the first task is still running. Furthermore, “locking” in this case could have negative ramifications for the user experience, since we did not want to unduly reject or delay incoming “bids” from users. A simplified sketch of one workable approach follows.
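
One approach that fits these constraints (shown as a simplified sketch with hypothetical names, not our exact production logic) is to push the final decision into a single conditional UPDATE, and let the database arbitrate which process actually ends the auction:

```php
<?php
// Sketch: whichever of the competing code paths runs this first wins;
// every other path sees zero affected rows and does nothing. No
// long-lived lock is held, so incoming bids aren't delayed.
function endAuctionIfDue(PDO $pdo, $auctionId)
{
    $stmt = $pdo->prepare(
        "UPDATE auctions
            SET status = 'ended'
          WHERE id = ?
            AND status = 'running'
            AND ends_at <= NOW()"
    );
    $stmt->execute([$auctionId]);

    // Exactly one process observes an affected row, so winner
    // determination and notifications can safely run exactly once.
    return $stmt->rowCount() === 1;
}
```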


Final Thoughts

  1. It is very hard to plan ahead of time for growth in a web environment. Sometimes steps taken specifically to address traffic (i.e. caching, in our case) can actually be detrimental. Adapting to growth can involve a surprising amount of trial-and-error experimentation.
  2. Frameworks are very helpful for writing maintainable code. Unfortunately, it’s sometimes necessary to work around them when specific optimization is needed. Proper documentation and comments help – I try to write as if I’m explaining my code, years in the future, to somebody really dumb – and then I’m often surprised when I need my own comments later on…
  3. Work with the right people. Not just your internal team, but also your hosting company etc. This can make a big difference when you are under pressure.
  4. Prepare yourself for periods of high stress. There isn’t much you can do about this, unfortunately. In most cases, it is unlikely that you will actually have access to the resources you really need to get the job done. Make sure you schedule breaks too. It’s hard. Burnout is much harder though.

Microsoft – Twitter Deal

Nathan forwarded me this link from Mashable, with the subject line prefaced with the word “HUGE”.

From what I can tell, it looks like Microsoft is finally starting to put together the pieces of an overall web strategy: determine what Google would like to do and put roadblocks in their way. Hence the previous Yahoo deal.

It’s obviously far too early to see if this helps them out. I’m fairly sure, though, that it means search engines will be displaying a lot more “current” or trending data pulled from profiles and micro-blogging posts.
