Where is the money going next?

Flickr Creative Commons: cpeaO

Ever wondered how much money there is in the world?

The answer is that nobody really knows for sure, but there are various estimates that involve esoteric calculations.

The thing is though – the total amount of money in the world changes relatively slowly (i.e. it is somewhat inelastic).

New inventions and companies add a bit to the mix.

Major stock market crashes remove some value on the other hand – but less than you might expect. What actually tends to happen when a particular market crashes is that money moves out of that market and into another one. It’s somewhat like one of those animal-shaped balloons – you squeeze on one part, and the air goes into a different part. This is what has been driving speculation in commodities over the past few years; money moved out of markets like mortgage-backed paper, bonds and stocks, and into commodities like oil and gold, because investors don’t want to just sit on cash – they want (or in many cases need – e.g. a fund that invests other people’s money) to make a return on their capital.

Major fortunes have been made by people who are good at working out ahead of time where the money is going to go next. Soros made huge sums of money in currency exchange (most famously betting against the Pound, but also in Thailand and other places), because he figured out where the money was going to go ahead of time.

Here’s the thing: it’s looking like the commodity rally is slowing down. Maybe. It’s hard to call that sort of thing. Certainly various governments are under political pressure to make it uncomfortable to speculate in certain key commodities (e.g. oil), and the Chicago exchange has been raising the margins required to buy and sell paper (probably under pressure from the US government), which may tamp down that market a bit.

So where is the money going to go next? I don’t think it’s into bonds, unless I’m completely off base – the inflation/lending rate picture is too hard to call yet. The stock market seems to be topping out for now, although it may resume growth soon. Nobody wants to touch real estate right now – again, too much uncertainty. So is the money going to head overseas? Will investors be willing to go all cash and sit on the sidelines for a bit? My guess is that we’ll see some M&A activity driven by low stock prices, combined with impatient investors holding a lot of cash. I also wouldn’t be surprised if we see a lot of instability in currency markets, as speculators get pushed out of other markets. Again, it’s a murky situation and nobody really knows for sure.

The point of the vague meandering above is as follows: I have some ideas regarding how a computer could track global flows of money over time and try to estimate the probabilities of different kinds of outcomes. Something like a “Soros in a Box”, if you will. Not completely original – I know there are any number of hedge funds with proprietary systems like this. I’d be interested in bouncing ideas off of people smarter than myself though…

 

What should Microsoft do with Skype?

The recently announced purchase of Skype by Microsoft wasn’t something I would have anticipated at all – but it actually does make sense.

Aside from blocking some of its competitors from making the same purchase, this has some interesting strategic implications. In recent years, Microsoft’s profit has come from two places – games (i.e. Xbox), which is what the consumer notices, and the enterprise market (which it dominates).

Look for Microsoft to do three things (aside from increasing advertising) to fully utilize this purchase:

1) Expand Skype’s enterprise functionality – Skype has some teleconferencing capabilities, but they could definitely be improved (I can think of several competing products that are superior in this area). In addition, better integrating this functionality into the Outlook / Exchange / SharePoint stack could be very beneficial for corporate customers. This might imply that they eventually kill off some of their in-house software, such as Live.

2) Improve Skype’s APIs – allowing games to use embedded voice and video chat via APIs could be a powerful enhancement for the Xbox platform.

3) Bring back Skype for Windows Mobile (which also makes sense vis-a-vis Microsoft’s alliance with Nokia).

The hidden challenges of channel marketing

Channel marketing is an approach for growing sales by developing relationships with other companies (i.e. channels) that do the hard part of selling for you. One example you’ve probably seen is affiliate marketing, where third parties sign up to provide leads in exchange for commissions. There are many other examples, from component manufacturing to distribution companies and retail.

What is seldom discussed in case studies on the topic (or sales pitches from affiliate companies, for that matter) is that while a channel marketing approach can be effective in triggering rapid sales growth, it also has some serious challenges that need to be addressed if you are going to succeed:

a) Pricing power – if somebody is sending you business in volume, chances are they will push very hard to get the best price possible (i.e. your margin will be lower). In addition, raising your prices later on may be challenging, particularly if a given channel represents a large volume of your business (and a useful channel almost always will).

b) Brand – if your “real customers” are actually your customer’s customers, chances are they’ve never heard of you. Unlike Intel, with their successful “Intel Inside” branding exercise, most companies that rely heavily on channels do not have the budget or marketing savvy to create a national television marketing campaign. Nor will their customers permit them to succeed (Intel was lucky that Dell signed on).

c) Inability to choose business – this is a particular issue in the service industry, where success may depend on the ability to turn down business that is not a good fit. It becomes difficult to turn away a project, for instance, when a channel that represents a significant percentage of revenues demands that it be undertaken.

d) Potential conflict of interest – channel marketing can increase the potential for conflicts of interest in several ways: the customers of your customers may wish to approach you directly; paying commissions may result in business being directed inappropriately; and differential pricing may anger your channels if your direct customers obtain better pricing.

e) Excess growth – a channel approach may result in an inability to control the rate of growth, which has any number of inherent follow-on issues.

The take-away is that channel marketing, while a powerful tool that can be used to fuel growth (and also provide an entrée to otherwise inaccessible markets), does have some inherent risks, and these must be managed with care and foresight if this strategy is to succeed.

What’s next for Cisco?

While I’m on the topic of tech stocks, I was reading the comments on the following story earlier today (article). The general gist seems to be that Cisco has lost its way, and that its low valuation of late is part of an overall downward trend.

My immediate thought is that at its current valuation, and with its huge cash reserves (never mind market share, product lineup, patents etc), Cisco is actually a potential target for a takeover. The first candidate that came to mind was HP, but they’re unlikely to risk antitrust action (they bought 3Com a while back). A more likely candidate would be Oracle, who appear to be positioning themselves as HP’s most immediate competitor. I’m not sure I’m happy about the thought from a consumer’s perspective, but an Oracle-Cisco merger might make good business sense.

Time to buy Google?

Google’s share price has taken a beating lately, due to escalating costs (primarily R&D) and the CEO switch. About a year ago I wrote that Google, while a well-run and highly profitable company, was too expensive. At its current (8 April 2011) P/E of under 20, and with both increasing revenue and forward-thinking R&D, I think it’s now time to reverse course and call GOOG a buy and hold.

Please note: the author does not currently hold a position in GOOG, and is not making a recommendation regarding other people’s investment activities!

When will virtual currencies be useful?

I currently have small amounts of money floating around in a variety of virtual currencies. In some cases, I can convert those currencies to other virtual currencies or to real-world money (e.g. there’s a slow process to move PayPal money into my bank account).

It occurs to me though that it would be very useful if I could pay real world bills (think groceries or mortgage) directly using virtual currency.

Before that can happen, there would need to be a lot more transparency (no grocery store will accept magicbuxx if they don’t know how much they are worth, or whether they can in turn get value out of them), and a whole lot of big institutions like banks and payment portals would need to sign on too. There would also need to be physical mechanisms that can transfer the payments (the new mobile payment technology that is slowly being adopted by cellphone manufacturers would help here).

I wonder how we can make that happen. It would be very nice to be able to go to a restaurant with a pile of Facebook credits, or Bitcoins.

Evaluating Project Risk

Risk Factory - by kyz - flickr.com Creative Commons

I’m interested to hear feedback regarding how other development companies measure project risk.

We currently track three general classes of risk (although in a very simplistic way) for a project:

1) Technical risk – how likely is it that we will run into something that we don’t know how to solve (or that can’t be solved as stated – or is generally insoluble).

2) Bottom line risk – how likely is it that the project will cost too much to build (i.e. it won’t be profitable). Note that even projects that are not fixed cost (i.e. are billed on an hourly or some other type of flexible basis) can run into issues if they start to cost more than some unstated budget on the customer’s end. This type of risk is frequently the largest concern on our end of things, because (like many service organizations) our largest expense is staffing.

3) Customer risk – I’ve had customers go out of business, vanish, or fire us in the past. There are frequently warning signs from the start that a particular customer may be riskier than usual. We’ve started tracking issues in a database to try to become more adept at evaluating this sort of risk.
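
The tracking itself doesn’t need to be elaborate. Purely as an illustration (the field names and the 1-to-5 scale below are invented for the sketch, not a description of our actual database), a per-project risk record could be as simple as:

    <?php
    // Illustrative sketch of a per-project risk record.
    // Each risk class gets a 1 (low) to 5 (high) score plus free-form notes.
    class ProjectRiskAssessment
    {
        public $projectId;
        public $technicalRisk;   // Can we actually solve the problem as stated?
        public $bottomLineRisk;  // Will it cost more to build than it earns?
        public $customerRisk;    // Warning signs about the customer themselves
        public $notes;

        public function __construct($projectId, $technical, $bottomLine, $customer, $notes = '')
        {
            $this->projectId      = $projectId;
            $this->technicalRisk  = $technical;
            $this->bottomLineRisk = $bottomLine;
            $this->customerRisk   = $customer;
            $this->notes          = $notes;
        }

        // Crude overall score: weight bottom-line risk a little more heavily,
        // since staffing cost is usually the biggest exposure.
        public function overallScore()
        {
            return ($this->technicalRisk + 2 * $this->bottomLineRisk + $this->customerRisk) / 4;
        }
    }

    $risk = new ProjectRiskAssessment(42, 2, 4, 3, 'Fixed bid; customer slow to pay last time');
    echo $risk->overallScore(); // 3.25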

How does your company measure and evaluate risk? Are there relevant categories missing from my list? (VaR, Black-Scholes, etc. aren’t really relevant to software development – I think.)

HP Needs a BHAG (Big Hairy Audacious Goal)

Flickr Creative Commons - Taken by kevindooley

A number of years ago, I read a book called “Built to Last”, by Jim Collins and Jerry Porras. The book, a classic of the genre, discusses a number of companies that the authors feel to be “visionary” in nature. One of those companies is HP. The founders of the company built not just a company, but also a coherent internal culture, commonly called the “HP Way”. This has led to the company being greatly admired in business circles.

The last few years have been rough on HP’s external image, in large part – I believe – unfairly. The past week has seen the second sudden departure of its CEO in only a few years. Both were due to scandal. While profitability has not been hurt (in fact HP is doing better than ever, with much credit to its recently departed CEO!), the stock has lately been pummeled in the markets.

Some of the commentary that I’ve read describes the most recent tenure as one of building a solid financial foundation for the company. Which leads me to my point. What HP needs in its new leader is a vision for where the company should be moving technologically. Not just a specific set of goals, but something that is going to put fire in their bellies (and enthral their customers). In short, they need a BHAG – a Big Hairy Audacious Goal.

One possibility that comes to mind: HP is one of the largest manufacturers of electronics (both consumer and business) in the world. If they made a decision that in 3 to 5 years’ time, every single device that they manufactured would contain a wireless mesh radio, they could theoretically blanket the entire world with free, decentralized, high-speed internet connections. And by implication, free telephony and broadcast media. Yes, there are still big technical issues to address, and wireless mesh networks are still very much the realm of techy enthusiasts (and the US military, and also to some extent Google). But that’s the point of a BHAG. Yes, the telecom industry would scream (including likely some of HP’s board members – hey, I’m just sayin’) as their entire business model evaporated. Oh, and Apple might be in trouble as well – they make money on the telecom contracts for iPhones, not on the hardware. But imagine the sales pitch to consumers – buy our printers, our laptops, our telephones, and never pay for internet, telephone or cable TV ever again. Nice, eh?

Here’s another possibility: HP is already widely known for its environmentally friendly policies, and especially for its experience handling and recycling plastics. Imagine the effect a Fortune 500 company (with $100 billion plus per year in revenue) could have if it backed a project like WHIM Architecture’s Recycled Island. WHIM are trying to gather the waste plastic floating in the middle of the Pacific Ocean and turn it into habitable land. I don’t know with any certainty whether their economic projections are feasible, but there is potential for large profits from this type of venture.

There’s no doubt that there are any number of highly talented people who could step into the chief executive role at HP. Let’s hope that whoever they choose will bring this kind of vision to the table, and that this remarkable company can quickly move beyond this temporary setback.

Faceted Social Networks

I just found an interesting slideshow via Slashdot, on how real-life social graphs work, and why current social media websites don’t do a good job of supporting them – http://www.slideshare.net/padday/the-real-life-social-network-v2.

The gist is that people’s “real life” social networks are highly faceted in nature, and the resulting online interactions can be jarring. There’s some food for thought here.

Heavy Traffic – Lessons Learned

In the past 15 or 16 years, I’ve worked on a number of websites that had fairly significant traffic (mostly measured in unique daily visitors – there are many ways to measure traffic). In one specific case, the traffic on a well-known author’s website spiked significantly (several thousand unique visitors per day) after his appearance on a television talk show. The website, although database driven, primarily consisted of articles, along with a store – and even on shared hosting, this wasn’t a problem.

Recently, my company built an online “live auction” website for a customer, a project which posed a number of interesting challenges and learning experiences (the hard way, of course) regarding how to build a site that has heavy traffic. In this case, the nature of the website requires that all users see information that is current and accurate – resulting in a need for AJAX calls that run repeatedly on a per second basis per user. This project is the first one that I have worked on that required serious optimization work; typically even the heaviest custom development that my team works on is primarily focused on business use cases rather than things like speed or algorithm design; not so here.

The “coming soon” page, long before the site was launched, already received several hundred unique visitors per day (based on Google Analytics). The site launched with more than 500 registered users (pre-registration via the coming soon page), and traffic spiked heavily following launch. The initial traffic spike actually forced the site to close for several days, in order for our team to rework code. The re-launch was preceded by several Beta tests that involved registered users. Bear in mind that a registered user on most sites isn’t individually responsible for much server load. On this particular site, each user is receiving at least one update per second, each of which may involve multiple database calls.

The following is a description of some of the issues we encountered, and how they were addressed or mitigated. In some cases, work is ongoing, in order to adapt to continued growth. In many cases, the challenges that we encountered forced me to revise some assumptions I had held about how to approach traffic. Hopefully the following lessons will save a few people the weeks of sleep deprivation that I went through in order to learn them.

Project Description:

  • Penny Auction website
  • Technology: PHP (Zend Framework), Javascript
  • Server: Various VPS packages (so far)
  • Description of traffic: All users receive one data update per second; there are additional data updates every 3 seconds, and once per minute.

1. Don’t Rely Too Much On Your Server

Many web developers build code that simply assumes the server will work properly. The problem is that under heavy load, it isn’t at all uncommon for servers to behave in unexpected ways. Examples include file handles being dropped, database calls failing – sometimes without intelligible error codes – and even the system time being unreliable. The following are a couple of specific examples we encountered:

a) PHP time() – When developing in PHP, it is very common to rely on function calls such as time() (which returns the system time as a UNIX timestamp) for algorithms to work properly. Our setup involved a VPS with multiple CPUs dedicated to our use, and the ability to “burst” to more CPUs as needed. As it turned out, whenever our server went into burst mode, the additional CPUs reported different system times than “our” CPUs did. This is probably an issue with the underlying VPS software, but we didn’t have the luxury of investigating fully. It meant that rows were frequently (as in: about a quarter of the time) saved to the database in the wrong order, which is a serious issue for an auction website! When possible, generate the timestamp within the SQL itself (e.g. MySQL’s NOW() or CURRENT_TIMESTAMP) instead. Fixing the system time on the other VPS partitions wasn’t feasible, since they “belonged” to a different customer.
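
As a rough sketch of the fix (the table and column names here are invented for the example, not our actual schema), letting the database server supply the timestamp means a single clock decides row ordering, regardless of which CPU handled the PHP request:

    <?php
    // Sketch only: let MySQL assign the timestamp instead of trusting PHP's time().
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');

    $auctionId = 42;   // illustrative values
    $userId    = 1001;

    // Fragile under our setup: the value of time() depended on which CPU ran the request.
    // $stmt = $pdo->prepare('INSERT INTO bids (auction_id, user_id, placed_at)
    //                        VALUES (?, ?, FROM_UNIXTIME(?))');
    // $stmt->execute(array($auctionId, $userId, time()));

    // More robust: the database server's clock supplies the timestamp.
    $stmt = $pdo->prepare('INSERT INTO bids (auction_id, user_id, placed_at) VALUES (?, ?, NOW())');
    $stmt->execute(array($auctionId, $userId));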

b) Not every database call will work. Under heavy load, it isn’t at all unusual for a SQL insert or update statement to be dropped. Unless your code is designed to check for errors and handle retries properly, your site will not work.
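
Something along the lines of the following helps (a simplified sketch; real retry logic should distinguish transient errors from genuine failures, and be careful about blindly retrying non-idempotent statements):

    <?php
    // Sketch: retry a statement a few times before giving up, with a short pause
    // between attempts, instead of silently assuming the call succeeded.
    function executeWithRetry(PDO $pdo, $sql, array $params, $maxAttempts = 3)
    {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            try {
                $stmt = $pdo->prepare($sql);
                if ($stmt->execute($params)) {
                    return true;
                }
            } catch (PDOException $e) {
                // Log and fall through to retry; on the last attempt, rethrow.
                error_log('DB attempt ' . $attempt . ' failed: ' . $e->getMessage());
                if ($attempt === $maxAttempts) {
                    throw $e;
                }
            }
            usleep(100000); // 100ms back-off before the next attempt
        }
        return false;
    }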

2. Pick Your Hosting Company Wisely

We launched the project on one of our hosting company’s smaller VPS packages. We quickly went to one of the middle-range packages, discovered it was also insufficient, and then switched to the largest package that they offer.

In the process, we also filed a number of second-tier or higher support tickets into their system, including serious operating-system-level problems.

Luckily, we chose a hosting company that responds quickly to issues, and whose staff are familiar with the types of issues we encountered.

This isn’t something to take for granted. Not every hosting company has the ability to quickly and seamlessly transition a site through different packages on different servers, nor do they necessarily have tier 3 support staff who can address unusual support requests.

In this case, our conversations with the company seem to indicate that they had never seen a new site with this level of load before; they still worked valiantly to assist us in keeping things running.

3. Shared Hosting, VPS, Dedicated, Cloud Hosting?

In our previous experience, when a hosting company sells somebody a dedicated server, the assumption is that the customer knows what they are doing and can handle most issues themselves. This holds even where an SLA (service level agreement) is in place, and can seriously affect response times for trouble tickets.

As a result, our first inclination was to use a VPS service. Our decision was further supported by the level of backup provided by default with VPS packages at our chosen vendor. A similar backup service on a dedicated server of equivalent specifications appeared to be much more expensive.

One of the larger competitors of our customer’s site currently runs under a cloud hosting system. We are continuing to look at a variety of “grid” and cloud hosting options; the main issue is that it is extremely hard to estimate the monthly costs involved in cloud hosting, without having a good handle on how much traffic a site will receive. It isn’t unusual for hosting costs to scale in such a way as to make an otherwise profitable site lose money. That said, we will likely have to transition over to a cloud hosting service of some kind at some point in time.

4. Database Keys Are Your Friend

At one point, we managed to reduce server load from over 100% down to around 20%, simply by adding three indexes (keys) to the database. This is easy for many web developers to overlook (yes, I know, serious “desktop” application developers are used to thinking about this stuff).
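
For anyone who hasn’t done this before: the change itself is a one-liner, and MySQL’s EXPLAIN will tell you which queries are scanning entire tables. The table and column names below are invented for the example:

    <?php
    // Sketch: finding and fixing a missing index.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass');

    // Step 1: EXPLAIN a hot query. A "type" of ALL (full table scan) on a large
    // table is usually the culprit when CPU load climbs under traffic.
    $plan = $pdo->query("EXPLAIN SELECT * FROM bids WHERE auction_id = 42 ORDER BY placed_at DESC")
                ->fetchAll(PDO::FETCH_ASSOC);
    print_r($plan);

    // Step 2: add an index covering the filter and sort columns. One-line changes
    // of this kind are what took our load from >100% down to around 20%.
    $pdo->exec("ALTER TABLE bids ADD INDEX idx_auction_placed (auction_id, placed_at)");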

5. Zend Framework is Good For Business Logic – But It Isn’t Fast

We initially built the entire code base using Zend Framework 1.10. Using Zend helped build the site in a lot less time than it would otherwise have taken, and it also allows for an extremely maintainable and robust code base. It isn’t particularly fast, however, since there’s significant overhead involved in everything it does.

After some experimentation, we removed any code that supported AJAX calls from Zend, and placed it into a set of “gateway” scripts that were optimized for speed. By building most of the application in Zend, and moving specific pieces of code that need to run quickly out of it, we found a compromise that appears to work – for now.
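
To give a feel for what I mean by a “gateway” script, here is a simplified sketch (not our production code; the file, table and column names are invented). The idea is to skip the full Zend bootstrap for the per-second polling calls and do only the minimum needed to answer them:

    <?php
    // ajax-gateway.php (sketch): answers the per-second polling call without
    // bootstrapping the full Zend Framework MVC stack.
    header('Content-Type: application/json');
    header('Cache-Control: no-cache, no-store, must-revalidate'); // IE caches AJAX aggressively

    $auctionId = isset($_GET['auction_id']) ? (int) $_GET['auction_id'] : 0;
    if ($auctionId <= 0) {
        echo json_encode(array('error' => 'bad request'));
        exit;
    }

    // A bare PDO connection is much cheaper than the framework's database layer
    // when the goal is to run one small query and get out.
    $pdo = new PDO('mysql:host=localhost;dbname=auction', 'user', 'pass',
                   array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

    $stmt = $pdo->prepare('SELECT current_price, ends_at, status FROM auctions WHERE id = ?');
    $stmt->execute(array($auctionId));

    echo json_encode($stmt->fetch(PDO::FETCH_ASSOC));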

The next step appears to be to build some kind of compiled daemon to handle requests that need speed.

6. Javascript

Our mandate was to support several of the more common browsers currently in use (mid-2010), including Firefox, IE7-9, Opera, and – if feasible – Safari.

The site is extremely Javascript-intensive, although the scripting itself isn’t particularly complex.

We used jQuery as the basis for much of the coding, and then created custom code on top of it. Using a library – while not a magic solution in itself – makes cross-browser support much, much easier. We’re not very picky about specific libraries, but have used jQuery on a number of projects in the past couple of years, with generally good results.

Specific issues encountered included IE’s tendency to cache AJAX posts, which had to be resolved by tacking a randomized variable onto resources; this, unfortunately, doesn’t “play nice” with Google Page Speed (see below).

We also had a serious issue with scripts that do animated transitions, which resulted in excessive client-side load (and thus poor perceived responsiveness), in addition to intermittently causing Javascript errors in IE.

Javascript debugging in IE isn’t easy at the best of times, and is made more complex by our usage of minify (see below) to compress script size. One tool that occasionally helped was Firebug Lite, which essentially simulates Firefox’s Firebug plugin in other browsers (but which can also sometimes change the behaviour of the scripts being debugged). The underlying issue is that IE does a poor job of pointing coders to exactly where a script crashed, and the error messages tend to be unhelpful. Debugging in IE basically boils down to a) downloading a copy of the minified resource in the form that the browser sees it, b) using an editor with good row/column reporting (I often use Notepad++) to track down roughly where the error occurs, and c) putting in debug statements to try to isolate the problem. After working with Firebug for a while, this is an unpleasant chore.

7. Testing Server

Long before your site launches, set up a separate testing server with as close to a duplicate of the live environment as possible. Keep the code current (we usually try to use SVN along with some batch scripts to allow quick updating), and test EVERY change on the test site before pushing the code over to the live server. Simple, but frequently overlooked (I’m personally guilty on occasion).

8. CSS

Designers and web developers often think of CSS purely in terms of cross-browser compatibility. Building sites that actually work in the major browsers goes without saying, and based on personal experience, CSS issues can lead to a lot of customer support calls (“help, the button is missing”) that could easily be avoided. In the case of this specific project, we actually had to remove or partially degrade some CSS-related features in order to provide a more uniform experience across browsers. Attempting to simulate CSS3 functionality using Javascript is not a solution for a heavy-traffic, speed-intensive site; we tried this, and in many cases had to remove the code due to poor performance.

An often overlooked CSS issue (which Google and Yahoo have started plugging – see below) has to do with render speed. Browsers essentially treat a document as a tree of nested elements, and writing selectors in an inefficient way can have a significant effect on the apparent page load time for users. It is well worth your while to spend some time with Google Page Speed (or Yahoo’s YSlow) in order to optimize the CSS on your site for speed.

9. Why Caching Doesn’t Always Work

Caching technology can be a very useful way of obtaining additional performance. Unfortunately, it isn’t a magic bullet, and in some cases (ours specifically), it can not only hurt performance, but actually make a site unreliable.

High traffic websites tend to fall into one of two categories:

On the one hand, there are sites such as Facebook, whose business model is largely based on advertising; what this means is that if user data isn’t completely, totally current and accurate, it is at most an annoyance (“where’s that photo I just uploaded?”). Facebook famously uses a modified version of memcached to handle much of their data, and this kind of caching is probably the only way they can (profitably) serve half a billion customers.

On the other hand, financial websites (think of your bank’s online portal, or a stock trading site) have business models that pertain directly to the user’s pocketbook. This means that – no matter how many users there are, or how large the volume of data – the information shown on the screen has to be both accurate and timely. You would not want to log in to your bank’s site and see an inaccurate account balance, right? In many cases, sites of this nature use a very different type of architecture to “social media” sites. Some banks actually have supercomputers running their websites in order to accommodate this.

Underlying the dichotomy above is the fundamental notion of what caching is all about – “write infrequently, view often”. Caches work best in situations where there are far fewer updates to data than views.

The initial version of our code actually implemented memcached, in an attempt to reduce the number of (relatively expensive) database calls. The problem is that our underlying data changes so rapidly (many times per second, for a relatively small number of resources that are actively being viewed and changed by many users) that the cache had to be rewritten almost constantly. The result in practice was that some users were seeing out-of-date cached data at least some of the time. Abandoning caching, in our specific case, resolved these issues.
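
For context, a typical read-through setup looks roughly like the sketch below (simplified; the key name and TTL are illustrative, not our exact code). The weakness is visible in the structure: any request that hits the cache between an update and the next rewrite sees stale data, and when the underlying row changes several times per second, that window is effectively always open.

    <?php
    // Sketch of read-through caching for auction state.
    function getAuctionState(Memcached $cache, PDO $pdo, $auctionId)
    {
        $key = 'auction_state_' . $auctionId;

        $state = $cache->get($key);
        if ($state !== false) {
            return $state; // May already be out of date if a bid just landed.
        }

        $stmt = $pdo->prepare('SELECT current_price, ends_at, status FROM auctions WHERE id = ?');
        $stmt->execute(array($auctionId));
        $state = $stmt->fetch(PDO::FETCH_ASSOC);

        // Even a 1-second TTL is too long when the row changes many times per second.
        $cache->set($key, $state, 1);

        return $state;
    }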

10. Speed Optimization

We used Google Page Speed in order to optimize our project. (Yahoo offers a similar competing tool, YSlow.) These tools provide a wealth of information about how to make websites load faster – in many cases significantly faster.

Among the many changes that we made to the site, based on the information from the tester, were the following:

a) Use a minifier to combine and compress Javascript and CSS files. No kidding – this works. Not only that, but if you have a large number of CSS files loaded on each page, you can run into odd (and very hard to trace) problems in IE, which appears to only be able to handle roughly 30 external CSS files on a page. Compressing and combining these files using a tool such as Minify and/or the YUI Compressor can save you more than bandwidth.
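
Even without a dedicated tool, the basic idea can be sketched in a few lines of PHP (a crude, hand-rolled illustration only – not Minify’s actual API, and not production-grade minification): combine the files into one response, strip what can safely be stripped, and let the browser cache the result.

    <?php
    // css.php (sketch): serve several stylesheets as one cached, compressed response.
    $files = array('css/layout.css', 'css/auction.css', 'css/buttons.css'); // illustrative paths

    $combined = '';
    foreach ($files as $file) {
        $combined .= file_get_contents($file) . "\n";
    }

    // Very crude minification: strip comments and collapse whitespace.
    $combined = preg_replace('!/\*.*?\*/!s', '', $combined);
    $combined = preg_replace('/\s+/', ' ', $combined);

    header('Content-Type: text/css');
    header('Cache-Control: public, max-age=604800'); // let browsers keep it for a week
    ob_start('ob_gzhandler'); // gzip the output if the client supports it
    echo $combined;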

b) Use sprites to combine images into larger files. This does not work well in some cases (e.g. some kinds of buttons), but the technique can save precious seconds of load time. We used a Firefox plugin called SpriteMe to automate this task, although we didn’t follow all of its suggestions.

c) Validate your HTML. Again, another “no-brainer”. The load time saved by having valid HTML will surprise many readers. The process of validation is a nuisance, particularly if your site serves up dynamic, user-contributed content. Set aside a few hours for this process and just do it, though. It makes a difference.

11. Don’t Forget Algorithms 101

I took several courses on algorithm design at university, and then did nothing with that knowledge for more than a decade. Surprise, surprise – a complex, multi-user site actually needs proper thought in this regard.

One example from our experience – the data that tracks the status of an auction (whether it is currently running, paused, won, etc.) can be “touched” by 9 different pieces of code in the site, including “gateway” code that responds to users, and background tasks.

It took significant effort to build a reliable algorithm that can determine when an auction has actually ended, and the task was complicated by the fact that some of the code runs relatively slowly, and it is quite possible for another operation to attempt to modify the underlying data while the first task is still operating. Furthermore, “locking” in this case may have negative ramifications for user experience, since we did not want to unduly reject or delay incoming “bids” from users.
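
One way to make this kind of decision reliable is to push it down into a single conditional UPDATE, so that whichever of the nine code paths gets there first “wins” and everyone else sees that nothing changed (a form of optimistic concurrency). This is a general technique rather than our exact algorithm; a simplified sketch, with invented column names:

    <?php
    // Sketch: close an auction atomically. The WHERE clause guarantees that only
    // one caller can move the row from 'running' to 'won', no matter how many
    // gateway scripts and background tasks attempt it at the same moment.
    function tryCloseAuction(PDO $pdo, $auctionId)
    {
        $stmt = $pdo->prepare(
            "UPDATE auctions
                SET status = 'won', closed_at = NOW()
              WHERE id = ?
                AND status = 'running'
                AND ends_at <= NOW()"
        );
        $stmt->execute(array($auctionId));

        // rowCount() is 1 only for the caller that actually performed the close;
        // everyone else sees 0 and moves on without touching the auction.
        return $stmt->rowCount() === 1;
    }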

Conclusions

  1. It is very hard to plan ahead of time for growth in a web environment. Sometimes steps taken specifically to address traffic (caching, in our case) can actually be detrimental. The process of adapting to growth can involve a surprising amount of trial-and-error experimentation.
  2. Using frameworks can be very helpful for writing maintainable code. Unfortunately it’s sometimes necessary to work around them when specific optimization is needed. Proper documentation and comments can help – I try to write as if I’m explaining to somebody really dumb, years in the future, what is going on in my code – and then I’m often surprised when I need my own comments later on…
  3. Work with the right people. Not just your internal team, but also your hosting company etc. This can make a big difference when you are under pressure.
  4. Prepare yourself for periods of high stress. Not much you can do about this, unfortunately. In most cases, it is unlikely that you will actually have access to the resources you really need to get the job done. Make sure you schedule breaks too. It’s hard. Burnout is much harder though.