I've been looking at server hosting for a decade, and I'm a little perplexed at this point. For many years with a managed hosting environment, we'd sign up for 12 months, then at the end of the year we'd migrate to a new environment. The new environment typically cost the same as the old one, but was somewhere between 1.5 and 2 times as powerful, with about the same increase in disk space.
Then the cloud came along. I've been using Rackspace's cloud solution, Amazon EC2, Linode and a few others over the last few years, and I'm not seeing the capability you get for your money increasing at anywhere near the same rate as before. Whilst CPU core counts have risen and memory capacities have gone up, my cloud server still costs about the same as it did three years ago, and still has the same capability as it did three years ago. Amazon et al have made some small price cuts, but nothing close to the near-doubling every year we used to see.
I think this is perhaps one very big drawback of cloud computing that many people didn't anticipate. My guess is that once a cloud infrastructure is built, the same systems just sit in their racks happily chugging away. There is only minor incentive to upgrade them, because their usage is on-demand and they aren't being actively provisioned the way traditional server hardware was. You can no longer easily compare and contrast hosting options, because it's complicated now. The weird thing is that this situation seems to have infected regular hosting too! I'm in the process of trying to reallocate my hosting to reduce my costs, and everywhere I turn it's the same story. Looking at Serverbeach, their systems have barely shifted upwards in five years, other than the CPU model. My $100/mo still buys a dual-core system with about the same RAM and disk as it did five years ago, albeit with a newer CPU.
For those of us developing on heavier platforms like Grails or JEE, the memory requirements of our applications are increasing, and the memory in traditional servers and our developer machines is increasing in step, but cloud computing resources are not. I simply cannot run a swath of Grails applications on a small EC2 instance; the memory capacity just isn't big enough. My desktop has 8GB of RAM today, and it won't be long before it has 16GB, yet my Amazon instance is stuck at 1.7GB. Looking at shared hosting, the story is the same. Tomcat hosting can be had quite cheaply, if you don't mind only having 32-64MB of heap. You couldn't even start a Grails app in that space; the recommendation is to set PermGen to at least 192MB.
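To put that in concrete terms, here's a rough sketch of the JVM options a Grails app under Tomcat typically wants. The exact numbers are illustrative, not prescriptive, and will vary per application:

```shell
# Illustrative JVM options for a Grails app running under Tomcat.
# Note that MaxPermSize alone (192MB) is already several times larger
# than the entire 32-64MB heap cheap shared Tomcat hosting offers.
export CATALINA_OPTS="-Xms256m -Xmx512m -XX:MaxPermSize=192m"
```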
The story isn't universal, I've noticed: some hosting providers have increased available disk capacity quite dramatically. But with regard to RAM, the story seems pretty consistent. You just don't get more than you used to.
What does this mean for the little guy like me, trying to host applications in a cost-effective way? At this point I really don't know. I'm starting to consider investigating co-location; talk about dialing the clock back a decade. I can throw together a headless machine for less than $400 with 10x the capacity of a small instance, which seems kinda sad. Right now, I'm considering shifting away from rich server architectures and refocusing on a more Web 2.0 approach, which brings a small shudder, but still, I guess there's a reason. A simple REST server doesn't need a big heavy framework, so I can build something that will fit in that measly 64MB heap.
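As a sketch of what I mean, the JDK's built-in `com.sun.net.httpserver` package is enough for a bare-bones REST endpoint with no servlet container or framework at all, and it starts comfortably inside a 64MB heap. The class name and `/ping` endpoint here are just illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

public class TinyRest {
    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port; no container, no framework.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes("UTF-8");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Quick self-check: call the endpoint and print the JSON response.
        int port = server.getAddress().getPort();
        URL url = new URL("http://localhost:" + port + "/ping");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            System.out.println(in.readLine());
        }
        server.stop(0);
    }
}
```

Run it with something like `java -Xmx64m TinyRest` and it fits in the heap that shared Tomcat hosting hands out; that's the whole point.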
Moore's law might apply to computing power, but apparently not to hosting companies.