

High-performance computing: waiting for the cloud to catch up

High-performance computing (HPC) was once limited to organizations that could afford to buy and maintain systems costing tens of millions of dollars. The cloud has changed all that, creating a democracy in which teraflops are accessible to people well outside the realms of the academic or government customer.

The market has responded exactly as you'd imagine. According to research by Intersect360, the 2017 market for HPC in the cloud grew by 44% to $1.1 billion, exceeding the billion-dollar mark for the first time. The consultancy anticipates this growth will continue, with annual spending expected to hit $3 billion by 2022, which works out to roughly 22% compound annual growth. By contrast, traditional on-premises HPC spending increased by only 1.6% during the same period, and it's easy to see why: the expense of upgrading and replacing HPC servers in a data center makes a move to the cloud pretty much a no-brainer.

But the switch isn't yet as easy as we might hope. The blinding pace of technology's evolution has placed customers' expectations several paces ahead of the cloud's ability to keep up, and the problem shows up in a number of ways.

For example, true HPC calls for low-latency interconnects, and cloud providers have yet to offer those as a matter of course (a simple way to see the gap is the ping-pong benchmark sketched below). The costs and logistics of moving petabytes of data are also an issue, and providers could do more to simplify data migration and make the pricing more affordable.

And what about storage? The public cloud's standard built-in storage solutions don't match the demands of applications with high bandwidth needs. This is likely to send data management duties back to local administrators, and that's not a sustainable arrangement in most cases. The most obvious solution would be for providers to spin up a parallel file system appliance, easing the customer's burden in the same way they do with database services and Hadoop clusters.

And then there's the issue of how software gets billed in the cloud ecosystem. Third-party developers charge per hour or per instance for tools that sit on top of the IaaS layer. That's fine for enterprise, but it doesn't work as well with the fixed budgets of scientific computing. There's a real opportunity here for cloud providers, and they can seize it by offering their own performance-optimized images for HPC.

To keep pace with their HPC customers' expectations, cloud providers need to address all the issues raised above (and then some). But these issues are really just offshoots of the largest one of all: the business model of the public cloud does not yet mesh with the realities of HPC. Cloud pricing is predicated on the demands of enterprise customers, whose hardware utilization routinely drops below 20%. HPC users typically operate at up to 90% utilization, which makes on-demand cloud server pricing much less competitive than traditional on-premises solutions (the back-of-envelope comparison below puts rough numbers on this). Cloud providers could answer these concerns with a different pricing model, for example by allowing preemptible instances and spot pricing.

But at the moment, the public cloud could be forgiven for seeing HPC customers as a bit too high-maintenance. Without a doubt, the hardware demands of HPC are costly, and perhaps not yet justifiable in terms of ROI. It will likely take the arrival of big new customers for cloud providers to see the merits of innovating their business model, and with it, their technology. If we've learned anything from the digital revolution, it's that the need for innovation will never slow down to a more convenient pace.
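Before wrapping up, two quick sketches to make the arguments above concrete. First, the interconnect gap. The following ping-pong microbenchmark measures one-way message latency between two MPI ranks; it's a minimal sketch that assumes mpi4py and NumPy are installed and an MPI runtime is available, and the iteration count is illustrative rather than tuned.

    # Run with: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    iters = 10000
    buf = np.zeros(1, dtype='b')  # a 1-byte message isolates latency from bandwidth

    comm.Barrier()                # start both ranks together
    start = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        # each iteration is a full round trip, so halve it for one-way latency
        print(f"one-way latency: {elapsed / iters / 2 * 1e6:.2f} microseconds")

On a purpose-built HPC fabric such as InfiniBand, a test like this typically reports single-digit microseconds; over an ordinary virtualized cloud network it can come back an order of magnitude higher, and tightly coupled simulations feel every bit of that difference.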
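Second, the utilization arithmetic. Every number below is an assumption chosen to illustrate the shape of the problem, not a quote from any provider.

    # Illustrative, assumed prices: not vendor quotes.
    HOURS_PER_YEAR = 8760
    on_demand_rate = 3.00      # $/hour for an HPC-class instance (assumed)
    spot_discount = 0.70       # spot/preemptible capacity often sells at a steep discount (assumed)
    on_prem_annual = 6000.0    # $/year per node: amortized hardware, power, admin (assumed)

    for utilization in (0.20, 0.90):   # enterprise-style vs HPC-style utilization
        busy_hours = HOURS_PER_YEAR * utilization
        on_demand = busy_hours * on_demand_rate
        spot = on_demand * (1 - spot_discount)
        print(f"at {utilization:.0%} utilization: "
              f"on-demand ${on_demand:,.0f}/yr, "
              f"spot ${spot:,.0f}/yr, "
              f"on-prem ${on_prem_annual:,.0f}/yr")

With these assumptions, on-demand pricing beats the on-prem node at 20% utilization ($5,256 versus $6,000 a year) but costs nearly four times as much at 90% ($23,652 versus $6,000). Spot-style discounts close most of that gap, which is why a pricing model built around preemptible capacity matters so much for HPC.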
The demand for HPC is certain to grow exponentially in the coming decades. For cloud providers, the market opportunity is virtually unlimited, and all that remains is for them to embrace it. Find out how Pythian can help you succeed in the cloud.  
