6.02.2012

Supercomputing Comes to Midsize and Non-Technical Enterprises

Supercomputing, with its ability to tackle the most complex problems and process extremely large volumes of data fast, is no longer only for large organizations in scientific and technical fields. If your organization has ever wanted to run a Monte Carlo simulation or two, a supercomputer might not be a bad thing for you either. The latest generation of high performance computing (HPC) systems puts supercomputing capabilities into the hands of even midsize and non-technical organizations.
These organizations can use HPC to solve the same complex, multi-dimensional problems that took far too long, or were not feasible at all, on the usual corporate systems. The new generation of HPC can handle compute-intensive workloads as expected, but it can also handle big data processing fast.
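To make that concrete, here is a minimal sketch of the kind of Monte Carlo workload a business might hand to such a system: a hypothetical Python example that estimates the price of a European call option by averaging simulated payoffs under a lognormal model. The model choice, function name, and parameter values are illustrative assumptions, not anything drawn from this article.

```python
import math
import random

def monte_carlo_call_price(spot, strike, rate, vol, years, n_paths=200_000):
    """Estimate a European call option price by averaging simulated payoffs."""
    total_payoff = 0.0
    for _ in range(n_paths):
        # Simulate the terminal price under a lognormal (Black-Scholes style) model.
        z = random.gauss(0.0, 1.0)
        terminal = spot * math.exp((rate - 0.5 * vol ** 2) * years
                                   + vol * math.sqrt(years) * z)
        total_payoff += max(terminal - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * years) * total_payoff / n_paths

if __name__ == "__main__":
    print(monte_carlo_call_price(spot=100, strike=105, rate=0.02, vol=0.25, years=1.0))
```

A single run like this fits on a laptop; the HPC question is what happens when you need millions more paths, thousands of instruments, or an answer in near real time.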
And these systems deliver that capability without big investments in more technology or the need to recruit a cadre of hardcore compute geeks. Where supercomputing once focused primarily on delivering megaflops (millions of floating point operations per second), companies now want affordable technical computing tools for problems that may be somewhat less complicated than, say, intergalactic navigation yet still deliver important business results.
HPC, or supercomputing, was initially considered the realm of large government research conducted by secretive agencies and esoteric think tanks. Today, HPC is poised to go mainstream.
Among businesses, automotive, aerospace, electronics, and petroleum companies were the primary early HPC adopters, expecting it to deliver better product designs that result in higher quality, lower costs, and faster time to market. Now other industries, including financial services, media, telecommunications, and life sciences, are adopting HPC for modeling, simulations, and predictive analyses of various types.
Financial services firms, for instance, want real-time analytics to deliver improved risk management, faster and more accurate credit valuation assessments, multi-dimensional pricing, and actuarial analyses. Life sciences firms are seeking to speed drug discovery, collaborate more effectively, and reduce costs.
Some of the activities being performed with HPC have a distinct scientific flavor, such as next-generation genomics or 3D computer modeling. Other HPC usage seems quite conventional, even mundane. This includes financial data analysis, real-time CRM, social media analysis, data mining/unstructured data analysis, and retail commerce/merchandising analysis and planning.
Not coincidentally, the technology to perform HPC is now coming within the reach of conventional businesses. HPC is being delivered through compute clusters, compute grids, and increasingly via the cloud. A compute cluster or grid can be nothing more than connected multi-core Windows servers tuned for parallel processing.
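As a rough illustration of what "tuned for parallel processing" means in practice, here is a hypothetical sketch that spreads the Monte Carlo workload from earlier across the cores of a single multi-core server using Python's standard multiprocessing module. The worker count, batch sizes, and parameters are assumptions for illustration; a real cluster or grid would distribute such batches across many machines under a job scheduler, which this sketch does not attempt.

```python
import math
import random
from multiprocessing import Pool

def simulate_batch(args):
    """Run one batch of simulated price paths and return the summed payoff."""
    spot, strike, rate, vol, years, n_paths, seed = args
    rng = random.Random(seed)  # independent random stream per worker
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        terminal = spot * math.exp((rate - 0.5 * vol ** 2) * years
                                   + vol * math.sqrt(years) * z)
        total += max(terminal - strike, 0.0)
    return total

if __name__ == "__main__":
    workers, paths_per_worker = 8, 250_000
    jobs = [(100, 105, 0.02, 0.25, 1.0, paths_per_worker, seed)
            for seed in range(workers)]
    with Pool(workers) as pool:
        payoffs = pool.map(simulate_batch, jobs)  # run batches on separate cores
    # Discount the pooled average payoff back to today.
    price = math.exp(-0.02 * 1.0) * sum(payoffs) / (workers * paths_per_worker)
    print(price)
```

The same divide-and-recombine pattern is what cluster, grid, and cloud HPC offerings automate at much larger scale.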
Long-time HPC players like IBM, HP, SGI, and Dell are revamping their offerings. They are being joined by a new breed of analytics-driven, cloud-based HPC players, including Amazon's Cluster Compute Instances, Appistry, and Microsoft's Project Daytona.
IBM has taken the lead in bringing HPC, which it calls technical computing, within reach by packaging it as a complete, affordable, easy-to-deploy bundle scalable enough to accommodate workload growth and business expansion. It simplifies administration through intuitive management tools that free companies to focus on business goals rather than on HPC itself. It is doing this mainly by bringing Platform Computing, a recent acquisition, to the HPC party.
For the large or even midsize enterprise that wants to capitalize on the kind of analytics HPC makes possible, there is no shortage of technology options. And as HPC catches on with midsize and non-technical companies, the choices will only get better.
