Thursday, 1 March 2012

High-Performance Innovation:

ANSYS CFX was used in the cloud via Windows HPC Server to depict wave formation around a seafaring vessel. ANSYS is one of many vendors developing software specifically designed to take advantage of highly parallel computing systems remotely, offering customers high-end performance and faster results. Image: ANSYS

As researchers scramble to deliver R&D results and bring products to market, they are turning to high-performance computing. Vendors are competing for their business. Can everyone adapt to the cloud?

What laboratory tool has made the most difference in research and development? Arguably, it’s the personal computer. In the early days of computing, specialized clusters of high-performing processors were often needed for data-intensive tasks. But as chipmakers upheld Moore’s Law, desktop machines and even laptops became powerful enough to handle complex design and processing tasks.

Personal computers are ubiquitous and indispensable, but often are no longer powerful enough, even for daily research tasks such as processing a Microsoft Excel spreadsheet. Circumstances have conspired to force researchers to seek a better solution. As microprocessor speed has stalled, data volume has exploded. In 2004, according to Dave Turek, vice president of deep computing at IBM Corp., Armonk, N.Y., computer scientists recognized the limits of microprocessor technology and realized the best avenue for more performance was to group large numbers of processors together and leverage strength in numbers. Multi-core was born.

Now, through a combination of multicore processing, commoditization of high-end server components, and high-speed communications, high-performance computing (HPC) is handling the heavy lifting of high-technology R&D.

“What really has changed is the migration of traditional techniques and approaches into a non-classical domain. When you peel back the covers, at its core, software is sophisticated mathematics used to answer problems,” says Turek. HPC represents this new domain, where linear programming gives way to counter-intuitive parallel processing and where researchers stand to make tremendous gains in knowledge, if they know how to get the most out of it. As a result, research organizations cannot consider adopting HPC without gaining knowledge of the associated software, tools, components, storage, and services that together form the infrastructure for intensive computation.
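The shift Turek describes, from a single processor working through a task step by step to many processors sharing it, can be illustrated with a minimal sketch. The example below is our own (the function names and the sum-of-squares workload are illustrative, not from the article): a serial computation is split into independent chunks, farmed out to worker processes, and the partial results are combined.

```python
# Illustrative sketch of the serial-to-parallel shift: a single loop
# is split into independent chunks that separate processes compute
# simultaneously, with the partial results combined at the end.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(bounds):
    """One independent unit of work: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into per-worker chunks and combine partial sums."""
    step = (n + workers - 1) // workers  # ceiling division
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    serial = sum(i * i for i in range(n))      # the classical approach
    parallel = parallel_sum_of_squares(n)      # the HPC-style approach
    assert serial == parallel
```

This workload parallelizes cleanly because each chunk is independent; as the article suggests, real R&D codes are rarely this tidy, which is why adopting HPC demands familiarity with the surrounding software and tools.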

