Q&A: Julian Fielden, OCF, Discusses High Performance Computing

Drawing on 20 years of business and accounting experience, Julian Fielden is Managing Director of OCF. He established the company in January 2002, purchasing the business and assets of OCF Limited in a management buy-out. He is responsible for the strategic direction of the business and its day-to-day operations, and oversees all customer accounts and relationships with key technology partners including IBM, Fakespace, Voltaire, Intel and AMD. BCW caught up with Julian to get his thoughts on high performance servers and storage.

What is OCF?

Based in Sheffield, UK, OCF is a proven and trusted UK high performance server and storage cluster integrator, cluster services and support team, and IaaS provider. OCF provides solutions for over 28 commercial clients across the automotive, aerospace, utilities, pharmaceutical, manufacturing, oil & gas and financial industries. It also provides solutions to 39 (22 per cent) of the UK’s 176 Universities, Higher Education Institutes and Research Councils, and supports customers in Eire and the United Arab Emirates.

OCF holds IBM Premier Partner status and enhances its IBM-based solutions using technology from a range of partners: AMD, Cisco, Dataram, Fakespace, Infortrend, Intel, Microsoft, Nallatech, Qlogic, Sun Microsystems, Supermicro Computer, Inc., Tyan and Voltaire. OCF has the largest High Performance Computing delivery team in the UK and celebrates its 10th anniversary in 2012.

What did the High Performance Computing ‘World’ look like in 2002?

Back in 2002, our customers―mostly universities―were buying high performance computers in the form of appliances to run a single application code, to generate a dataset for further analysis and review. It was very rare to find a customer needing a complex solution to support multiple departments, multiple users and multiple application codes, as is the case today.

There were some server clusters in existence to rival appliance-based computers, but not on the scale we see now. Operating systems were a mix of UNIX and some Linux. Compared to current standards, our clusters at that time didn’t work particularly efficiently!

Data volumes generated by our computers then were also nothing like they are today; there is an order of magnitude of difference. In some cases now, customers are generating Terabytes or even Petabytes of data for long-term storage every six months or so. Importantly, in 2002, storage for data was only ever sold alongside an appliance or cluster. The high performance computer was the only facility generating big data for storage.

What does the environment look like today?

Times have changed significantly in the high performance environment. Firstly, the definition of “high-performance” is changing weekly. The boundaries are stretching. One man’s high performance computer is another’s fast PC in just a matter of months.

Today, low-cost Linux-based server clusters dominate. Around 450 of the Top500 largest supercomputers (as of November 2011) were Linux-based server clusters. Data processing power has increased too: looking at the Top500 again, the slowest machine on today’s list still out-powers the fastest machine from a list of just a few years ago.

The market for high performance data processing has changed too. We’re no longer just dealing with academics and universities. The market has expanded significantly so that we find increasing numbers of customers in the private sector. We are now working with many companies that have engineering at their core―firms in aerospace, automotive, and manufacturing for example―that use high performance computers to lower research and development costs. We’re also seeing more interest from financial services firms that rely on significant compute power to solve banking and insurance calculations.

The individual users of server clusters have changed too. We are no longer just dealing with IT-savvy computer scientists. We now have a mix of users: ‘power users’ who understand hardware and applications and have the skills to operate a server cluster, and, at the other end of the scale in small to medium enterprises for example, users who are simply not interested in high performance computing and just want the results to their problems. They want to utilise technology but don’t want to understand it.

As a result of the wider adoption of server clusters, companies are also now producing massive amounts of digital information. Now, we’re not only concerned with helping companies to process codes to generate data, but we’re also helping customers to handle and store the information afterwards.

We are also finding that, as a result of the internet and the use of IP-enabled devices, there has been an explosion of digital data, structured and unstructured. This is creating a far bigger requirement for big data storage in companies that would not have needed it, or that simply would not have existed, a few years ago.

What’s driving change?

The expansion of the market, particularly into the private sector, is driven by better awareness of how server clusters (and high performance computing in general) can support a mainstream business. Market growth is also partly driven by the reduction in cost per Teraflop.

The Linux cluster has brought costs right down. For example, in 2002 a 16-processor machine might have cost in the region of £250k; today the same machine might cost just £5k. The biggest disrupter has been the internet and the use of IP-enabled devices, as previously mentioned. This has caused an explosion of digital data, structured and unstructured, that is making storage a far bigger customer challenge than ever before.

What hasn’t changed?

Ten years ago the top two suppliers of high performance computers on the Top500 list were IBM and HP, and that remains the case today. There has been no major disruptor entering the market. The major x86 processor vendor is still Intel, as it was in 2002 (AMD came in around 2003/2004 and took some market share, but has since lost it again).

What problems does the industry continue to face?

First, the sheer speed of advance in processor technology is disruptive: it delivers ever more power to apply to a problem, but software development can’t keep up with the processor changes. Second, the cost of, and ability to access, energy is a large problem. As a result, customers really have to consider energy efficiency when planning deployments. We are now reaching a situation where some large server clusters need to have their own power stations!

When we reach Exascale Computing (when data processing of certain codes and calculations has moved beyond current Teraflop and Petaflop performance into Exaflop speeds), the energy profile of computers will need to change massively again to cope.

Some customers―faced with energy and cooling challenges―are now choosing to avoid owning and housing their own compute facility and are looking to the “cloud” to provide data processing and data storage as a service. This presents a challenge for vendors and suppliers: to change business models and match customer needs in new ways.

What will we be discussing in 10 years from now?

Firstly, the future is not a rail track we follow but a big, wide road where we need to adapt, disrupt and be nimble. Some safe bets, though: we will have reached Exascale Computing within the next 10 years, and the bandwidth and latency issues that slow the transfer of data will have been overcome.

As with the last 10 years, the major vendors will still be the same, with IBM and HP at the top of the tree. Ownership of technology will be less of a concern, and the cloud models evolving now to provide customers with high performance processing power without onsite infrastructure will be more seasoned and trusted.

Christian Harris is editor and publisher of BCW. Christian has over 20 years' publishing experience and in that time has contributed to most major IT magazines and Web sites in the UK. He launched BCW in 2009 as he felt there was a need for honest and personal commentary on a wide range of business computing issues. Christian has a BA (Hons) in Publishing from the London College of Communication.