HPC for pharma
Tom Messina, IT manager, High Performance & Scientific Computing, Application Services R&D Strategy & Delivery, Johnson & Johnson
As the IT team responsible for delivering high performance and scientific computing solutions to all sectors of Johnson & Johnson R&D, the primary demand we see our business partners facing is constrained compute capacity. This results in lengthy job queues during times of peak demand, and in filtering activity to determine which simulations they have capacity to run at all. These challenges lead directly to our team's mission of providing a highly reusable platform offering limitless compute capacity to enable new opportunities and experiments that could not have been done before.
Computer-aided models are becoming both more critical and more complex and, in combination with the technical advancements of modelling software, this has exponentially increased computational demand. While many companies and institutions have purchased very high-end compute machines and farms, many others have not. Regardless, the on-demand elasticity of well-engineered cloud models is one of the best use cases for us to meet this HPC capacity challenge.
Among the demands that our business
partners face, particularly as they pertain to cloud computing, are understanding how cloud technology affects the deployment of GxP solutions, overcoming long data transfer times for larger datasets, being comfortable with storing data in the public cloud and working with licensing models that don’t port nicely to the elasticity of cloud platforms. One of the key requirements we have for
the evolution of our own HPC platform is to integrate our disparate internal compute clusters with the cloud environment. Like other organisations, we have made significant investment in building up local compute environments. Many of these environments continue to have decent life left in them. To continue utilising that investment while taking advantage of cloud elasticity, we're enabling our scientists and engineers to submit jobs to a decision engine. This engine will apply intelligent business rules to determine whether that job should be executed in a local cluster or dynamically provisioned to a cloud cluster, all without the user ever needing to worry about where the job will run.
We have also started to dig into where HPC
can add further value. Modelling, simulation, image processing and statistical processing will all be in our portfolio for some time and there are still many areas of J&J we have yet to tap into. However, we have also started to look into how HPC can benefit various areas within business intelligence. Datasets have reached the point that
traditional mining and warehousing approaches either don’t work or are too complex or costly to support. Technologies like Hadoop have already begun to make big strides in this space and it is now on our radar to start taking a look at this opportunity.
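The decision engine described above might be sketched as follows. This is a minimal, illustrative example only: the rules, thresholds, and names here are hypothetical assumptions, not the actual business rules of J&J's platform, which are not disclosed in this article.

```python
# Hypothetical sketch of a job-routing decision engine that picks between a
# local cluster and a dynamically provisioned cloud cluster. All rules and
# thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Job:
    cores: int       # CPU cores requested by the simulation
    data_gb: float   # size of the input dataset to transfer
    gxp: bool        # whether the workload falls under GxP compliance controls


def route(job: Job, local_free_cores: int) -> str:
    """Apply simple business rules to choose an execution target."""
    if job.gxp:
        # Keep regulated workloads on validated on-premise clusters.
        return "local"
    if job.cores <= local_free_cores:
        # Spare local capacity is already paid for; use it first.
        return "local"
    if job.data_gb > 500:
        # Transfer time would dominate the run; queue for a local slot instead.
        return "local-queue"
    # Otherwise, burst to dynamically provisioned cloud nodes.
    return "cloud"


print(route(Job(cores=64, data_gb=20, gxp=False), local_free_cores=16))  # cloud
```

In a real deployment the engine would sit behind the scheduler's submission interface, so users submit jobs exactly as before and never see which rule fired or where the job lands.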
www.scientific-computing.com
APRIL/MAY 2012 31