MODELLING: BIOLOGY

Basic neural science thus far has been done on traditional codes, and researchers can get speedups by a factor of 100 by doing hundreds of sequential runs on one workstation. But to do a big job with 10,000 or even millions of elements, adds Hereld, they could no longer use that programming idiom. His group therefore developed pNeo, a parallel code intended to scale to a large number of compute nodes. In scaling tests they have run the code on a Linux cluster and on a Blue Gene machine to examine communications and memory scaling, and have built models with half a million cells. This will pave the way to learning how to set up the networks to solve large problems.
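As a rough illustration of that shift in programming idiom, the sketch below is not pNeo itself: the cell model, the update loop and the spike-exchange step are all assumptions made for this example. It simply distributes a large population of model cells across MPI ranks and reduces their spike counts each step, the kind of communication whose scaling such tests measure.

```cpp
// Hypothetical sketch (not the actual pNeo source): distribute many model
// cells across MPI ranks instead of running many independent sequential jobs.
#include <mpi.h>
#include <vector>
#include <cstdio>

struct Cell { double v = -65.0; };            // toy membrane potential only

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long total_cells = 500000;          // ~half a million, as in the tests
    long local_n = total_cells / size + (rank < total_cells % size ? 1 : 0);
    std::vector<Cell> cells(local_n);

    for (int step = 0; step < 100; ++step) {
        long local_spikes = 0;
        for (auto& c : cells) {               // placeholder single-cell update
            c.v += 1.0;
            if (c.v > 30.0) { c.v = -65.0; ++local_spikes; }
        }
        long global_spikes = 0;
        // Communication phase: every rank learns the total spike count;
        // in a real network model this would drive synaptic input next step.
        MPI_Allreduce(&local_spikes, &global_spikes, 1, MPI_LONG, MPI_SUM,
                      MPI_COMM_WORLD);
    }
    if (rank == 0) std::printf("simulated %ld cells on %d ranks\n",
                               total_cells, size);
    MPI_Finalize();
    return 0;
}
```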


The real issue, elaborates Hereld, is knowing how to build a large-scale model of part of the brain and have any assurance that your configuration is reasonable and meaningful. 'We understand the core science and have considerable knowledge about individual cells, but we have no good information about very complicated interactions. In complicated organisms, only part of the wiring is deterministic; much of the architecture isn't deterministic, it's learned over time, based on history. In addition, it's difficult to see all the connections, because they're so small. We're not even sure how to write down the wiring and understand the meaning of the connections.'

His group is now working to transition its production work from smaller GENESIS models to pNeo.


‘Once the predictive dynamics algorithm is defined, the simulation can run... in real time’


And, while Argonne has large HPC systems available, the software isn't yet ready to scale up to them. The ultimate target is Intrepid, a Blue Gene/P machine with a peak performance of 557 Tflops; it consists of 160,000 cores and fills 40 racks with 80TB of memory.

This raises another question for Hereld and his colleagues: how large a model must you build to understand a condition like epilepsy? For now, nobody is certain. He adds, though, that in some cases it is useful to study localised effects; for a brain trauma, it can be sufficient to study an area of the brain 1 x 1cm.




Argonne computer scientist Mark Hereld presents a visualisation of a computer simulation of neuronal activity in a brain afflicted by epilepsy


He also explains that there are some questions that can't be answered by examining a live patient or looking at a small piece of brain tissue in the lab, and models play an important role: in software we can change any individual property in a cell, which is practically impossible to do in a live cell, and see the effects of that change on the system.
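A toy example of that kind of in-silico experiment, with a made-up single-compartment cell and parameter values chosen purely for illustration, might look like the following: one property (here a leak conductance) is swept while everything else is held fixed, and the effect on the cell's firing rate is read off directly.

```cpp
// Illustrative only: sweep a single cell property in a toy leaky
// integrate-and-fire neuron and observe how the firing rate responds.
#include <cstdio>

// Count spikes over one simulated second for a given leak conductance.
int spikes_per_second(double g_leak) {
    const double dt = 0.0001, C = 0.01;           // time step (s), capacitance
    const double v_rest = -65.0, v_thresh = -50.0, i_in = 20.0;
    double v = v_rest;
    int spikes = 0;
    for (double t = 0.0; t < 1.0; t += dt) {
        v += dt / C * (-g_leak * (v - v_rest) + i_in);   // membrane equation
        if (v >= v_thresh) { v = v_rest; ++spikes; }     // spike and reset
    }
    return spikes;
}

int main() {
    for (double g = 0.5; g <= 2.01; g += 0.5)            // sweep the one property
        std::printf("g_leak=%.1f -> %d spikes/s\n", g, spikes_per_second(g));
    return 0;
}
```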


Writing the code was the easy part

It's possible that new strains of HIV resistant to existing antiretroviral drugs (ARVs) pose a substantial threat to global public health. This is the conclusion drawn from the amplification cascade model developed by a group led by Professor Sally Blower, director of the Center for Biomedical Modeling at UCLA's Semel Institute for Neuroscience and Human Behavior. All previous HIV-transmission models of ARV resistance are based on simple biological assumptions and can track only one resistant strain; this model captures biological complexity by generating a dynamic network composed of multiple ARV-resistant strains. Their work indicates that 60 per cent of the currently circulating ARV-resistant strains in San Francisco are capable of causing self-sustaining epidemics.


Implementing the model took place in three major stages, explains Justin Okano, a statistician on the team who did most of the computer work. First he used C++ for the data-sampling stage. Next, he wrote the ordinary differential equations (ODEs) governing the processes using Mathematica. The current model has 33 ODEs covering combinations of population stages: four stages of HIV infection and eight drug-resistance stages, plus a control stage (4 x 8 + 1 = 33). Once the simulation runs, Okano analyses the results with R, a GNU project language and environment for statistical computing and graphics. For additional statistical analysis, the team also works with CART (Classification and Regression Trees) data-mining software distributed by Salford Systems, which is based on landmark mathematical theory introduced in 1984 by four world-renowned statisticians at Stanford University and UC Berkeley.
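The 33-equation system itself isn't given in the article; as a hedged sketch of the general shape of such a model, the fragment below integrates a drastically reduced two-strain (wild-type and one ARV-resistant strain) transmission model with a hand-rolled RK4 step. The state variables, rate constants and initial values are all invented for illustration.

```cpp
// Minimal sketch, not the group's model: two competing HIV strains
// (wild-type vs. one ARV-resistant strain) integrated with RK4.
#include <array>
#include <cstdio>

using State = std::array<double, 3>;   // S: susceptible, Iw: wild-type, Ir: resistant

State deriv(const State& y, double beta_w, double beta_r, double mu) {
    double N = y[0] + y[1] + y[2];
    double new_w = beta_w * y[0] * y[1] / N;     // wild-type transmissions
    double new_r = beta_r * y[0] * y[2] / N;     // resistant-strain transmissions
    return { mu * N - new_w - new_r - mu * y[0],
             new_w - mu * y[1],
             new_r - mu * y[2] };
}

State rk4_step(const State& y, double dt, double bw, double br, double mu) {
    auto add = [](const State& a, const State& b, double s) {
        return State{ a[0] + s * b[0], a[1] + s * b[1], a[2] + s * b[2] };
    };
    State k1 = deriv(y, bw, br, mu);
    State k2 = deriv(add(y, k1, dt / 2), bw, br, mu);
    State k3 = deriv(add(y, k2, dt / 2), bw, br, mu);
    State k4 = deriv(add(y, k3, dt), bw, br, mu);
    State out = y;
    for (int i = 0; i < 3; ++i)
        out[i] += dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
    return out;
}

int main() {
    State y = { 99000.0, 990.0, 10.0 };          // illustrative initial population
    for (int step = 0; step < 20 * 365; ++step)  // 20 years, daily time step
        y = rk4_step(y, 1.0, 0.0008, 0.0005, 1.0 / (70 * 365));
    std::printf("S=%.0f wild-type=%.0f resistant=%.0f\n", y[0], y[1], y[2]);
    return 0;
}
```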


didn’t take very long, just a couple of weeks, notes Blower. The majority of the work, she adds, was coming up with the model. They spent a year defining parameters and creating the model structure. ‘The difficult part was coming up with a research question that’s interesting and useful. We model the evolution of drug resistance in San Francisco for the past 20 years, and in doing so we have to take into account lots of treatment regimes and the complexity of drugs and incorporate all that complexity. Then we had to decide upon the level of complexity we needed to work at.’ Based on initial results, the team is now evaluating possible preventative measures. For instance, what if you give uninfected people drugs – does this lead to them developing an excessively high level of drug resistance that’s counterproductive? They’re also doing work with colleagues in Botswana to examine the spatial dynamics of epidemics.






