but that’s what the telescope would actually see. You end up degrading it, but at least you get something that could really be observed, rather than some beautiful, high-resolution image out of a simulation.’ That’s not to say the high-resolution simulation doesn’t have its place, he stresses, but it’s important to do both and to bear in mind the question: ‘what would I actually see?’
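By way of illustration, the ‘degrading’ step he describes might look something like the following Python sketch: blur a high-resolution simulated image with a point-spread function, rebin it onto the telescope’s coarser pixels, and add detector noise. The Gaussian PSF, the noise level and the function name mock_observation are illustrative assumptions only, not the instrument model any particular group actually uses.

import numpy as np
from scipy.ndimage import gaussian_filter

def mock_observation(sim_image, psf_sigma_pix=3.0, rebin=4, noise_rms=0.01, seed=0):
    # Degrade an idealised simulation image to something a telescope could
    # plausibly record: finite resolution, coarser pixels, detector noise.
    # (Illustrative Gaussian PSF and noise model, not a real instrument.)
    blurred = gaussian_filter(sim_image, sigma=psf_sigma_pix)   # point-spread function
    ny, nx = blurred.shape
    ny, nx = ny - ny % rebin, nx - nx % rebin                   # crop to a multiple of rebin
    coarse = (blurred[:ny, :nx]
              .reshape(ny // rebin, rebin, nx // rebin, rebin)
              .mean(axis=(1, 3)))                               # average into detector pixels
    noise = np.random.default_rng(seed).normal(0.0, noise_rms, coarse.shape)
    return coarse + noise

# e.g. observed = mock_observation(high_res_simulation_map)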
A matter of observation
In cosmology, where researchers are studying the whole universe, there is a similar split between wanting to run simulations on as large a scale as possible and needing to simulate what current telescopes will see. ‘The strange thing about cosmology is that you can’t actually run your experiments,’ comments Tom Kitching, research fellow at the University of Edinburgh. ‘In the Large Hadron Collider, for example, they can run the experiment many times and see what happens, whereas we only have one universe! So we simulate the universe many, many times and then compare the simulations to what we observe,’ he adds. Kitching explains that, for
the most part, in cosmology: ‘We want more resolution – the more particles you have in your simulation the better.’ However, as in Rice’s work, cosmologists also need to simulate the instruments
that are used. That involves simulating all of the optical components within the telescope, the charge-coupled devices used and everything else that will affect the final result. Most simulations are run
using code developed within the astrophysics community itself. Dr Rice says community-developed software is used because astronomers know and trust it: ‘I’ve talked to the people who developed [the codes], so I can email them and ask if I can’t work it out.’ As a researcher, he comments, ‘it’s much easier to just download something that someone else has written – and if you make a change you can email them updated subroutines or parts of the code, and it propagates in that way.’ It’s not that he’s actually averse
to improved code, he insists, and often the astronomers are building code based on methods that originated in computer science. ‘So it’s not true that there’s no connection [or that] astronomers are developing a method from scratch. But we have our own problems we want to solve,’ adds Rice. Kitching, however, believes
that in some fields, data volumes are reaching a point where specialised input from computer scientists may be needed to get the best possible results. ‘It’s got to the point where, in order to run these simulations, having a degree in astronomy is maybe not actually as useful as a degree in computer science. This is because you really need to know how to code these things up in a very efficient way and, secondly, how to actually use the hardware,’ he explains.

[Image: Kepler’s science data processing pipeline. The most computationally intensive portions of the pipeline, the Transiting Planet Search and the Data Validation modules (shown in orange), are run on Pleiades.]

A lot of it comes down to
whether there is money available for developing software, says Rice. Commercial developers would naturally want to make money from anything they created – money the community can’t afford, he adds – and even running a joint project with computer scientists would need funding. In the United States, Rice explains, there is more public funding for code development and more work being done with computer scientists – and certainly the work being done on the Kepler project, using NASA’s supercomputer, seems to bear that out.
[Image: Kepler planetary candidates as of February 2011]
Catch a falling star
The Kepler project is attempting to identify planets that could potentially support life. By measuring the brightness of a star and spotting any regular drops in brightness that might indicate something in orbit, the team can identify possible planets and calculate, from the frequency of the transits, how big the planet is and the length of its year. Researchers can then calculate whether each planet is in the ‘habitable zone’ – far enough from the star, but not too far, so that liquid water can exist.
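As a rough illustration of the idea, a transit search of this kind can be sketched in a few lines of Python: fold the brightness measurements at a series of trial orbital periods and look for the period at which a repeated dip lines up. This is a deliberately simplified phase-folding search, not the algorithm in Kepler’s own Transiting Planet Search module; the function name and parameters are illustrative.

import numpy as np

def transit_search(time_days, flux, trial_periods_days, n_bins=200):
    # Fold the light curve at each trial period and score the deepest
    # repeated dip (simplified; real pipelines use matched filters and
    # far more careful detrending of the raw photometry).
    flux = flux / np.median(flux)                      # normalise brightness to ~1
    best_period, best_depth = None, 0.0
    for period in trial_periods_days:
        phase = (time_days % period) / period          # fold onto [0, 1)
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        counts = np.bincount(bins, minlength=n_bins)
        sums = np.bincount(bins, weights=flux, minlength=n_bins)
        binned = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
        depth = 1.0 - binned.min()                     # fractional dip at this period
        if depth > best_depth:
            best_period, best_depth = period, depth
    return best_period, best_depth

# Kepler samples each star roughly every 30 minutes, so for one star:
#   time_days = np.arange(n_samples) * 30.0 / (60 * 24)
# The dip depth is roughly (planet radius / star radius)**2, and the
# best-fitting period is the length of the planet’s year.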
The Kepler spacecraft is monitoring almost 200,000 stars and taking samples every 30 minutes, which means that the data volumes being produced are enormous. Originally the analysis was done at the Kepler Science Operations Centre (SOC), on a cluster of 300 CPUs. Todd Klaus is the lead software engineer at the Kepler SOC. He quickly realised that, as the data grew and the algorithms became more complex, the system in the operations centre was not going to be able to cope. ‘We had developed all the software before launch, using simulators, but once we got the flight data we [found we] had to enhance the algorithms. And naturally, that made things slow down. So in 2010 we made a decision to port the transit search algorithm to the NASA supercomputer,’ he says.
Running 30 months of data – around 1.4 trillion individual calculations for each star – would have taken over a month on Kepler’s own system; it can now be processed in three days on the supercomputer. ‘Fortunately I didn’t need to change the algorithms themselves; it was basically a matter of unpacking the data. Because we hadn’t had that many processors [at the operations centre], we’d process maybe a thousand stars per processor. So what I did was separate it out so that we could assign each star to a separate