
skipped. But automated prescriptive analytics will deliver accurate alerts when complex and highly multidimensional models of a patient’s condition predict impending serious health problems, and these models will prescribe an urgent medical intervention even when simple observation or traditional diagnostic methods do not indicate an emergency.
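As an illustrative sketch only (not a description of any system named in this article), a prescriptive alert of this kind amounts to scoring a multidimensional patient record against a trained model and prescribing an intervention when the predicted risk crosses a threshold, even though each signal on its own looks unremarkable. The feature names, weights, and threshold below are entirely hypothetical:

```python
# Hypothetical sketch: a multidimensional logistic risk model that
# raises an alert even when each individual vital sign looks near-normal.
import math

# Assumed weights from a (hypothetical) training process.
WEIGHTS = {"heart_rate": 0.05, "resp_rate": 0.1, "lactate": 1.0, "age": 0.02}
BIAS = -9.0
ALERT_THRESHOLD = 0.7  # probability above which an intervention is prescribed

def risk(patient: dict) -> float:
    """Logistic risk score combining several weakly abnormal signals."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def prescribe(patient: dict) -> str:
    """Prescriptive step: turn a predicted risk into a recommended action."""
    return ("urgent intervention" if risk(patient) > ALERT_THRESHOLD
            else "routine monitoring")

# Each value alone is near-normal, but the combination drives risk up.
patient = {"heart_rate": 95, "resp_rate": 22, "lactate": 2.4, "age": 71}
print(prescribe(patient))
```

The point of the sketch is the interaction effect: simple observation of any single vital sign would not trigger an alarm, but the combined score does.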

Regulatory oversight, governance
More data can mean more liability; more models increase the probability of making bad predictions. If flaws in data governance or modelling lead to undesirable outcomes, the public will demand oversight and accountability. There is an advanced analytics revolution in progress in healthcare, where much more individualised predictions and prescriptions can be generated based on more specific, electronically collected and stored patient data. But if the predictions are wrong, the results can be dire, and it is critical that the modelling process and the resulting predictions can be justified. In the pharmaceutical and medical-device industries, all analytics used in the manufacturing process must be documented and validated, and the integrity of the process is carefully scrutinised by independent regulatory bodies (e.g., the FDA in the USA). Similar regulatory oversight guides risk modelling

in the financial industries. Going forward, regulatory oversight will become more prevalent as the public demands careful scrutiny of all models that affect our lives.

What's next?
Complete analytics and analytic modelling platforms will need to incorporate features that enable version control, audit logs, and traceability of all actions taken. From a data perspective, data integrity and validity will become critical, which also means that some data that cannot be validated may no longer be collected, because doing so would pose a liability. If 'everything' is stored 'indefinitely,' it also becomes discoverable should a conflict lead to a lawsuit claiming that the defendant 'should have known…'
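As a minimal sketch of the kind of traceability feature described above (the record format and hash-chaining scheme are assumptions for illustration, not a description of any particular platform), an audit log can be made tamper-evident by chaining a hash over each preceding entry:

```python
# Hypothetical sketch: a tamper-evident audit log for modelling actions.
# Each entry includes a SHA-256 hash chained to the previous entry, so
# any retroactive edit to the history becomes detectable.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, user: str, action: str, detail: str) -> dict:
        """Append one auditable action, linked to the prior entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else GENESIS
        body = {"user": user, "action": action,
                "detail": detail, "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the hash chain; False means the log was altered."""
        prev = GENESIS
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            unhashed = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("analyst", "train_model", "model v2 trained on dataset A")
log.record("reviewer", "approve_model", "model v2 approved for production")
print(log.verify())
```

The design choice here mirrors the regulatory requirement in the text: it is not enough to store actions; it must be possible to demonstrate to an auditor that the record has not been rewritten after the fact.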

Summary
We believe that technology will continue to progress at an accelerated and perhaps exponential rate, contributing to growth in productivity. More data can be collected faster and in real time, stored cheaply, and analysed with large numbers of massively parallelised and distributed algorithms and methods. In short, Big Data enables more models and more predictions, which in turn allow for more efficient use of resources, less waste and scrap, and safer and more environmentally sustainable businesses and industries.

We believe that the most useful and successful analytics platforms of the future will not only embrace these technologies but will also enable a degree of automation and simplicity from the user's perspective, so that the limited time of the few decision makers and process experts can be devoted to the few truly critical areas, while a self-learning, validated, and compliant automated modelling system ensures a continuously improving process. Our influential technology will continue 'making the world more productive', even more so now that StatSoft has become part of Dell.

Paul Lewicki was the CEO of StatSoft until it was acquired by Dell earlier this year, when he joined the executive management of Dell to help with the integration process. Together with Thomas Hill, he is the author of Statistics: Methods and Applications (2005), a popular resource on statistics and data mining. Thomas Hill is currently executive director of analytics in Dell's Information Management Group, having previously served as VP for Analytic Solutions at StatSoft.

Engineering simulation: past, present and future

Bill Clark considers the successes of computer-aided engineering through the 'three ages of CFD'


Computational Fluid Dynamics (CFD) is about solving difficult engineering problems, using expensive software, enormous computing resources, and highly trained engineers. If the problems weren't difficult, it is doubtful that anyone would devote so much time and money to solving them. From the perspective of a modern engineer, it would be easy to assume that this desire to apply simulation technology to complex problems is a recent concern; that only today are we able to contemplate solving tough industrial problems, armed with a complex array of multi-physics simulation tools.

This is a misconception. Twenty or so years ago, commercial CFD was born from a desire to solve problems involving turbulence, heat transfer, and combustion, based on the vision of a small group of pioneering researchers who were able to see beyond the meagre computing resources available at the time, and to develop the techniques and methods that would ultimately revolutionise engineering.

CFD meshes took weeks, or even months, to construct, usually by a process of 'hand-meshing' in which an engineer (usually PhD-qualified) painstakingly built up meshes vertex by vertex. Although 'automatic meshing technology' was starting to become available in the early 90s, it was far from reliable, particularly when it came to defining the layers of prismatic cells required to accurately capture boundary layers. Another issue with the so-called 'automatic meshing' technology of the day was that it tended to generate more cells than the meagre computing resources of the time could handle. In 1994, I remember submitting a Star-CD simulation of 750,000 cells for the first time, fully expecting smoke to start flowing from the large Unix box that sat under my desk.

