that portability is desirable and include rules relating to it, it is rarely the primary concern.


Adopting an appropriate rule set

Choosing a static analysis tool that encompasses many such standards will provide access to the portability rules of all of them. Considering additional portability rules in tandem with those from a chosen standard will ensure more focus on that element of the development work, and hence, ensure the minimum possible impact in the event of such a change.


There is no consequential compromise in the adherence to the standard of choice. And by choosing a rule superset of the nominated standard, it is possible to ensure that a change in architecture will have the minimum possible impact. With the correct tools, it is also perfectly possible to cross-reference Code Review Reports to a nominated standard and still include the additional portability rules.
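The kind of construct such portability rules flag can be illustrated with a small sketch. The function and values below are hypothetical, not drawn from any particular standard: building a 32-bit value with native types such as `unsigned long` gives different results on ILP32 and LP64 targets, whereas the fixed-width types from `<stdint.h>` make the intended layout explicit on any conforming compiler.

```c
#include <stdint.h>

/* Non-portable version (commented out): on an LP64 target,
 * 'unsigned long' is 64 bits wide, so shifts and masks that assumed a
 * 32-bit word no longer describe the intended register layout.
 *
 * unsigned long pack_rgba(unsigned char r, ...);
 */

/* Portable sketch: fixed-width types pin the layout to exactly 32 bits
 * regardless of the target's native word size. */
uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)r << 24) |
           ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  |
            (uint32_t)a;
}
```

A rule set with a portability focus would typically require the `<stdint.h>` form throughout, so that a change of target architecture cannot silently alter the arithmetic.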


Adapting to new target hardware

When the time comes to port code to a new target, it is useful to focus on only those parts of the rule set that have implications for portability, regardless of whether target obsolescence results from lengthy development times.


If the portability-focused rule set has been in place from the start, this exercise should largely be one of confirming that there are no problems. If not, the ability to filter static code review reporting to focus on portability becomes invaluable.


In this case, there are likely to be changes spread through the code base. So once the changes have been made, how can it be proven that the code functionality has not been compromised?


Ensuring uncompromised functionality


Unit test tools often encourage the collation of a suite of regression tests as development progresses. If such tests exist, rerunning them for a new target environment is a trivial exercise. If there is no such existing portfolio of unit test cases, an alternative approach needs to be considered.
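The shape of such a regression suite can be sketched as follows. The unit under test and its values here are hypothetical; the point is that an assert-based suite built up during development can be recompiled and rerun unchanged on the new target to confirm identical behavior.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical unit under test: saturating 16-bit addition. */
static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;  /* widen to avoid overflow */
    if (sum > INT16_MAX) return INT16_MAX;
    if (sum < INT16_MIN) return INT16_MIN;
    return (int16_t)sum;
}

/* Regression suite accumulated during development: rerunning it on the
 * new target environment confirms the port has not changed behavior. */
static void regression_sat_add16(void)
{
    assert(sat_add16(100, 200) == 300);
    assert(sat_add16(INT16_MAX, 1) == INT16_MAX);   /* saturates high */
    assert(sat_add16(INT16_MIN, -1) == INT16_MIN);  /* saturates low  */
}
```

In practice a unit test tool generates and manages such cases automatically, but the principle is the same: the suite encodes the expected outputs, so a discrepancy on the new target is caught immediately.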


MILITARY EMBEDDED SYSTEMS March/April 2011 15

For example, it is possible to automatically generate a set of unit test cases based on code analysis. These test cases can then be used to prove that functionality is identical before and after modification. While unit test cases not based on requirements are generally inferior because of a lack of independence, this approach bears scrutiny here because the primary requirement is to confirm that functionality has not changed.
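The before-and-after comparison can be approximated even without generated test cases, by capturing the unit's outputs over a fixed input sweep on the original target and replaying the same sweep after porting. This is a hand-rolled sketch of the idea, with a hypothetical unit under test, not the output of any particular tool:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical unit under test. */
static int32_t scale(int32_t x) { return (x * 3) / 2; }

/* Fixed input sweep covering negative, zero, and positive cases. */
#define SWEEP_LEN 5
static const int32_t sweep[SWEEP_LEN] = { -100, -1, 0, 1, 100 };

/* Record the unit's output for every input in the sweep.  Run once on
 * the original target to establish the baseline. */
static void capture_baseline(int32_t out[SWEEP_LEN])
{
    for (size_t i = 0; i < SWEEP_LEN; i++)
        out[i] = scale(sweep[i]);
}

/* After porting, rerun the sweep and compare element by element.
 * Returns 1 if behavior is unchanged, 0 on any discrepancy. */
static int matches_baseline(const int32_t baseline[SWEEP_LEN])
{
    int32_t now[SWEEP_LEN];
    capture_baseline(now);
    for (size_t i = 0; i < SWEEP_LEN; i++)
        if (now[i] != baseline[i])
            return 0;
    return 1;
}
```

A tool-generated suite does essentially this at scale, deriving the sweep from analysis of the code's branches rather than from a hand-picked list.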


Another approach involves comparing “baselines” from test tools, which can show how code has been changed. These can be used in conjunction with the tools’ graphical representations of code architecture to ensure that even unchanged code is not affected by changes made elsewhere.


Such possibilities exist because modern test tool suites include a collection of tools that, although frequently used in a prescribed sequence, is often much more adaptable than such a rigid definition implies. By considering them not only as part of the development process but also as a tool kit, they have the flexibility to help in a plethora of situations … including this one.


Mark Pitchford has more than 25 years of experience in software development for engineering applications. Since 2001, he has specialized in software testing and works as a Field Applications Engineer with LDRA. He can be contacted at Mark.Pitchford@ldra.com.

