Feature: Software and tools
Sample suggestions for fixing the violations were requested using Parasoft's prompt templates, both with and without reasoning questions, and via the GitHub Copilot Fix command within VS Code. Using the VS Code API, we were able to elicit fixes for 432 violations from the dataset using GitHub Copilot (a sketch of this elicitation loop appears after the table captions below). Fixes provided by Copilot and the commercial tool were both based on the GPT-4o-2024-08-06 model. The commercial tool's fixes included enhanced prompting behind the scenes, without user input, supplying detailed information about the coding standard and the code itself. Its prompt generation draws on general LLM prompting techniques, reasoning, and proprietary techniques and data inside the commercial tooling, including the checker's documentation, to produce an improved fix recommendation.

Table 1: Results with reasoning prompts
Table 2: Results without reasoning prompts
Table 3: Comparison of win rates for different tools
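To make the elicitation step concrete, here is a minimal TypeScript sketch of how fixes can be requested through the VS Code extension API: it walks the diagnostics reported for a file, asks the registered code-action providers (Copilot among them, if installed) for quick fixes, and applies the one Copilot contributes. The applyCopilotFixes helper and the title-based filter for spotting Copilot's action are our own illustrative assumptions, not Parasoft's published pipeline; only getDiagnostics, executeCodeActionProvider and applyEdit are standard VS Code API.

import * as vscode from 'vscode';

// Hypothetical helper: for each static-analysis diagnostic in a file, query the
// registered code-action providers for quick fixes and apply the first one that
// Copilot appears to contribute. The title filter is a guess, not a stable ID.
export async function applyCopilotFixes(uri: vscode.Uri): Promise<number> {
  let applied = 0;
  for (const diag of vscode.languages.getDiagnostics(uri)) {
    // Built-in command that queries all registered code-action providers.
    const actions = await vscode.commands.executeCommand<vscode.CodeAction[]>(
      'vscode.executeCodeActionProvider', uri, diag.range,
      vscode.CodeActionKind.QuickFix.value);
    const fix = (actions ?? []).find(a => /copilot/i.test(a.title));
    if (!fix) { continue; }
    // A code action may carry a workspace edit, a follow-up command, or both.
    if (fix.edit) { await vscode.workspace.applyEdit(fix.edit); }
    if (fix.command) {
      await vscode.commands.executeCommand(fix.command.command,
        ...(fix.command.arguments ?? []));
    }
    applied++;
  }
  return applied;
}

In practice a loop like this would be driven across every file in the dataset, recording each suggested fix before moving on to the next violation.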
Assessing the results

In order to assess the results fairly, we wanted to do more than just blindly check each recommended code fix and possibly inject human bias into the results. Instead, we used the GPT-4o-2024-08-06 model to evaluate the quality of the fixes. It was given prompts to compare two solutions for fixing a static analysis violation and determine which one was better, or whether they were equally good. To avoid any bias from the order in which the solutions were presented, each pair of solutions was compared twice, with the order switched each time; a sketch of this judging step follows.
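The following is a minimal sketch of that judging step, assuming an OpenAI-style chat-completions endpoint; the study's exact prompt wording is not published, so the phrasing here is illustrative only.

// Ask the judge model once which of two candidate fixes is better.
type Verdict = 'A' | 'B' | 'TIE';

async function judgeOnce(violation: string, fixA: string,
                         fixB: string): Promise<Verdict> {
  const resp = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o-2024-08-06',
      messages: [{
        role: 'user',
        content: `Two candidate fixes for this static analysis violation:\n` +
          `${violation}\n\nFix A:\n${fixA}\n\nFix B:\n${fixB}\n\n` +
          `Which fix is better? Answer exactly A, B, or TIE.`,
      }],
      temperature: 0,
    }),
  });
  const data = await resp.json();
  const answer = data.choices[0].message.content.trim().toUpperCase();
  return answer === 'A' || answer === 'B' ? answer : 'TIE';
}

// Judge each pair twice with the order switched, to cancel position bias.
async function judgePair(violation: string, fix1: string, fix2: string) {
  const first = await judgeOnce(violation, fix1, fix2);   // fix1 shown as A
  const second = await judgeOnce(violation, fix2, fix1);  // fix1 shown as B
  return { first, second };
}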
Tables 1-3 compare results between the commercial tool and GitHub Copilot. The commercial tool was tested both with and without additional reasoning questions in the prompts. Reasoning questions help to analyse code violations and enhance the model's fix-generation capabilities. The 'win rate' is the percentage of fix comparisons where a particular solution was declared best, i.e., the 'winner' – see Figure 4. One way the two order-swapped verdicts can be folded into such a win rate is sketched below.
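The article does not spell out how the two order-swapped verdicts per pair are aggregated, so the rule below (a solution only 'wins' if it is preferred in both orders; disagreement across orders counts as a tie) is an assumption for illustration.

type Verdict = 'A' | 'B' | 'TIE';
type Outcome = 'TOOL' | 'COPILOT' | 'TIE';

// Combine the two order-swapped verdicts for one violation. In the first round
// the commercial tool's fix was shown as A; in the second round, as B.
function combine(first: Verdict, second: Verdict): Outcome {
  if (first === 'A' && second === 'B') { return 'TOOL'; }
  if (first === 'B' && second === 'A') { return 'COPILOT'; }
  return 'TIE'; // disagreement across orders is treated as position bias
}

// Win rate: percentage of comparisons where a given solution was declared best.
function winRate(outcomes: Outcome[], who: Outcome): number {
  const wins = outcomes.filter(o => o === who).length;
  return (100 * wins) / outcomes.length;
}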
Figure 4: C fixes recommended by Copilot vs those with reasoning questions

It can be seen that the commercial tool's prompts consistently generated better fix recommendations than GitHub Copilot on its own. Both bare and reasoning prompts performed better, with reasoning prompts performing slightly better. One reason the commercial tool can produce better results is that it has access to more information than Copilot, such as checker documentation, control-flow graphs and other proprietary violation-related properties. This