Column: Embedded design


Uncovering performance issues of your interrupt request handler


Mohammed Billoo, founder of MAB Labs, charts his experiences with the new Linux support in Percepio’s Tracealyzer v4.4 tool through real-world projects


When developing an application for a Linux-based system, it is important to configure the system for maximum performance. For example, in 2017, I was part of a team that developed an application to receive and process data from a software-defined radio (SDR). Data was being output by the SDR at a very high rate, so minimising packet loss was important. Unfortunately, we saw substantial packet loss during initial bring-up of the Linux system and needed to determine the cause. One thought was that the so-called CPU affinity of the application was not set appropriately. At the time we didn’t have access to Tracealyzer for Linux, and our initial suspicion proved incorrect. I will revisit the problem here to see how Tracealyzer can help determine why the hypothesis was incorrect.


The setup
Since I no longer have access to the original system, SDR or application, I’m using a Jetson Nano as a replacement for the original Linux system, another Linux system in place of the SDR, and “iperf” as the userspace application. iperf is a utility commonly used to test the performance of a network link between two Linux systems; see Figure 1. On the right is the Jetson Nano (64-bit ARM architecture) running iperf in server mode; on the left is the host machine (x86_64 architecture) running iperf in client mode. I’ll adjust the CPU affinity of the iperf server on the Jetson Nano to observe how that parameter affects overall throughput performance between client and server. The term “CPU affinity” signifies the particular CPU core that an execution context is pinned to, usually set per application. My hypothesis is that if the CPU affinity of the interrupt and corresponding handler matches the CPU affinity of the process receiving packets, then packet loss should be minimised, because no time is wasted moving data between cores.

First, I will determine the affinity of the eth0 interface on the Jetson Nano, which will tell me which processor core handles interrupts from the eth0 interface. To do this, I execute the following command on the Jetson Nano:


$> cat /proc/interrupts | grep eth0
407: 1881331 0 0 0 Tegra PCIe MSI 0 Edge eth0
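As an aside, the per-CPU columns in that output can be inspected programmatically rather than by eye, and an interrupt can in principle be steered to a chosen core by writing a bitmask to /proc/irq/&lt;n&gt;/smp_affinity. The helper names below (busiest_cpu, cpu_mask) are my own illustrative sketch, not part of the setup described here, and the smp_affinity write assumes root privileges and that IRQ 407 is eth0, as shown above:

```shell
#!/bin/sh
# busiest_cpu: given one /proc/interrupts line on stdin, report which of the
# four per-CPU count columns (the Jetson Nano has four cores) is largest.
busiest_cpu() {
  awk '{ max = -1; cpu = 0
         for (i = 2; i <= 5; i++) if ($i + 0 > max) { max = $i + 0; cpu = i - 2 }
         print "CPU" cpu }'
}

# cpu_mask: hex bitmask selecting a single core, the format smp_affinity expects.
cpu_mask() { printf '%x\n' $((1 << $1)); }

# Using the line captured above:
echo "407: 1881331 0 0 0 Tegra PCIe MSI 0 Edge eth0" | busiest_cpu   # CPU0

# Steering the IRQ would then look like this (root only, so left commented out):
# echo "$(cpu_mask 0)" > /proc/irq/407/smp_affinity
```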


We can thus see that the first core (CPU0) handles these interrupts. Next, I run iperf in server mode on the Jetson Nano:

$> iperf -s -B 192.168.2.247 -p 5001


Still on the Jetson Nano, I execute the following commands to determine the default CPU affinity:


$> ps ax | grep iperf
12910 pts/0 Sl+ 1:25 iperf -s -B 192.168.2.247 -p 5001
$> taskset -p --cpu-list 12910
pid 12910’s current affinity list: 0-3
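taskset can also change the affinity of a process that is already running, which is how the iperf server can be pinned to CPU0, the core handling eth0 interrupts. A minimal sketch, using a throwaway sleep process as a stand-in, since the PID above is specific to my session:

```shell
#!/bin/sh
# Start a stand-in process (in the real experiment this would be the iperf server).
sleep 30 &
pid=$!

# Restrict it to CPU0 by passing a CPU list before the PID.
taskset -p --cpu-list 0 "$pid"

# Verify: the reported affinity list should now be just "0", not "0-3".
taskset -p --cpu-list "$pid"

kill "$pid"
```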


Figure 1: The basic setup of an SDR application system based on Linux


The first command retrieves the process ID (PID) of the iperf command. I use the PID in the taskset command, together


08 April 2021 www.electronicsworld.co.uk

