

Each task in the Application Usecase graph carries a set of parameters so that the dynamic simulation model can be configured correctly.


These parameters generally describe:
1. Read DMA characteristics: number of blocks, block sizes, memory addresses and memory access patterns
2. Processing characteristics: the delay which the task requires in order to perform its processing
3. Write DMA characteristics: number of blocks, block sizes, memory addresses and memory access patterns
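The three groups of parameters above can be sketched as a simple data structure. This is an illustrative model only; the class and field names (`DmaConfig`, `TaskParams`, the example "isp" task and its values) are assumptions, not part of any actual tool.

```python
from dataclasses import dataclass

@dataclass
class DmaConfig:
    """Read- or write-DMA characteristics for one processing task."""
    num_blocks: int        # number of blocks transferred
    block_size: int        # bytes per block
    base_address: int      # starting memory address
    access_pattern: str    # e.g. "linear" or "strided" (illustrative labels)

@dataclass
class TaskParams:
    """Parameters describing one node of the application usecase graph."""
    name: str
    read_dma: DmaConfig
    processing_delay_us: float   # delay the task needs for its processing
    write_dma: DmaConfig

    @property
    def read_bytes(self) -> int:
        return self.read_dma.num_blocks * self.read_dma.block_size

# Example: a hypothetical ISP stage reading 16 blocks of 4 KiB
isp = TaskParams(
    name="isp",
    read_dma=DmaConfig(16, 4096, 0x8000_0000, "linear"),
    processing_delay_us=120.0,
    write_dma=DmaConfig(16, 4096, 0x8800_0000, "linear"),
)
print(isp.read_bytes)  # 65536
```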


Figure 2 shows that this information is best described in tabular format, where rows represent processing tasks and columns are parameters associated with the task. The use case graph may also have an embedded sub-graph, which is often the case with AI applications that describe the algorithm in terms of a Neural Network computation graph.


Usecase parameters captured in tabular format, as shown in Figure 2, are sufficient to describe the application intent regarding dataflows between processing stages and the processing delay of a given stage. The added benefit of drawing the graph to the left of the table is that the data flow, and hence the relationship between nodes as processing stages, becomes intuitive to understand. Even for large graphs the method remains applicable, with the supplementary information readily available if required. Separate from the Application Usecase is a model of the Hardware Platform, which performs the data transfers and processing delays as prescribed by the Usecase model.
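The pairing of graph and table can be captured directly in code: the table holds per-task parameters, while an edge list holds the dataflow between stages. A minimal sketch, with task names and values invented for illustration:

```python
# Hypothetical usecase capture: the dict mirrors the parameter table
# (rows = tasks, keys = columns), the edge list mirrors the dataflow graph.
usecase_table = {
    "capture": {"read_blocks": 0,  "block_size": 0,    "delay_us": 10.0},
    "isp":     {"read_blocks": 16, "block_size": 4096, "delay_us": 120.0},
    "encode":  {"read_blocks": 16, "block_size": 4096, "delay_us": 200.0},
}
usecase_edges = [("capture", "isp"), ("isp", "encode")]  # dataflow order

def successors(task: str) -> list:
    """Stages fed by `task`, recovered from the edge list."""
    return [dst for src, dst in usecase_edges if src == task]

print(successors("capture"))  # ['isp']
```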


The Hardware Platform model will typically have the following capabilities:
1. Generate and initiate protocol-compliant transactions to local memory, global memory or any IO device
2. Simulate arbitration delays at all levels of a hierarchical interconnect
3. Simulate memory access delays in a memory controller model as per the chosen JEDEC memory standard
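Capabilities 2 and 3 amount to applying delay contributions to each transaction. The sketch below is an assumption of how such costs might be combined, not Sondrel's actual model; the class name and the delay values are illustrative.

```python
# Illustrative platform model: each transaction pays a per-hop arbitration
# cost through the interconnect plus a per-beat memory-controller cost
# (the per-beat figure would come from the chosen JEDEC standard's timings).
class PlatformModel:
    def __init__(self, arbitration_ns: float, memory_ns_per_beat: float):
        self.arbitration_ns = arbitration_ns          # per interconnect hop
        self.memory_ns_per_beat = memory_ns_per_beat  # memory access cost

    def transaction_delay(self, beats: int, interconnect_hops: int) -> float:
        """Delay for one protocol transaction of `beats` data beats."""
        return (interconnect_hops * self.arbitration_ns
                + beats * self.memory_ns_per_beat)

plat = PlatformModel(arbitration_ns=5.0, memory_ns_per_beat=1.25)
print(plat.transaction_delay(beats=64, interconnect_hops=2))  # 90.0
```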


So far, we have defined two simulation constructs: the Application Usecase Model and the Hardware Platform Model. What is now required is a specification of how the Usecase maps onto the Hardware Platform subsystems; that is, which tasks of the Application Usecase model are run by which subsystems in the Hardware Platform model. Figure 3 shows the full simulation model with usecase tasks mapped onto subsystems of our Architecting the Future SFA 350A platform. The Full System Model on the left of Figure 3 is the dynamic performance model used for Usecase and Hardware Platform exploration.
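At its simplest, the mapping specification is a lookup from task name to subsystem name. A minimal sketch, where both the task and subsystem names are invented for illustration (they are not the SFA 350A's actual block list):

```python
# Hypothetical task-to-subsystem mapping for the full system model.
task_mapping = {
    "capture": "camera_if",
    "isp":     "isp_subsystem",
    "encode":  "video_codec",
}

def subsystem_for(task: str) -> str:
    """Resolve which platform subsystem runs a given usecase task."""
    try:
        return task_mapping[task]
    except KeyError:
        raise ValueError(f"usecase task {task!r} is not mapped to any subsystem")

print(subsystem_for("isp"))  # isp_subsystem
```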


Every node in the Usecase graph is traversed during simulation, with the subsystem master transactor generating and initiating memory transactions to one or more slave transactors. As a result, delays due to contention, pipeline stages or outstanding transactions are applied to every transaction, and these cumulatively sum to the total duration that the task is active.
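This accumulation can be sketched as follows; the function name and the numeric delays are assumptions for illustration only.

```python
# Sketch: a task's active duration is the cumulative sum of its transaction
# delays (contention, pipeline stages, outstanding-transaction stalls)
# plus the task's own processing delay.
def task_active_duration(transaction_delays_ns, processing_delay_ns):
    """Total time the task is active during one usecase traversal."""
    return sum(transaction_delays_ns) + processing_delay_ns

delays = [90.0, 95.0, 110.0]  # per-transaction delays, incl. contention
print(task_active_duration(delays, 1200.0))  # 1495.0
```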


The temporal simulation view on the right of Figure 3 shows the active duration of each task for a single traversal of the Application Usecase. The duration of the entire chain is defined as the Usecase Latency. Having one visualisation showing the Hardware Platform, the Application Usecase and the temporal simulation view often works very well for the various stakeholders because it is intuitive to follow.

A single traversal on its own is of limited use, besides providing some sanity checks on the setup of the environment. For thorough System Performance Exploration, multiple traversals need to be run, and in this setup we see the two phases of the simulation: a transient phase while the pipeline is filling up, followed by the steady state when the pipeline is full and the system is therefore at maximum contention. During the steady state, metrics are gathered to understand the performance characteristics and bounds of the system. This guides further tuning and exploration of the usecase and hardware platform. Figure 4 shows two configurations of the hardware platform and the resulting temporal views. One system is set up for low latency by using direct streaming interfaces to avoid data exchange through the DDR memory.
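The latency definition and the transient/steady-state split can be sketched numerically. The assumption here is that the transient phase lasts for a number of traversals equal to the pipeline depth; the function names and sample latencies are illustrative.

```python
# Sketch: Usecase Latency is the sum of active durations along the chain;
# steady-state metrics are gathered only after the pipeline has filled.
def usecase_latency(active_durations_ns):
    """Latency of one traversal: total active duration along the task chain."""
    return sum(active_durations_ns)

def steady_state_mean(per_traversal_latencies, pipeline_depth):
    """Discard the transient (pipeline-filling) traversals before averaging."""
    steady = per_traversal_latencies[pipeline_depth:]
    return sum(steady) / len(steady)

print(usecase_latency([10.0, 1495.0, 2100.0]))       # 3605.0
lat = [1495.0, 1480.0, 1650.0, 1655.0, 1645.0]       # first runs = transient
print(steady_state_mean(lat, pipeline_depth=2))      # 1650.0
```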


Once again, showing the two systems visually brings clarity, so that all stakeholders can understand the comparison with a little guidance.


The complete architecture exploration methodology relates to usecase and platform requirements, simulation metrics, key performance indicators and reports.


Figure 5 shows the flow of information in the following order:
1. The Application Usecase is defined first. The tabular format for capturing the usecase is crucial here, as shown previously in Figure 2
2. Usecase Requirements associated with the Application Usecase are stated
3. Usecase Requirements are converted into Key Performance Indicators, which are thresholds on metrics expected from simulation runs
4. Simulation metrics are collected from simulation runs
5. A usecase performance summary report is produced by checking whether each metric meets its Key Performance Indicator
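Step 5 reduces to comparing each collected metric against its KPI threshold. A minimal sketch; the metric names, threshold values and the "lower is better" convention are all assumptions for illustration.

```python
# Sketch of step 5: a performance summary produced by checking each
# simulation metric against its KPI threshold (here: maximum allowed value).
kpis = {  # metric name -> threshold (illustrative)
    "usecase_latency_us": 1700.0,
    "ddr_bandwidth_gbps": 12.8,
}
metrics = {  # collected from simulation runs (illustrative)
    "usecase_latency_us": 1650.0,
    "ddr_bandwidth_gbps": 13.1,
}

def performance_summary(metrics, kpis):
    """PASS/FAIL verdict per KPI, assuming lower metric values are better."""
    return {name: ("PASS" if metrics[name] <= limit else "FAIL")
            for name, limit in kpis.items()}

print(performance_summary(metrics, kpis))
# {'usecase_latency_us': 'PASS', 'ddr_bandwidth_gbps': 'FAIL'}
```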


A similar flow applies to Hardware Platform Requirements, whereby:
1. The Hardware Platform is defined first
2. Platform Requirements are stated
3. Platform KPIs are extracted from the Requirements
4. Platform simulation metrics are collected
5. A Platform performance summary is generated by comparing metrics with KPIs


Further details of Sondrel’s Architecting the Future hardware platforms can be found at https://www.sondrel.com/solutions/architecting-the-future


Figure 5: Architectural exploration based on usecase KPIs

22 May 2022, Components in Electronics, www.cieonline.co.uk

