Feature: Processors
Advances in parallel processing with neuromorphic analogue chip implementations
By Alexander Timofeev, CEO and Founder, Polyn Technology

The shift from cloud to edge computing has changed the sensor node concept, from using a simple DSP near the sensor to applying deep neural network functionality at the sensor level. Hence, most recent trends in this field have targeted device energy optimisation and computing latency reduction.
Going parallel

Traditional computers process tasks sequentially: once a task is completed, the next one begins. Although this method still serves many applications today, demand is growing for faster, more optimised and more energy-efficient computation, and parallel processing fits the bill. Computer scientists and engineers approach parallel processing as the human brain does: data is distributed in parallel to all the neurons of a layer, then instantly propagated to the next layer, giving rise to neuromorphic computing. This makes neural networks computationally fast, allowing analysis and decisions to be as close to real time as possible. It also makes them much more energy-efficient than other types of processing and, crucially, more robust, too. For example, an error in algorithm code will almost inevitably lead to a system error, whereas an error in one or several connections of a neural network does not lead to failure: connections are interdependent, work in parallel and are duplicated on multiple levels.

Neuromorphic analogue signal processing, or NASP, chip design embodies the approach of a sparse neural network, with only the necessary connections between neurons required for inference.

18 July/August 2022
www.electronicsworld.co.uk
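The parallel propagation and connection-level robustness described above can be sketched in a few lines of Python. The layer sizes, random weights and the choice of "broken" connections below are illustrative assumptions for the sketch, not details from the article.

```python
# Sketch: layer-at-once propagation, and graceful degradation when a
# few connections fail. Sizes and weights are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

W1 = rng.normal(size=(16, 8))   # input -> hidden connections
W2 = rng.normal(size=(8, 4))    # hidden -> output connections

def forward(x, W1, W2):
    h = relu(x @ W1)            # every hidden neuron computed in one parallel step
    return h @ W2               # then propagated to the next layer

x = rng.normal(size=16)
y_ref = forward(x, W1, W2)

# Break a few of the 128 input->hidden connections to mimic faulty links.
W1_faulty = W1.copy()
W1_faulty[0, :3] = 0.0
y_faulty = forward(x, W1_faulty, W2)

# The network still yields a nearby output rather than failing outright.
err = np.linalg.norm(y_faulty - y_ref) / np.linalg.norm(y_ref)
print(f"relative output change: {err:.3f}")
```

Contrast this with deleting a line from conventional algorithm code, which would typically crash or corrupt the result rather than perturb it slightly.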
Neural network implementations

A digital neural network is a model with simulated neurons, computed as standard step-by-step, consecutive maths operations in the digital processor core. Digital implementation is not well suited to neural network processing, since it cannot operate on the massively parallel data that neural network computation requires. Digital neural networks can be implemented on traditional processors, but with lower efficiency. There has been significant progress in neural network digital implementation
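The "step-by-step consecutive maths operations" point can be made concrete: a conventional core evaluates even a single neuron's weighted sum as a chain of sequential multiply-accumulates, whereas an analogue neuron forms the whole sum at once (for example, by summing currents). The weights and inputs here are made-up values for the sketch.

```python
# Sketch: how a digital core reaches a neuron's weighted sum, one
# multiply-accumulate per step. Values are illustrative only.
weights = [0.5, -1.0, 0.25, 2.0]
inputs  = [1.0,  2.0, 4.0, -0.5]

acc = 0.0
for w, x in zip(weights, inputs):
    acc += w * x          # one sequential MAC per "clock step"

# Conceptually, an analogue neuron produces the same sum in a single
# settling step; here we just verify both views agree.
parallel_sum = sum(w * x for w, x in zip(weights, inputs))
print(acc, parallel_sum)   # → -1.5 -1.5
```

With n inputs the digital route costs n dependent steps per neuron, which is why large layers on a sequential core trade latency and energy for generality.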