COVER STORY

As a rule, it is very important that all signal processing modules are grouped in an integrated algorithm suite. Processing latencies would needlessly increase, and overall system performance would be degraded, if functional blocks were implemented in isolation from one another. For example, a beamforming algorithm should always be implemented together with acoustic echo cancelling (AEC) and, optimally, sourced from the same provider. If the beamforming algorithm introduces any non-linear effects on the signal, the AEC will most certainly produce unsatisfactory results. The best digital signal processing results are achieved by an integrated algorithm bundle that receives uncorrupted microphone signals.

Standard linear beamforming and ADI-proprietary algorithms are compared in detail in order to fully understand the performance potential of advanced beamforming algorithms. The plots in Figure 1 show the polar characteristics of three different BF algorithms; their frequency responses in the in-beam and off-beam directions are compared in Figure 2. A standard linear super-cardioid algorithm based on a 2-mic array serves as the benchmark (black curves). The benchmark curve exhibits maximum sensitivity in the typical zero-angle direction (i.e., maximum off-beam attenuation) and a “rear lobe” at 180°, where the off-beam attenuation is lower. The resultant rear lobe is a trade-off with beam width in a linear algorithm. A cardioid beam (not shown) has its maximum attenuation exactly at 180°; however, its receptive area is broader than that of a hyper- or super-cardioid configuration. Beams with less significant rear lobes and higher off-beam attenuation can be achieved with non-linear algorithmic approaches, with the red curve showing an ADI-proprietary 2-mic algorithm of this class (microphone spacing: 20mm).
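For readers who want to experiment, the following short Python sketch evaluates the idealized first-order patterns (cardioid, super-cardioid, hyper-cardioid) that a linear 2-mic differential beamformer approximates; the coefficients and the helper function are textbook assumptions, not the algorithms plotted in Figure 1.

```python
import numpy as np

# Idealized first-order polar patterns: the far-field, frequency-independent
# approximation of a 2-mic differential beamformer. Coefficients are the
# textbook values, not those of the algorithms shown in Figure 1.
theta = np.radians(np.arange(0, 360))

def first_order_db(alpha):
    """R(theta) = alpha + (1 - alpha) * cos(theta), in dB re. the on-axis response."""
    r = np.abs(alpha + (1.0 - alpha) * np.cos(theta))
    return 20.0 * np.log10(np.maximum(r, 1e-3))   # floor at -60 dB

cardioid       = first_order_db(0.50)   # null exactly at 180 deg, broadest beam
super_cardioid = first_order_db(0.37)   # narrower beam, small rear lobe at 180 deg
hyper_cardioid = first_order_db(0.25)   # narrowest beam, larger rear lobe

print("attenuation at 180 deg:  cardioid %6.1f dB  super %6.1f dB  hyper %6.1f dB"
      % (cardioid[180], super_cardioid[180], hyper_cardioid[180]))
```

The narrower the main beam, the stronger the rear lobe at 180°, which is the linear-algorithm trade-off the benchmark curve illustrates.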


Figure 1: Polar attenuation characteristic of different BF algorithms

With two omni-directional microphones in an array, there is always a rotational symmetry of the beam shape. In other words, the attenuation at X° in the polar plot is the same as at 360° minus X°. This assumes that the 0°–180° line of the polar plot is equivalent to the imaginary line connecting the two microphones. The 3-dimensional beam shape can be imagined by rotating the 2-D polar plot around this microphone axis. Asymmetric beam shapes without rotational symmetry, or narrower beams, require at least three microphones arranged in a triangle. For example, in a typical overhead console installation, a 2-mic array can attenuate sound from the windshield. However, in such an orientation, a 2-mic array cannot distinguish the driver from the passenger. Rotating the array by 90° would make such driver/passenger distinction possible, but the noise from the windshield would then not be distinguishable from sounds inside the cabin. Both windshield noise attenuation and driver/passenger differentiation are only possible using three or more omni microphones configured in an array. An exemplary polar characteristic of a respective ADI-proprietary 3-mic algorithm is given by the green curve in Figure 1, where the microphones are arranged in an equilateral triangle with 20mm spacing.
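The symmetry argument can be verified directly: for a far-field source, the only spatial cue available to a 2-mic array is the inter-microphone delay, and that delay is identical for X° and 360° minus X°. A minimal sketch, assuming plane-wave propagation and the 20mm spacing used above (the helper name is hypothetical):

```python
import numpy as np

c = 343.0   # speed of sound in m/s (assumed)
d = 0.02    # 20 mm microphone spacing

def pair_delay_us(angle_deg, spacing=d):
    """Far-field delay between the two mics for a source at angle_deg,
    where the 0-180 deg line is the line connecting the microphones."""
    return 1e6 * spacing * np.cos(np.radians(angle_deg)) / c

for a in (30, 330, 120, 240):
    print(f"{a:3d} deg -> {pair_delay_us(a):+6.2f} us")
# 30 deg and 330 deg (= 360 - 30) produce the same delay, as do 120 and 240:
# whatever the algorithm does with this single delay, it cannot separate the
# two directions, hence the rotational symmetry of any 2-mic beam.
```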


Polar plots are computed with band-limited white noise arriving at the microphone array from different angles. The audio bandwidth is limited to 100–7000Hz, which is the wide-band (or HD-voice) bandwidth of state-of-the-art cell phone networks. Figure 2 compares the frequency response curves of the different algorithm types. In the in-beam direction, the frequency response of all algorithms is, as expected, flat within the desired audio bandwidth. The off-beam frequency responses are computed for the off-beam half-space (90° through 270°), confirming high off-beam attenuation over a wide frequency range.

Figure 2: In-beam (dashed lines) and off-beam (bold lines) frequency responses of different BF algorithms
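The sketch below illustrates one way such in-beam and off-beam frequency responses can be evaluated: a generic delay-and-sum beamformer is swept across the 100–7000Hz band and its off-beam power is averaged over the 90°–270° half-space. It is an illustrative assumption only; a plain delay-and-sum array with 20mm spacing will not reach the off-beam attenuation of the proprietary algorithms shown in Figure 2.

```python
import numpy as np

c = 343.0      # speed of sound, m/s (assumed)
d = 0.02       # 20 mm microphone spacing, as in the article's examples
freqs = np.arange(100.0, 7001.0, 50.0)   # HD-voice band, 100-7000 Hz

def dsb_response_db(f, angle_deg, steer_deg=0.0):
    """Magnitude response (dB) of a generic 2-mic delay-and-sum beamformer
    steered to steer_deg; a textbook DSB, not ADI's algorithms."""
    tau = (d / c) * (np.cos(np.radians(angle_deg)) - np.cos(np.radians(steer_deg)))
    h = 0.5 * np.abs(1.0 + np.exp(-2j * np.pi * f * tau))
    return 20.0 * np.log10(np.maximum(h, 1e-9))

# In-beam response (0 deg): flat across the band, as the article states.
in_beam = np.array([dsb_response_db(f, 0.0) for f in freqs])

# Off-beam response: average the power over the 90..270 deg half-space.
off_angles = np.arange(90.0, 271.0, 5.0)
off_beam = np.array([10.0 * np.log10(np.mean(10.0 ** (dsb_response_db(f, off_angles) / 10.0)))
                     for f in freqs])

k = np.argmin(np.abs(freqs - 1000.0))
print(f"in-beam ripple: {in_beam.max() - in_beam.min():.2f} dB")
print(f"off-beam attenuation at 1 kHz: {off_beam[k]:.1f} dB")
```

At low frequencies the generic array shows almost no off-beam attenuation, which illustrates why more advanced algorithms are needed to achieve the curves shown in Figure 2.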


The relationship between array microphone spacing and audio bandwidth, and hence sample rate, is worth further discussion. Wide-band HD-voice uses a sample rate of 16kHz, which is a good choice for speech transmission. There is a huge difference in voice quality and speech intelligibility between the current 16kHz wide-band sample rate and the 8kHz used in earlier generations of narrow-band systems. Driven by speech recognition providers, there is growing demand for even higher sample rates, such as 24kHz or 32kHz. Some specifications even require the voice-band application to run at 48kHz, which is typically the primary system audio sample rate; the underlying motivation is to avoid any internal sample rate conversion. However, the additional computational resources required to support these high sample rates cannot be justified by a tangible audible benefit, so 16kHz or 24kHz are now widely accepted as the recommended sample rates for most voice-band applications.

High sample rates are problematic for BF applications because spatial aliasing occurs at frequencies equaling the speed of sound divided by twice the microphone spacing. Spatial aliasing is undesirable because BF is not possible at such aliasing frequencies. Spatial aliasing can be avoided in a wide-band system (16kHz sample rate) if the microphone spacing is limited to 21mm or less. Higher sample rates require smaller spacings to avoid spatial aliasing. However, overly small mic spacing is also undesirable, because microphone tolerances and especially the intrinsic (non-acoustic) noise of the microphone sensors can become an issue. Signal differences between the microphones of an array become marginal if the spacing is small, and disturbances such as intrinsic noise and sensitivity deviations between microphones can overwhelm the signal difference. In practice, microphone spacing should not be smaller than 10mm.
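A quick back-of-the-envelope check of this spacing rule, assuming a speed of sound of roughly 343m/s:

```python
c = 343.0   # speed of sound in air, m/s (assumed, ~20 degC)

def spatial_alias_freq_hz(spacing_m):
    """First spatial-aliasing frequency of a 2-mic array: f = c / (2 * d)."""
    return c / (2.0 * spacing_m)

for d_mm in (21, 15, 10):
    f = spatial_alias_freq_hz(d_mm / 1000.0)
    print(f"spacing {d_mm:2d} mm -> spatial aliasing above {f / 1000.0:.2f} kHz")
# 21 mm gives ~8.2 kHz, just above the 8 kHz Nyquist limit of a 16 kHz system,
# which is why wide-band arrays can use up to about 21 mm spacing.
```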


A2B technology overview

The A2B technology has been specifically developed to simplify the connectivity challenge in microphone- and sensor-intensive applications. From an implementation standpoint, A2B is a single-main, multiple-sub-node (up to 10), line topology.
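As a purely illustrative model of this topology (the class and field names below are hypothetical and not part of any ADI software interface), one A2B line can be thought of as a single main node with an ordered chain of at most ten sub-nodes:

```python
from dataclasses import dataclass, field
from typing import List

MAX_SUB_NODES = 10   # per the article: up to 10 sub-nodes on one A2B line

@dataclass
class A2BSubNode:
    name: str        # e.g. "mic_pod_roof" (hypothetical label)
    position: int    # 0 = closest to the main node, increasing down the line

@dataclass
class A2BLine:
    main: str                                # host-attached main transceiver
    subs: List[A2BSubNode] = field(default_factory=list)

    def add_sub(self, name: str) -> None:
        if len(self.subs) >= MAX_SUB_NODES:
            raise ValueError("an A2B line supports at most 10 sub-nodes")
        self.subs.append(A2BSubNode(name, position=len(self.subs)))

line = A2BLine(main="head_unit")
for pod in ("mic_array_front", "mic_array_rear", "anc_sensor"):
    line.add_sub(pod)
print([f"{s.position}:{s.name}" for s in line.subs])
```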


The third generation of A2B transceivers, which is currently in full production, consists of five family members – all offered in automotive, industrial, and consumer temperature ranges. The full-featured AD2428W, together with four feature-reduced, lower cost derivatives –

