ANALYSIS: EVENT-BASED VISION
Event cameras have their moment at MWC 2023: enabling a new era of photography in consumer devices
Luca Verre, Co-founder and CEO of Prophesee and a Photonics100 honoree, highlights how event-based vision is set to revolutionise mobile photography
The recent Mobile World Congress (MWC), the annual showcase for the newest model smartphones and other mobile devices, reinforced the most significant trend in how this massive consumer product sector is evolving. Driven by the TikTok/Snapchat/Instagram generation – enabled by more AI-enriched capabilities – and shaped by generational shifts in how humans communicate, the camera has become the centrepiece of the modern mobile device. Eye-catching industrial designs, bigger and brighter displays, and higher-bandwidth 5G connectivity all take a backseat in the relentless quest to capture, manipulate and share visual information more effectively. Undeniably, the camera is the lead feature in every new flagship model introduction. It is clear that the prophecy that has been percolating for several years has taken hold in a very real way: “the camera is the new keyboard”.

All of which makes the pace and type of camera innovations in mobile devices even more critical. Yet the not-so-talked-about secret of the industry is that we have been relying on concepts that are more than 100 years old to drive improvements in each new generation of device. Yes, there have been steady and impressive advancements: multiple cameras and sensors added to phones, incrementally higher pixel counts for clearer still pictures, improvements in low-light performance and HDR, better autofocus features, and computational photography to improve depth of field and colour reproduction.
The industry has not stood still in this regard, and today’s newest models are getting closer to rivalling expensive traditional cameras in terms of quality and performance. But nagging concerns about the limits of smaller sensors and a reliance on software correction techniques still keep smartphone cameras from being in the same class as pro models. Underlying this is the fact that most, if not all, of these enhancements rely on techniques pioneered by Eadweard Muybridge in the 1880s – techniques that trace their roots back as far as Leonardo Da Vinci’s camera obscura. Frame-based photography – a means to capture static images – is inextricably embedded in the camera mindset and continues to serve us well. But it was never meant to address some of the challenges introduced by today’s ultra-mobile use of photography, which demands new levels of capability to capture motion efficiently and operate in challenging lighting conditions.

In frame-based image capture, an entire image (i.e. the light intensity at each pixel) is recorded at a pre-assigned interval, known as the frame rate. While this works well for representing the ‘real world’ when displayed on a screen, recording the entire image at every time increment oversamples all the parts of the image that have not changed. In other words, because these traditional camera techniques capture everything going on in a scene all the time, they create far too much unnecessary data – up to 90% or more in many applications. This taxes computing resources and complicates data analysis.
On top of that, motion blur can occur because of the relatively open-and-shut exposure times of frame-based cameras, which can also cause them to ‘miss’ things between frames. The fundamental nature of frame-based photography as a way to capture images therefore prevents it from operating effectively in certain scenes, and for specific needs, in a way that represents how humans actually see the world. Therein lies a key to how the technology can evolve: the human eye samples changes at up to 1,000 times per second, yet it does not record the primarily static background at such high frequencies.
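To make the oversampling argument concrete, the short Python sketch below compares rough data rates for a frame-based readout and an event-based readout of the same scene. Every number in it – resolution, frame rate, bytes per pixel, bytes per event, the fraction of the scene that is moving and the event rate per active pixel – is an illustrative assumption chosen for this comparison, not a measurement or a vendor specification.

# Back-of-the-envelope comparison of frame-based and event-based data rates.
# Every number used here is an illustrative assumption, not a measurement
# or a vendor specification.

def frame_bytes_per_second(width, height, fps, bytes_per_pixel=1):
    """Frame-based capture: every pixel is read out every frame, changed or not."""
    return width * height * fps * bytes_per_pixel

def event_bytes_per_second(width, height, active_fraction,
                           events_per_active_pixel_per_s, bytes_per_event=8):
    """Event-based capture: only pixels that see a brightness change produce data."""
    active_pixels = width * height * active_fraction
    return active_pixels * events_per_active_pixel_per_s * bytes_per_event

if __name__ == "__main__":
    w, h = 1280, 720
    frame_rate_bytes = frame_bytes_per_second(w, h, fps=30)
    # Assume roughly 5% of the scene is moving and each moving pixel
    # fires about 10 events per second.
    event_rate_bytes = event_bytes_per_second(w, h, active_fraction=0.05,
                                              events_per_active_pixel_per_s=10)
    print(f"frame-based : {frame_rate_bytes / 1e6:6.1f} MB/s")
    print(f"event-based : {event_rate_bytes / 1e6:6.1f} MB/s")
    print(f"redundant data avoided: {1 - event_rate_bytes / frame_rate_bytes:.0%}")

In a mostly static scene the event stream stays small, while the frame stream stays the same size regardless of what is happening in front of the camera – which is exactly the redundancy described above.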
Bio-inspired event cameras emerge as a viable solution

Over the past several years, event cameras that leverage neuromorphic techniques to operate more like the human eye and brain have gained a strong foothold in machine vision applications in industrial automation, robotics, automotive and other areas – all applications where better performance in dynamic scenes, capturing fast-moving subjects and operating in low-light conditions are critical (Read ‘Why you will be seeing much more from event cameras’ in IMVE Feb/Mar 2023). Now, thanks to improvements in size, power and cost, event-based sensors have emerged as a viable option to complement frame-based methods in more consumer-facing products with a different set of constraints and feature needs – not only smartphones, but also wearables such as AR headsets.

A quick primer on event-based vision:
Instead of capturing individual frames at a fixed rate, event cameras detect individual changes in brightness at each pixel, asynchronously and as they occur, so only the dynamic information in a scene is recorded.
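As a rough illustration of that behaviour at the level of a single pixel, here is a minimal Python sketch of the contrast-threshold model commonly used to describe event pixels: an event is emitted whenever the log intensity at a pixel changes by more than a set threshold since the last event. The class names, threshold value and sample intensities are assumptions made for illustration; they do not describe any particular sensor.

# Minimal sketch of the per-pixel behaviour of an event sensor, using the
# common contrast-threshold model. Threshold and sample values are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int
    y: int
    t: float       # timestamp in seconds
    polarity: int  # +1 = brightness increased, -1 = decreased

class EventPixel:
    """One pixel that fires an event whenever log intensity moves by more than c."""
    def __init__(self, x, y, contrast_threshold=0.2):
        self.x, self.y = x, y
        self.c = contrast_threshold
        self.ref_log_i = None   # log intensity at the last emitted event

    def update(self, intensity, t):
        """Feed a new intensity sample; return any events this sample triggers."""
        log_i = math.log(max(intensity, 1e-6))
        if self.ref_log_i is None:
            self.ref_log_i = log_i
            return []
        events = []
        # A change larger than the threshold can trigger several events, each
        # moving the reference level one threshold step closer to the new value.
        while abs(log_i - self.ref_log_i) >= self.c:
            polarity = 1 if log_i > self.ref_log_i else -1
            self.ref_log_i += polarity * self.c
            events.append(Event(self.x, self.y, t, polarity))
        return events

# A static pixel produces no data; a brightening pixel produces 'ON' events.
px = EventPixel(0, 0)
print(px.update(100, t=0.000))   # []  (first sample just sets the reference)
print(px.update(100, t=0.001))   # []  (no change, no data)
print(px.update(180, t=0.002))   # two ON events: log(180/100) is about 0.59, i.e. > 2 x 0.2

The point of the sketch is that a pixel watching an unchanging background produces no data at all, while a pixel that sees a change reports it immediately rather than waiting for the next frame.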