Technology
TFT-driven solid-state reflective display proves its commercial viability with GEN2.5 fabrication
Oxford-based Bodle Technologies has developed the world’s first TFT-driven solid-state reflective display (SRD), fabricated on a GEN2.5 line at Taiwan’s Industrial Technology Research Institute (ITRI). This is an ultra-low-power, colour, reflective display based on phase-change materials, which operate over a wide temperature range. Led by Bodle’s VP for Manufacturing,
TS Jen, the team in Taiwan transferred the designs and research directly from the Oxford-based team to produce the test displays. “This is an important milestone for
Bodle, giving us the confidence that our technology is manufacturable using
standard processes. We are delighted to be doing this in an ecosystem where many new display technologies have been launched,” said Jen. This step provides further impetus to the
company’s expansion and scale-up plans within Taiwan as it looks to commercialise the new technology within the coming months. “To see this technology being scaled
from a few pixels on R&D samples made on a lab coater to work on displays with nearly a quarter-million pixels made on a production coater is fantastic. Many of the materials and production processes are based on those used for rewritable optical discs, an area where I have worked for
many years. We have been able to apply these processes to accelerate the move to a more manufacturable process for the SRD,” said Andrew Pauza, VP of Materials and Engineering at Bodle Technologies. SRD applications include second screens
for mobile devices, electronic shelf displays and personalised jewellery and watches, among many others.
New deep-learning method helps object detection in low-light conditions
A team from Socionext and Osaka University has developed a new method of deep learning, which enables image recognition and object detection in extremely low-light conditions. By merging multiple models, the new method enables the detection of objects without generating huge datasets, previously thought to be essential. Socionext plans to incorporate this new
method into the company’s image signal processors to develop new SoCs, as well as new camera systems around those SoCs, for automotive, security, industrial and other applications that require high-performance image recognition. A major challenge throughout the
evolution of computer vision technology has been to improve image-recognition performance under poor lighting conditions, for applications such as in-vehicle cameras and surveillance systems. Previously, Chen et al. developed a deep-learning method, called “Learning to See in the Dark”, that uses RAW image data from sensors.
This method, however, requires a dataset in excess of 200,000 images and over 1.5 million annotations for end-to-end learning. Preparing large datasets with RAW images is both too costly and too time-consuming. The joint research team, led by Professor
Hajime Nagahara, proposed a new domain adaptation method, which builds the required model from existing datasets using machine-learning techniques such as transfer learning and knowledge distillation. The new method resolves the challenge through the following steps: (1) building an inference model with existing datasets; (2) extracting knowledge from it; (3) merging the models with glue layers; and (4) building generative models using knowledge distillation. This enables learning a desired image
recognition model using existing datasets. Using this domain adaptation method,
the team has built an object detection model “YOLO in the Dark” using RAW images taken in extreme dark conditions.
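The four-step merge described above can be sketched in a few lines of NumPy. Everything below is illustrative, not Socionext’s actual architecture: the layer sizes, the glue layer, and the stand-in “enhancer” and “detector” networks are assumptions. A teacher pipeline chains the two pre-trained stand-ins through a glue layer (steps 1–3), and a compact student is then scored against it with a temperature-softened knowledge-distillation loss (step 4).

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(n_in, n_out):
    # a dense layer's parameters (random stand-in for pre-trained weights)
    return {"W": rng.standard_normal((n_in, n_out)) * 0.1, "b": np.zeros(n_out)}

def forward(layer, x):
    # dense layer followed by ReLU
    return np.maximum(x @ layer["W"] + layer["b"], 0.0)

# Step 1: two existing models (hypothetical stand-ins):
#   enhancer: RAW features -> enhanced-image features ("See in the Dark" role)
#   detector: image features -> detection logits (YOLO role)
enhancer = linear(8, 16)
detector = linear(16, 4)

# Step 3: a glue layer joins the enhancer's output space to the detector's input
glue = linear(16, 16)

def merged_teacher(x_raw):
    # Steps 2-3: chain the extracted knowledge through the glue layer
    return forward(detector, forward(glue, forward(enhancer, x_raw)))

# Step 4: a compact student learns the merged behaviour end-to-end from RAW input
student = [linear(8, 12), linear(12, 4)]

def student_forward(x_raw):
    return forward(student[1], forward(student[0], x_raw))

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened teacher and student outputs
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9)), axis=-1)))

x = rng.standard_normal((5, 8))          # a batch of RAW-domain feature vectors
loss = distillation_loss(student_forward(x), merged_teacher(x))
```

Minimising this loss over real RAW inputs would pull the small student towards the merged teacher’s behaviour without any newly annotated RAW dataset, which is the core idea behind training the student directly on RAW images.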
Teaching the object detection model with RAW images can thus be achieved with an existing dataset, without generating additional datasets. In contrast to the case where the object cannot be detected by brightness enhancement of images with existing YOLO models, the proposed new model makes it possible to recognise RAW images and detect objects in them. The amount of computing resources needed by the new model is about half that of the baseline model, which uses a combination of previous models. This “direct recognition of RAW images”
method is expected to be used for object detection in extremely dark conditions, along with many other applications.
www.electronicsworld.co.uk September/October 2020 05