Future autonomous vehicles and industrial cameras may have human-like vision, as a result of a recent advance by scientists from Hong Kong and South Korea. Researchers at The Hong Kong Polytechnic University (PolyU) and Yonsei University in Seoul have developed vision sensors that emulate, and even surpass, the human retina’s ability to adapt to various lighting levels.
The new sensors will greatly improve machine vision systems used for visual analysis and identification tasks, according to Dr CHAI Yang, Associate Professor, Department of Applied Physics, and Assistant Dean (Research), Faculty of Applied Science and Textiles, PolyU, who led the research.
Machine vision systems are cameras and computers that capture and process images for tasks such as facial recognition. They need to be able to “see” objects in a wide range of lighting conditions, which demands intricate circuitry and complex algorithms. Systems like these are rarely efficient enough to process large volumes of visual information in real time – unlike the human brain.
The new bioinspired sensors developed by the research team may offer a solution: the sensors themselves adapt directly to a wide range of light intensities, rather than relying on back-end computation. The human eye adapts to different levels of illumination, from very dark to very bright and vice versa, which allows us to identify objects accurately under a range of lighting conditions. The new sensors aim to mimic this adaptability.
“The human pupil may help adjust the amount of light entering the eye,” Dr Chai explained, “but the main adaptation to brightness is performed by retina cells.” Natural light intensity spans a vast range of 280 dB. The human retina can adapt to environments from starlight to sunlight, a range of about 160 dB. Impressively, the new sensors developed by the team have an effective range of up to 199 dB, compared with only 70 dB for conventional silicon-based sensors.
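To put these decibel figures in perspective, they can be converted to linear brightness ratios. The sketch below assumes the 20·log10 convention commonly used when quoting image-sensor dynamic range; the article itself does not state which convention applies, so the exact ratios are illustrative.

```python
import math

def db_to_ratio(db):
    """Convert a dynamic range in decibels to a linear intensity
    ratio, assuming the 20*log10(I_max/I_min) convention often
    used for image-sensor dynamic range."""
    return 10 ** (db / 20)

for label, db in [("Natural light", 280),
                  ("New bioinspired sensor", 199),
                  ("Human retina", 160),
                  ("Conventional silicon sensor", 70)]:
    print(f"{label}: {db} dB ~ {db_to_ratio(db):.1e}:1")
```

By this reckoning, a 199 dB range covers a brightness ratio of nearly 10^10:1, versus roughly 3,000:1 for a 70 dB conventional sensor.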
The research team achieved this by developing light detectors, known as phototransistors, based on a dual layer of atomically thin molybdenum disulphide, a semiconductor with unique electrical and optical properties. The researchers then introduced “charge trap states” – impurities or imperfections in a solid’s crystalline structure that restrict the movement of charge – into the dual layer.
These trap states enable the storage of light information, the researchers noted. They dynamically modulate the optoelectronic properties of the device at the pixel level. By controlling the movement of electrons, the trap states enabled the researchers to precisely adjust the amount of electricity conducted by the phototransistors. This in turn allowed them to control the device’s photosensitivity, or its ability to detect light.
Each of the new vision sensors is made up of arrays of such phototransistors. They mimic the rod and cone cells of the human eye, which are respectively responsible for detecting dim and bright light. As a result, the sensors can detect objects in differently lit environments as well as switch between, and adapt to, varying levels of brightness—with an even greater range than the human eye.
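The adaptation described above can be caricatured numerically: each pixel's sensitivity gradually shifts toward whatever gain maps its local light level into a usable output range. The sketch below is a toy model of that behaviour under invented assumptions (the function name, adaptation rule, and rate are all illustrative), not the actual trap-state device physics.

```python
import numpy as np

def adapt_and_sense(scene, gain, rate=0.5, target=0.5):
    """Toy model of in-sensor brightness adaptation: each pixel's
    gain drifts toward the value that maps its local light level
    to a mid-range output, loosely analogous to trap states
    modulating photosensitivity at the pixel level."""
    ideal_gain = target / np.maximum(scene, 1e-9)
    gain = gain + rate * (ideal_gain - gain)       # gradual adaptation
    response = np.clip(gain * scene, 0.0, 1.0)     # sensor output per pixel
    return response, gain

# A scene with a dim region (0.001) and a bright region (100.0):
scene = np.array([0.001, 100.0])
gain = np.ones_like(scene)       # unadapted: the bright pixel saturates
for _ in range(20):              # repeated exposure drives adaptation
    response, gain = adapt_and_sense(scene, gain)
print(response)
```

After repeated exposures, both pixels settle near mid-range output despite a 10^5 gap in incident intensity, which is the essence of what the rod-and-cone-like phototransistor arrays accomplish in hardware.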
Dr Chai also noted that the sensors reduce hardware complexity and greatly increase image contrast under different lighting conditions, resulting in highly efficient image recognition.
These novel bioinspired sensors could usher in the next generation of artificial-vision systems used in autonomous vehicles and manufacturing, as well as find exciting new applications in edge computing and the Internet of Things.