Biomimetic Coherent LiDAR Debuts

source:GMW.cn

Time:2026-01-22


Guangming Daily, Beijing, January 4 (Reporter Jin Haotian) Imagine a machine's "eyes" that can not only scan broadly like the human eye, but also lock onto key targets in an instant for precise "foveation". This may no longer be science fiction. A research team led by Professor Wang Xingjun and Research Fellow Shu Haowen of Peking University, together with teams led by Professor Wang Cheng of the City University of Hong Kong and Professor Zhou Linjie of Shanghai Jiao Tong University, has developed an integrated biomimetic LiDAR inspired by the human visual mechanism. For the first time, they have achieved chip-scale 4D imaging with adaptive "foveation" on a coherent-detection Frequency Modulated Continuous Wave (FMCW) LiDAR, opening a door to a next generation of more intelligent, efficient and flexible machine vision. The work was recently published online in the international journal Nature Communications.

 

"With the rapid development of autonomous driving, embodied intelligence, low-altitude intelligent systems and other fields, machine vision urgently needs to upgrade from 'being able to see' to 'seeing clearly, seeing fast and seeing comprehensively'. As a core sensor, LiDAR has hit a bottleneck in performance improvement. Traditional approaches rely on 'stacking' more detection channels in the spatial dimension to improve angular resolution, which drives exponential growth in system complexity, power consumption and cost, approaching engineering limits," Wang Xingjun told the reporter.

 

In response, the research team turned to the most sophisticated visual system in nature: the human eye. The human eye does not maintain the highest resolution across the entire field of view; instead, it achieves superior visual perception with limited energy through an efficient collaborative mechanism of "peripheral vision + foveal focus". Inspired by this, the team asked: can the high sensitivity of coherent LiDAR be leveraged to endow the system with a similar "foveation" capability, dynamically concentrating precious detection resources on the most critical areas?

 

Based on this idea, the research team proposed a novel "micro-parallel" LiDAR architecture and developed a prototype system. Its core innovation: whereas clarity was traditionally improved only by "increasing the number of channels", this architecture uses flexible scheduling of wavelengths/frequencies to "allocate attention", letting the FMCW LiDAR concentrate its effort on the most critical areas. Experimental results show that the system achieves an angular resolution of 0.012 degrees in a local region of interest (equivalent to distinguishing details the size of a coin's diameter at a distance of 100 meters), striking a balance between wide field-of-view coverage and high-fidelity imaging.
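As a sanity check on the quoted figure (a back-of-the-envelope small-angle calculation, not from the paper), 0.012 degrees at 100 meters corresponds to a transverse footprint of roughly 2 centimeters, about a coin's diameter:

```python
import math

def transverse_resolution(angle_deg: float, range_m: float) -> float:
    """Smallest transverse detail resolvable at a given range for a
    given angular resolution, using small-angle geometry (arc length)."""
    return range_m * math.radians(angle_deg)

# 0.012 degrees at 100 m -> about 0.021 m (~2 cm, roughly a coin's diameter)
print(round(transverse_resolution(0.012, 100.0), 3))  # → 0.021
```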

"More importantly, benefiting from the high sensitivity and dynamic range of coherent detection, this research has demonstrated a real-time parallel 4D imaging system based on an integrated optical frequency comb for the first time," Wang Xingjun said. The system can not only acquire high-precision 3D geometric information, but also directly resolve the instantaneous velocity of targets via the Doppler effect, realizing synchronous acquisition of spatiotemporal 4D data. This direct velocity perception is inaccessible to traditional LiDAR.
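How FMCW detection decouples range and velocity can be illustrated with the standard triangular-chirp relations; the chirp slope and wavelength below are illustrative assumptions, not parameters from the paper:

```python
# Textbook FMCW triangular-chirp range/velocity recovery.
# Parameters are illustrative assumptions, not from the paper.
C = 299_792_458.0        # speed of light, m/s
SLOPE = 1e12             # chirp slope S, Hz/s (assumed)
WAVELENGTH = 1.55e-6     # laser wavelength, m (typical telecom band)

def range_and_velocity(f_up: float, f_down: float):
    """Recover range R and radial velocity v from the beat frequencies
    of the up- and down-chirp segments:
        f_up   = 2*S*R/c - 2*v/lambda
        f_down = 2*S*R/c + 2*v/lambda
    """
    f_range = 0.5 * (f_up + f_down)    # Doppler term cancels
    f_doppler = 0.5 * (f_down - f_up)  # range term cancels
    R = f_range * C / (2.0 * SLOPE)
    v = f_doppler * WAVELENGTH / 2.0
    return R, v

# A target at 50 m closing at 10 m/s:
f_r = 2 * SLOPE * 50.0 / C        # range beat frequency
f_d = 2 * 10.0 / WAVELENGTH       # Doppler shift
R, v = range_and_velocity(f_r - f_d, f_r + f_d)
print(round(R, 6), round(v, 6))   # → 50.0 10.0
```

Averaging and differencing the two beat frequencies is what lets a single coherent measurement yield both geometry and velocity, which direct-detection time-of-flight systems cannot do.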

 

The research team further demonstrated the LiDAR's ability to operate cooperatively with visible-light cameras. Through multi-modal fusion, the system colorizes the LiDAR point clouds, compensating for missing appearance information such as color, generating richer and more complete 4D scenes and greatly enhancing a machine's ability to understand complex environments.
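Camera-LiDAR fusion of this kind typically projects each 3D point into the camera image with a pinhole model and samples the pixel color there. A minimal sketch with assumed intrinsics (the paper's calibration details are not given here; `K`, `colorize`, and all parameters are hypothetical):

```python
import numpy as np

# Assumed pinhole intrinsics: focal lengths (fx, fy) and principal
# point (cx, cy) in pixels. Real systems obtain K via calibration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def colorize(points_xyz: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Attach an RGB color to each LiDAR point (camera frame, z forward)
    by projecting it into the image. Points behind the camera or
    projecting outside the image get black."""
    h, w, _ = image.shape
    colors = np.zeros((len(points_xyz), 3), dtype=image.dtype)
    z = points_xyz[:, 2]
    valid = z > 0
    z_safe = np.where(valid, z, 1.0)          # avoid division by zero
    uvw = (K @ points_xyz.T).T                # homogeneous pixel coords
    u = (uvw[:, 0] / z_safe).astype(int)
    v = (uvw[:, 1] / z_safe).astype(int)
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors[inside] = image[v[inside], u[inside]]
    return colors
```

A point at (0, 0, 10) in the camera frame lands at the principal point (320, 240) and inherits that pixel's color; points outside the frustum stay uncolored.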

Wang Xingjun stated that this integrated biomimetic architecture offers excellent scalability and chip-scale integration potential, providing a brand-new system-level solution for miniaturized, low-power, high-performance perception modules. It not only gives more powerful "visual" support to cutting-edge fields such as next-generation intelligent driving and intelligent robots, but also lays an important technical foundation for future integrated air-ground-space perception networks.

Guangming Daily (January 05, 2026, Page 08)