Hesai’s Color Lidar Breakthrough Transforms Autonomous Driving Perception

By Craig Nash
AI-powered tech writer covering artificial intelligence, chips, and computing.

Hesai Group, the Shanghai-based world leader in vehicle lidar manufacturing, just released a color lidar breakthrough that fundamentally reshapes how self-driving systems perceive the road. On April 18, 2026, the company unveiled the Picasso chip at its tech open day—the world’s first full-color lidar chip achieving native hardware-level fusion of color perception and distance measurement. This is not a camera bolted onto a lidar unit. It is a single chip that captures RGB color and 3D coordinates simultaneously, eliminating the computational guessing that has plagued autonomous driving for years.

Key Takeaways

  • Picasso chip delivers native 6D perception: X, Y, Z coordinates plus reflectivity, velocity, and color in a single sensor.
  • Photon detection efficiency exceeds 40%, enabling farther and clearer vision under identical laser power compared to traditional lidar.
  • ETX series sensors support up to 4,320 laser channels and 4K ultra-high-definition perception without post-processing camera-lidar stitching.
  • Directly identifies traffic lights, lane markings, and construction signs without inference, addressing a critical autonomous driving pain point.
  • Mass production and deliveries to automakers begin in the second half of 2026.

Why Color Lidar Matters for Self-Driving Cars

Traditional lidar sees the world in black and white—it measures distance and shape but cannot distinguish a red traffic light from a green one without a separate camera making an educated guess. That guessing introduces latency, computational overhead, and failure modes in edge cases like heavy rain or nighttime conditions. The color lidar breakthrough changes this equation entirely. Hesai’s Picasso chip uses SPAD (Single Photon Avalanche Diode) image and depth sensors to capture color information at the photon level, fusing it with distance data before the signal even leaves the sensor. Deutsche Bank analysts noted in a research report that this technology eliminates the need for complex stitching or inference, meaning autonomous driving systems no longer need to guess when identifying critical colored objects like traffic lights, lane lines, or construction signs, and that the richer sensor data would significantly enhance the spatial intelligence of AI world models.

What makes this a genuine breakthrough rather than incremental progress is the hardware-level integration. Previous attempts at color lidar have relied on external camera fusion—Innoviz Technologies, an Israeli competitor, released the InnovizThree in January 2026 with an integrated RGB camera for sensor-fusion colored 3D perception. That approach reduces integration complexity for automakers compared to bolting separate sensors together, but it still requires post-processing to align camera and lidar data. Hesai’s native fusion happens inside the chip itself, generating color 3D point clouds without any stitching step. That architectural difference is why Hesai CEO David Li Yifan calls it a zero-to-one breakthrough, not market hype.
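The stitching step that native fusion removes can be sketched as the standard camera–lidar colorization pipeline: project each lidar point through the camera’s intrinsic matrix and sample the pixel color at that location. The intrinsics and test data below are illustrative values, not Hesai’s or Innoviz’s actual calibration; this is a minimal sketch of the post-processing a fused-in-hardware sensor avoids.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths fx, fy; principal point cx, cy).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def colorize(points_xyz, image):
    """Assign each lidar point the RGB of the pixel it projects onto.

    This per-frame projection and lookup (plus the calibration and time
    synchronization it depends on) is the 'stitching' step that a native
    color lidar performs inside the sensor instead.
    """
    h, w, _ = image.shape
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy (Z points forward).
    uv = (K @ points_xyz.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return image[v, u]  # (N, 3) RGB per point

points = np.array([[0.0, 0.0, 10.0],   # straight ahead, 10 m
                   [1.0, 0.5, 20.0]])  # up and right, 20 m
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
frame[:, :, 0] = 255  # all-red test image
colors = colorize(points, frame)  # each point samples (255, 0, 0)
```

Every misalignment source in this pipeline—calibration drift, rolling-shutter skew, clock offset between sensors—is a failure mode that per-photon fusion sidesteps by construction.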

The Technical Architecture Behind 6D Perception

The Picasso chip supports up to 4,320 laser channels and delivers 4K ultra-high-definition perception, a massive leap from earlier lidar systems that operated at lower resolutions. The photon detection efficiency exceeds 40%, meaning the sensor captures more photons per pulse and achieves greater range and clarity under the same laser power. This matters in real-world driving: a sensor that sees farther and clearer in rain, fog, or direct sunlight is safer than one that degrades in adverse conditions.
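The range benefit of higher photon detection efficiency (PDE) can be seen with a back-of-envelope link budget: under a simplified model where detected photons per pulse scale as PDE divided by range squared, maximum range at equal laser power scales with the square root of PDE. The 15% baseline PDE below is an illustrative assumption for a conventional SPAD sensor, not a published spec.

```python
# Simplified lidar link budget: detected photons per pulse ~ PDE / R^2.
# At equal laser power and an equal detection threshold, max range ~ sqrt(PDE).
baseline_pde = 0.15  # assumed conventional SPAD PDE (illustrative)
picasso_pde = 0.40   # Hesai's claimed ">40%" efficiency

range_gain = (picasso_pde / baseline_pde) ** 0.5
print(f"Range gain at equal laser power: {range_gain:.2f}x")  # ~1.63x
```

The real gain depends on atmospheric attenuation, target reflectivity, and noise floor, so treat this as an order-of-magnitude intuition rather than a performance prediction.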

The ETX series sensors equipped with the Picasso chip come in flexible configurations—1,080, 2,160, or 4,320 channels—allowing automakers to tailor perception depth to their specific autonomous driving stack. The 6D perception framework adds velocity and reflectivity to the traditional X, Y, Z coordinates, enabling the system to track moving objects and distinguish between different surface materials (metal, asphalt, paint) in a single pass. This is functionally similar to what a human driver does intuitively—seeing a traffic light, recognizing its color, and knowing its state without conscious reasoning. For a machine, that intuition requires raw perceptual data, and Hesai’s architecture now provides it.
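A 6D point cloud can be pictured as one record per return carrying all six channels at once. The field names and values below are an illustrative sketch, not Hesai’s actual data format; the point is that color classification becomes a direct field lookup rather than a cross-sensor inference.

```python
from dataclasses import dataclass

@dataclass
class Point6D:
    """One return in a 6D point cloud: position plus reflectivity,
    radial velocity, and color, all from the same sensor sample.
    Illustrative schema, not Hesai's actual output format."""
    x: float             # metres, sensor frame
    y: float
    z: float
    reflectivity: float  # 0-1 return strength; distinguishes metal, asphalt, paint
    velocity: float      # m/s radial velocity (toward/away from sensor)
    rgb: tuple           # (r, g, b), 0-255, captured natively

# A red traffic-light return: stationary, strong lens reflection.
p = Point6D(x=2.1, y=4.8, z=35.0, reflectivity=0.8, velocity=0.0, rgb=(255, 0, 0))

# Color state is read directly from the point—no camera lookup or fusion step.
is_red = p.rgb[0] > 200 and p.rgb[1] < 80 and p.rgb[2] < 80
```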

Real-World Implications for Autonomous Driving Deployment

The timing of this release matters. Global automakers are racing to achieve Level 4 and Level 5 autonomy—hands-off driving in defined geographies and eventually all conditions. Perception is the bottleneck. A self-driving car that cannot reliably identify a red traffic light or a yellow construction sign cannot be trusted on public roads, no matter how sophisticated its planning algorithms are. Hesai’s color lidar breakthrough removes that bottleneck by making the sensor itself smarter rather than pushing the problem to software.

Hesai has not yet announced which automakers will adopt the ETX series, but the company expects mass production and deliveries to begin in the second half of 2026. That timeline suggests the technology will reach production vehicles within 18-24 months, potentially influencing the next generation of premium electric vehicles from Chinese and international OEMs. The advantage is especially pronounced in markets like China, where traffic patterns, signage, and road conditions differ from Western highways—a sensor that learns color and context simultaneously can adapt faster to regional driving norms.

How This Compares to Other Lidar Approaches

The lidar market has historically split into two camps: traditional black-and-white rangefinders and camera-plus-lidar fusion systems. Hesai describes traditional lidar as high-precision black-and-white cameras, forcing systems to guess road conditions when color matters. Innoviz’s InnovizThree takes the fusion approach, pairing a compact RGB camera module with lidar in a single windshield-mounted package, reducing integration complexity and cost compared to separate sensors. But Innoviz still relies on external synchronization and post-processing, whereas Hesai’s native fusion happens inside the chip.

Columbia University researchers published work in 2025 on a “rainbow chip”—an accidentally discovered on-chip frequency comb that generates multi-color laser light to improve lidar power and coherence. That innovation improves laser efficiency but does not address full-color perception fusion the way Hesai’s Picasso does. Research communities are nonetheless converging on the same insight: color matters for autonomous driving, and the sooner it is integrated into the sensor itself, the simpler and more reliable the system becomes.

What Remains Unproven

The color lidar breakthrough is real, but the hype around hands-off self-driving as an immediate reality warrants skepticism. Perception is one layer of an autonomous driving stack. A car also needs planning, prediction, localization, and fail-safe redundancy. No major automaker has yet committed to adopting the ETX series, and Hesai has not disclosed pricing or detailed performance benchmarks against competing sensors in independent testing. The claim of photon detection efficiency above 40% has not been independently validated; for now it is a manufacturer specification rather than a verified fact. The technology is genuinely innovative, but innovation in sensors does not automatically translate to Level 4 autonomy on public roads.

When Will Color Lidar Reach Production Vehicles?

Hesai expects mass production and deliveries to automakers in the second half of 2026, meaning the earliest production vehicles equipped with ETX series sensors would likely appear in 2027 or 2028. That timeline is aggressive but plausible for Chinese automakers and premium EV brands that prioritize autonomous driving features. Western OEMs typically move slower through validation and integration cycles, so North American and European adoption may lag by a year or more.

Does the Picasso Chip Eliminate the Need for Cameras Entirely?

No. Hesai’s color lidar breakthrough significantly reduces the computational burden of camera fusion, but most autonomous driving systems will still use multiple sensors for redundancy and to cover lidar’s blind spots. A camera covering regions outside the lidar’s field of view (like directly overhead) remains valuable. The Picasso chip makes cameras optional for color perception rather than mandatory.

How Does Color Lidar Help in Bad Weather?

Lidar generally performs better than cameras in rain, fog, and snow because it emits its own light rather than relying on ambient illumination. By fusing color at the sensor level, Hesai’s system maintains that weather resistance while adding color discrimination. A red traffic light remains red whether it is sunny or raining, and the Picasso chip captures that color data directly rather than inferring it from a camera image that may be obscured by moisture.

Hesai’s color lidar breakthrough represents a genuine shift in autonomous driving perception architecture. By integrating color sensing at the hardware level rather than bolting cameras onto rangefinders, the company has addressed a fundamental problem that has plagued the industry for years. Whether this innovation actually reaches mass production and drives adoption depends on automaker validation and real-world performance in diverse driving conditions. The technology has been demonstrated; market adoption is still ahead.

This article was written with AI assistance and editorially reviewed.

Source: TechRadar
