
Rivian has made a bold statement, suggesting Tesla is mistaken regarding the development of fully autonomous vehicles solely using cameras.
At the automaker’s AI and Autonomy Day in Palo Alto, California on Thursday, Rivian showcased its custom-built proprietary silicon and outlined plans for its forthcoming hands-free driver-assist technology, an AI-based messaging assistant, and the addition of LiDAR.
The highlight of the reveal is Rivian’s newly designed silicon chip, named Rivian Autonomy Processor (RAP1), which will take the place of the current Nvidia chip. This 5nm chip merges processing and memory into a single unit and connects with the automaker’s Gen 3 Autonomy Computer (ACM3), an evolution of the ACM2 introduced in 2025 on the second-gen R1. While RAP1 is crucial for the company’s self-driving goals, the introduction of LiDAR hardware indicates that Rivian believes the camera-only approach adopted by some competitors is fundamentally flawed.
LiDAR is expected to debut on the new R2 model at the end of 2026. The automaker has not confirmed when, or if, LiDAR will be incorporated into the R1, but it’s difficult to envision that the pricier flagship models won’t adopt the tech shortly after it launches on the R2.
Rivian’s just-revealed Universal Hands-Free (UHF) driver-assist system, set to roll out this month for second-gen R1 models through a complimentary over-the-air software update (version 2025.46), will progressively evolve with LiDAR integration. The LiDAR hardware will provide three-dimensional spatial mapping, redundant sensing, and enhanced real-time object detection in edge-case driving situations that the current sensor suite might miss. That suite comprises an in-house RTK GNSS (real-time kinematic Global Navigation Satellite System) that refines vehicle positioning to within 20 cm of its latitude and longitude, 10 external cameras, 12 ultrasonic sensors, five radar units, and a high-accuracy GPS receiver.
A Rivian representative exclusively told The Drive that LiDAR is essential and a camera-only setup is insufficient (Tesla’s self-styled Full Self-Driving technology relies solely on cameras, having removed front-facing radar units from its models) because, “Cameras are passive light sensors, meaning they do not perform optimally in low light or fog compared to active light sensors like LiDAR.”
“LiDAR can virtually double visibility at night. Cameras excel at capturing their surroundings until they fail to do so,” the representative stated to The Drive.
Sam Abuelsamid, vice president of market research at Telemetry, told The Drive, “Utilizing diverse sensing modalities is vital for a safe and reliable ADS solution. Each sensor type has its own pros and cons and they complement one another. Cameras are superb for object recognition but fare poorly in low-light conditions or with direct sunlight hitting them. Unless designed for stereoscopic vision, they also struggle with distance estimation to objects.”
“LiDAR offers a resolution that lies between cameras and radar, operates in all lighting conditions, and with contemporary software can even handle rain and snow. However, until recent years, it has been significantly pricier. Nonetheless, in the past few years, the cost of solid-state LiDAR has drastically reduced, with some of the latest models from companies like Hesai priced below $200,” Abuelsamid noted.
Abuelsamid added, “When camera-only proponents argue that humans drive using just two eyes, they are mistaken on multiple levels. Humans rely on multiple senses while driving: stereoscopic vision for depth awareness (which Tesla lacks), as well as hearing and touch, sensing feedback through their hands and back. Human eyes possess a much higher dynamic range than cameras, making them far more effective in poor lighting, and the human brain processes information differently, with superior ability to filter out unnecessary information and categorize what it observes.”
Rivian informed The Drive that upon launch, vehicles with LiDAR will feature new augmented reality visualizations in the driver’s display, alongside enhanced detection of surroundings, especially at greater distances and in challenging environments. Rich spatial data from these vehicles will also contribute to better onboard performance for all second-gen R1 and subsequent vehicles, even those without LiDAR. In the long term, cars equipped with LiDAR will unlock unique autonomy functionalities enabled by the additional sensor and onboard processing power.
Both the LiDAR and ACM3 paired with RAP1 are currently undergoing validation, and Rivian anticipates deploying these advanced hardware components in R2 at the end of 2026.
Have a tip about future technology? Reach out to us at [email protected]