I specifically bought a robot vacuum with fewer sensors (no camera) for this reason. Why does it need a camera if bump sensors and Lidar already work? It's asking for trouble.
Lidar doesn't work for some things: my Roborock S7 has trouble if there's a USB cable on the ground or a lamp's power cord isn't tucked all the way up against the wall. Supposedly the camera models are better at avoiding certain obstacles, which is good if you have a pet or housemate who sometimes poops inside and you don't want that getting mopped all over the floor.
That's a compelling use case for me, but considering how many of these vacuums have had privacy issues, I stuck with Lidar. (People cast aspersions on the Chinese companies, but US manufacturers have track records that don't inspire confidence either: just ask the people whose intimate images, captured by Roomba development units, got leaked online.)
"good if you have a pet or housemate who sometimes poops inside"
I have a pet (cat) that unfortunately poops just outside her box most of the time, despite a lot of different ideas and approaches with the help of our vet. She's old and has lower back pain issues. It ends up on a litter mat or the wooden floor, so it's not that hard to clean up.
If I had a housemate who pooped inside, not in the toilet, they would need to be even less able to manage their shit, so to speak, and more loved than our cat, or they would be out of here very fast.
In addition to what others have said, I believe some use an upward facing camera to help with mapping.
Ceilings tend to be less cluttered than floors so it is easier to figure out the shapes of rooms and their relationships by looking at the ceiling than by looking at the floor.
Some manufacturers use cameras instead of LiDAR (iRobot, for example).
Others use both: LiDAR for walls, cameras for object identification below the LiDAR plane, directly in front of the robot. That's how the fancy ones avoid socks or cables or other small things.
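Roughly, that fusion amounts to taking the union of what each sensor flags. A toy sketch in Python (grid sizes and names are made up for illustration, not any vendor's actual pipeline):

```python
import numpy as np

# Toy costmap fusion: 2D lidar sees walls at its scan height, the
# camera flags small obstacles *below* that plane, and the planner
# avoids the union of both.  All values here are invented.
GRID = (10, 10)

lidar_walls = np.zeros(GRID, dtype=bool)
lidar_walls[0, :] = True          # a wall along one edge, seen by lidar

camera_obstacles = np.zeros(GRID, dtype=bool)
camera_obstacles[5, 5] = True     # e.g. a sock, invisible to 2D lidar

keep_out = lidar_walls | camera_obstacles   # combined keep-out map
print(keep_out[0, 3], keep_out[5, 5], keep_out[9, 9])  # True True False
```

The point of the OR is that either sensor alone is enough to veto a cell; the camera never has to re-detect walls, and the lidar never has to see socks.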
It means the cameras can be fooled by things LIDAR cannot be, such as smoke, glare, reflections, and optical illusions/mirages.
If the algorithms are fed incorrect data, they will produce incorrect results, such as driving full-speed into a parked white semi-truck.
Then that means the vision processing isn't far enough along yet to be viable for a car. There is no fundamental reason it couldn't work, though. With either stereoscopic vision or more temporal processing you could detect when things are only painted on a flat surface; with both, there is really no excuse to still fail except limited processing power.
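The stereo half of that argument is just geometry: with a calibrated pair, depth is Z = f·B/d, so an "obstacle" that is merely painted on a wall has the same disparity as the wall around it, while a real obstacle sits closer and breaks the plane. A toy sketch (focal length, baseline, and the disparity values are all invented for illustration):

```python
import numpy as np

# Stereo depth: Z = f * B / disparity.  All numbers here are assumed.
f_px = 700.0       # focal length in pixels (assumption)
baseline_m = 0.1   # camera separation in metres (assumption)

def depth_from_disparity(disparity_px):
    return f_px * baseline_m / disparity_px

wall = np.full((4, 4), 14.0)   # flat wall ~5 m away, uniform disparity
painted = wall.copy()          # obstacle *painted* on the wall: same plane
real = wall.copy()
real[1:3, 1:3] = 35.0          # real obstacle ~2 m away: larger disparity

def sticks_out(disp, tol_px=1.0):
    # If disparity in some region deviates from the surrounding plane,
    # something physically protrudes; a painted decoy stays coplanar.
    return (disp.max() - disp.min()) > tol_px

print(sticks_out(painted))  # False: decoy is coplanar with the wall
print(sticks_out(real))     # True: real object breaks the plane
```

Temporal processing gives you the same discriminator from one camera: a painted shape moves exactly like the wall under parallax, while a real object does not.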
I don't think Tesla ever used LIDAR and the article confirms they don't think they will need to. I believe they removed ultrasonic sensors though, maybe that's what you're thinking of.
This sounds like the Roborock S series. I went with lidar over camera because it can run in any lighting condition and I don’t have a need for poop detection.