It means the cameras can be fooled by things LIDAR cannot be, such as smoke, glare, reflections, and optical illusions like mirages.
If the algorithms are fed incorrect data, they will produce incorrect results - such as driving full-speed into a parked white semi-truck.
Then that means the vision processing isn't far enough along yet to be viable for a car. There's no fundamental reason it couldn't work, though. With either stereoscopic vision or more temporal processing you could obviously detect that something is only painted on a flat wall; with both, there's really no excuse to still fail except limited processing power.
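To make the stereo point concrete, here's a toy 1-D sketch (nothing like a production pipeline; the block matcher, image sizes, and disparity values are all made up for illustration): a painted wall produces the same disparity everywhere, i.e. one flat depth, while a real 3-D scene produces disparities that jump between objects at different distances.

```python
import numpy as np

def block_match(left, right, block=7, max_disp=10):
    """Toy 1-D stereo matcher: for each pixel, slide a block from the
    left image across the right image and keep the shift (disparity)
    with the lowest sum-of-squared-differences."""
    half = block // 2
    disparities = []
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best_d, best_err = 0, float("inf")
        for d in range(max_disp + 1):
            if x - half - d < 0:
                break
            candidate = right[x - half - d:x + half + 1 - d]
            err = float(np.sum((patch - candidate) ** 2))
            if err < best_err:
                best_d, best_err = d, err
        disparities.append(best_d)
    return np.array(disparities)

rng = np.random.default_rng(0)
n = 200
left = rng.normal(size=n)  # random texture so matches are unambiguous

# A painted wall: the right camera sees the same texture shifted by one
# constant disparity, i.e. every pixel sits at the same depth.
wall_right = np.roll(left, -5)

# A real scene: a near object (disparity 8) next to a far background
# (disparity 2), so recovered depth varies across the image.
scene_right = np.array([left[(i + (8 if i < 100 else 2)) % n]
                        for i in range(n)])

wall_disp = block_match(left, wall_right)
scene_disp = block_match(left, scene_right)

# Constant disparity everywhere -> one flat surface (it's a wall).
# Disparity that jumps between regions -> real 3-D structure.
```

Real systems do the same search in 2-D over rectified image pairs (e.g. OpenCV's StereoBM), but the flatness check is the same idea: if the recovered depth is constant across the "obstacle", it's a painted surface, not a scene.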
I don't think Tesla ever used LIDAR, and the article confirms they don't think they'll need it. I believe they removed the ultrasonic sensors though, maybe that's what you're thinking of.
https://bdtechtalks.com/2021/06/28/tesla-computer-vision-aut...