I would recommend watching this video, where the same YouTuber is using FSD which takes an exit and needs intervention to avoid running directly into a divider.

https://youtu.be/LbCGAN6Pk_c?t=2026

I will never understand how Tesla can get away with releasing such a broken product on the market and charging for it.

You seem to be stating this as some kind of gotcha.

1. Since that video, FSD 12.5.6.1 has been released, and v13 (which is rumored to be what the Cybercab ran) is just around the corner. It is completely disingenuous to point to a seven-month-old video (on FSD 12.3!) and insinuate that it is representative of the current experience.

2. In FSD, interventions are in your hands. With Waymo in the same situation, your fate lies solely with the remote operator watching your vehicle and how quickly they can react. FSD is obviously not perfect, but the rate of interventions plummets with every new major release.


I have videos of safety-critical interventions in v12.5. Here's one of it attempting to drive straight through a roundabout at 50 miles per hour:

https://www.youtube.com/watch?v=b1XagBTmpgw

Your understanding of remote support is entirely wrong. At no point can a remote operator "drive" a Waymo. They confirm or change plans when the car gets stuck -- that's it.


If you look at the rate of progress over time, you see a monotonically improving system with no apparent halt in improvement. Likewise, competitors are making progress in different areas, each converging on what appears to be an inevitable conclusion: full autonomy.

No one knows when that will happen. But it feels pretty certain it's happening.

I’ve been using FSD for 5 years now. In that time it’s gone from glorified cruise control to something I generally don’t need to intervene with on city streets. Will it improve that fast over the next five years? I doubt it. But it doesn’t have to, because the residual problems are much fewer, if harder. At this point, especially given the rate of AI improvement overall, I am confident that in five years those problems will largely if not entirely disappear.

Do I take a nap in the back seat? Of course not. Should it be marketed in its condition? I don’t know. But I do know the joint probability of me making a mistake as the attentive operator and it making a mistake while in control is significantly lower than either alone. The fact it makes mistakes at times is obviously concerning to me as a driver, and the fact I also make mistakes actually doesn’t concern me nearly as much as it should. However - I catch its mistakes, and it doesn’t make mine. Why is it rational to be more upset about the machine making a mistake than a human? It’s not - but humans are taught logic and are never rational.
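
To make that joint-probability claim concrete, here's a minimal back-of-envelope sketch. All the rates are made up purely for illustration, and it assumes the driver's lapses are independent of the car's mistakes, which is the weak point of the argument: attention tends to fade when the automation is usually right.

    # Hypothetical per-maneuver error rates, invented purely for illustration.
    p_fsd_errs = 0.010      # FSD makes a mistake on a given maneuver
    p_human_errs = 0.005    # an unassisted human makes a mistake
    p_human_misses = 0.050  # a supervising human fails to catch an FSD mistake

    # Under the (generous) independence assumption, a supervised failure
    # requires both the car and the supervisor to fail on the same event:
    p_supervised = p_fsd_errs * p_human_misses

    print(f"FSD alone:      {p_fsd_errs:.4f}")
    print(f"Human alone:    {p_human_errs:.4f}")
    print(f"Supervised FSD: {p_supervised:.4f}")  # 0.0005, lower than either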


It's impressive software for an ADAS.

But it's not a robotaxi. Even its sensor and compute redundancy isn't robotaxi-grade. Nothing shown at the Cybercab event changed that. They go for form over reliability every time.

With HW3, they ate their redundant compute node because they underestimated the compute required for the task.

Now, even with the redundant node repurposed for non-redundant work, that doesn't seem to be enough, as they are finally admitting HW3 will never graduate from "supervised".

And then there are the many years of lying about its upcoming readiness. There are websites where you can find all of Musk's quotes about it being just around the corner, or about the entire current generation of vehicles becoming money-making robotaxis worldwide with a little software update.

There's no indication at all that they'll break out of the 100-120 miles per safety disengagement they currently sit at (per a community tracker; Tesla itself doesn't publish reviewable safety data).
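
To put that figure in robotaxi terms, a quick back-of-envelope (the daily mileage is an assumption for illustration, not a sourced number):

    # Implication of ~100-120 miles per safety disengagement for a
    # driverless taxi. Daily mileage is a made-up utilization figure.
    miles_per_disengagement = 110  # midpoint of the community-tracked range
    daily_taxi_miles = 200         # hypothetical robotaxi miles per day

    events_per_day = daily_taxi_miles / miles_per_disengagement
    print(f"~{events_per_day:.1f} safety-critical events per cab per day")  # ~1.8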

You being happy that your car can finally make zero-intervention trips is NOT the standard necessary for taking the driver out of the seat.


I can see no path by which Tesla's current L2 system becomes an L4 system as capable as Waymo.

These comments on that video perfectly capture my reaction:

> This is FUD. Who or what speeds up to 55 mph to enter a rotary? How is it that the posted limit and presumably map data indicates 55mp seconds before the rotary? What do you expect FSD which is training on humans using vision and the maps to do? I saw a dinky little rotary sign AT the rotary. I'd slam on the brakes or have an accident too.

> Why would the car come to a stop? I don't see a stop sign, and most roundabouts are yield and I don't see another car blocking your way. Why enter a roundabout at 55? You are wrong, not the car or FSD. You don't know the correct way to drive a roundabout.

FSD mimics human behavior. If you are speeding into a roundabout at 55 mph, you are the one in the wrong, not FSD. It's honestly kind of incredible the ridiculous lengths people go to try to discredit FSD.


That YouTube commenter you quote pretty clearly did not pay attention to the video.

> FSD mimics human behavior. If you are speeding into a roundabout at 55 mph, you are the one in the wrong, not FSD. It's honestly kind of incredible the ridiculous lengths people go to try to discredit FSD.

That's just rephrasing the YouTube comment. Try watching the video yourself. Particularly watch test #7.

Here's a summary:

• The car is on a highway, traveling at a normal highway speed of 55 mph. There is no visual indication that there is a roundabout somewhere up ahead.

• After traveling ~3800 feet there is a sign that indicates a roundabout and says the roundabout speed is 15 mph. The roundabout is not yet visible.

• The car continues at highway speed past another sign ~600 feet past the first that also shows that there will be an upcoming roundabout. The road starts curving after that sign, and the roundabout starts coming into view ~600 feet further down the road.

• The car continues approaching the roundabout at highway speed until the human intervenes. He tried to give FSD as much time as possible to decide to slow down on its own. In some of the tests he waited long enough that when he did hit the brakes, he had to brake very aggressively to get down to 15 mph before entering the roundabout (the rough braking numbers below show why).
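
For a sense of how aggressive that late braking is, here's a minimal constant-deceleration sketch. The braking distances are hypothetical round numbers, not measurements from the video:

    # Slowing from 55 mph to 15 mph before the roundabout, assuming
    # constant deceleration: v1^2 = v0^2 - 2*a*d.
    G = 32.17             # ft/s^2
    MPH_TO_FPS = 5280 / 3600

    v0 = 55 * MPH_TO_FPS  # approach speed, ft/s
    v1 = 15 * MPH_TO_FPS  # target roundabout speed, ft/s

    for dist_ft in (400, 200, 100):  # distance remaining when braking starts
        a = (v0**2 - v1**2) / (2 * dist_ft)
        print(f"brake with {dist_ft:>3} ft left: {a / G:.2f} g needed")

    # 400 ft left: 0.23 g (gentle)
    # 200 ft left: 0.47 g (hard)
    # 100 ft left: 0.94 g (near the grip limit of most tires)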

Even if it does not have that roundabout in its map and did not read the signs, so it is not expecting a roundabout, shouldn't it still see a sharp bend in the road that should not be taken at highway speed and slow down?


I wish we could shut down the road, put a crash dummy in the driver's seat, and see what happens.

> > This is FUD. Who or what speeds up to 55 mph to enter a rotary? How is it that the posted limit and presumably map data indicates 55mp seconds before the rotary? What do you expect FSD which is training on humans using vision and the maps to do? I saw a dinky little rotary sign AT the rotary. I'd slam on the brakes or have an accident too.

This is a laughable hot take. "seconds before".

Watch the video: it starts with him at 43 mph, and he drives at 55 mph for FORTY SECONDS before encountering the roundabout.

All these clowns saying "Oh, in the real world he'd have slowed down for that roundabout".

No. He wouldn't have started slowing down two-thirds of a mile away (40 seconds at 55mph). This is a garbage argument.
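
Quick sanity check on that distance, using only the 55 mph and 40 seconds from the video (pure unit conversion):

    # How far do you travel in 40 seconds at a steady 55 mph?
    mph, seconds = 55, 40

    feet = mph * 5280 / 3600 * seconds
    print(f"{feet:.0f} ft = {feet / 5280:.2f} miles")  # ~3227 ft, ~0.61 miles

Call it roughly two-thirds of a mile; nobody begins slowing for a roundabout that far out.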


Are you serious? That driver “testing” FSD doesn’t seem to understand basic rules of the road, like the fact that you are not supposed to stop at a roundabout.

You're not going to be taking that roundabout at 50 mph. That's why, significantly before the roundabout, there's a 15 mph roundabout warning, which FSD completely ignores.

Almost crashing the car IS a gotcha for any vehicle purporting to be autonomous. Especially when Musk seems to be betting Tesla on it working vastly better than it currently does.


