Hacker News: jellojello's comments

Without lidar, and given the terrible quality of Tesla's onboard cameras, street view would look terrible. The biggest L of Elon's career is the weird commitment to no lidar. If you've ever driven a Tesla, it gives daily messages like "the left side camera is blocked"; cameras and weather don't mix either.


At first I gave him the benefit of the doubt, like Steve Jobs's weird decision to ban Adobe Flash, which ran most of the fun parts of the Internet back then, a ban that ended up spreading HTML5. Now I just think he refused LIDAR for purely aesthetic reasons. The cost is not even that significant compared to the overall cost of a Tesla.


It's important to understand the timeline of the Steve Jobs open letter on Adobe Flash: at that point the iPhone had been out just shy of three years, and the letter predated the first public betas of mobile Flash on Android. So for nearly three years, Apple had been investing in HTML5 technology because Flash wasn't in a deployable form on mobile.

Additionally, Flash required Android phones with a minimum of 256MB of RAM (a bar that two of the three iPhone models shipped at that point would have failed to clear) and, at least initially, only supported software video decoding. Because of the differences in screen dimensions, resolutions, and interaction models (plus the embedding issues due to RAM limitations), websites were still basically broken whether your mobile phone had Flash or not.

My understanding (based on the timing) was always that when Adobe was finally ready to push its partners to bundle mobile Flash, Apple looked at it and decided against it. Adobe made public statements against its partner, and Jobs responded in kind.


That one was motivated by the need to control the app distribution channel, just like they keep the web a second-class citizen in their ecosystem nowadays.


Years ago he called lidar a crutch...

And I agree, it is. Clearly it is theoretically possible without.

But when you can't walk at all, a crutch might be just what you need to get going before you can do it without the crutch!


He didn't refuse it. Mobileye or whoever cut Tesla off because Tesla was using the lidar sensors in a way they didn't approve of. From there he got mad and said "no more lidar!"


Assuming what you say is true, are they the only LIDAR vendor?


False. Mobileye never used lidar. Lmao where do you all come up with this


I think Elon announced Tesla was ditching LIDAR in 2019.[0] This was before Mobileye offered LIDAR. Mobileye used LIDAR from Luminar Technologies from around 2022 to 2025.[1][2] They were developing their own lidar, but cancelled it.[3] They chose Innoviz Technologies as their LIDAR partner for future product lines.[4]

0: https://techcrunch.com/2019/04/22/anyone-relying-on-lidar-is...

1: https://static.mobileye.com/website/corporate/media/radar-li...

2: https://www.luminartech.com/updates/luminar-accelerates-comm...

3: https://www.youtube.com/watch?v=Vvg9heQObyQ&t=48s

4: https://ir.innoviz.tech/news-events/press-releases/detail/13...


The original Mobileye EyeQ3 devices that Tesla began installing in their cars in 2013 had only a single forward facing camera. They were very simple devices, only intended to be used for lane keeping. Tesla hacked the devices and pushed them beyond their safe design constraints.

Then that guy got decapitated when his Model S drove under a semi-truck that was crossing the highway and Mobileye terminated the contract. Weirdly, the same fatal edge case occurred 2 more times at least on Tesla's newer hardware.

https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...


They had radar too. No such incidents since going camera-only, FYI, even on the old Autopilot product.


Thank you!


Never with the product used by Tesla early on.


It's been a decade and it's hard to keep up with all of the drama and ego. It was the EyeQ3 vision system. It used cameras, radar, and ultrasonic sensors, and Tesla was accessing them directly. Mobileye cut them off and Elon put his foot down and said "fine, we'll just use crappy webcams and be fine."

https://www.mobileye.com/news/mobileye-to-end-internal-lidar...

Um, yes they did.

No idea if it had any relation to Tesla though.


Did not


> purely aesthetic reasons

This is huge though.

People aren't setting them on fire during protests, and if an FSD Tesla plows into a farmers market, it might not even make the news.

People hate tech so much that self-driving companies with easy-to-spot cars have had to shut down after just a few mistakes.

Disguising Teslas as plain old regular human-driven cars is a great idea and I wouldn't be surprised if they win the market because of this. Even if they suck at driving.


People aren't setting Teslas on fire? Where do you get that from?

https://www.forbes.com/sites/conormurray/2025/05/01/tesla-pr...


> The cost is not even that significant compared to the overall cost of a Tesla.

That’s true now, but when lidar units first debuted they would have doubled the cost of the car.


When Tesla debuted, the cost of batteries made electric cars more like an expensive novelty. The Tesla roadster certainly was fun, but it wasn't a practical car for day-to-day use.

Of course, things have changed.

Had Tesla gone all-in on Lidar, they could have turned the technology into a commodity; they are a trillion-dollar company producing a million cars a year. Lidar is already present on cheap robot vacuum cleaners, and we have time-of-flight cameras in smartphones. I don't believe it would have been a problem to equip $50k cars with Lidar.


His stated reason was that he wanted the team focused on the driving problem, not sensor fusion "now you have two problems" problems. People assumed cost was the real reason, but it seems unfair to blame him for what people assumed. Don't get me wrong, I don't like him either, but that's not due to his autonomous driving leadership decisions, it's because of shitting up twitter, shitting up US elections with handouts, shitting up the US government with DOGE, seeking Epstein's "wildest party," DARVO every day, and so much more.


Sensor fusion is an issue, but one that is solvable with time and investment in the driving model; sensor-can't-see-anything is a show stopper.

A self-driving solution can be totally shut down by a speck of mud, heavy rain, morning dew, or bright sunlight at dawn and dusk; you can't engineer your way out of sensor blindness.

I don't want a solution that is available to use 98% of the time, I want a solution that is always-available and can't be blinded by a bad lighting condition.

I think he did it because his solution always used the crutch of "FSD Not Available, Right hand Camera is Blocked" messaging and "Driver Supervision" as the backstop to any failure anywhere in the stack. Waymo had no choice but to solve the expensive problem of "Always Available and Safe" and work backwards on price.


> Waymo had no choice but to solve the expensive problem of "Always Available and Safe"

And it's still not clear whether they are using a fallback driving stack for situations where one of the non-essential (i.e. non-camera (1)) sensors is degraded. I haven't seen Waymo clearly state the capabilities of their self-driving stack in this regard. On the other hand, there are such things as washer fluid and high-dynamic-range cameras.

(1) You can't drive in a city if you can't see the light emitted by traffic lights, which neither lidar nor radar can do.


Hence why the two together make up the solution Waymo chose. The proof is in the pudding: Waymos have been driving millions of miles without any intervention, while Tesla requires safety drivers. I would never trust the FSD on my Model 3 to be even nearly perfect all the time.

Lidar also gives you the ability to see through fog and, as it scans, the depth needed to nearly always understand what object is in front of the car.

My Model 3 shows "degraded" or "unavailable" about 2% of the time I'm driving around populated areas. Zero chance it will ever be truly FSD-capable, no matter the software improvements. It'll still be unavailable because the cameras are blinded, blocked, or unable to process the scene, because they can't see the scene.

While you're right that washer fluid usually works on the windshield, it doesn't help the side cameras. And yeah, HDR could improve things, but it won't improve depth perception, and it will never be retrofitted to my Model 3.

Lidar contributes the data most needed to handle the millions of edge cases that exist. With both camera and lidar contributing the data they are both the best at collecting, the risk of the very worst type of accidents is greatly reduced.

I don't see these stats https://waymo.com/safety/impact/ happening for tesla anytime soon.


> without any intervention

but with occasional remote guidance (Waymo doesn't seem to disclose statistics of that). In some cases remote guidance includes placing waypoints[1].

> Lidar also gives you the ability to see through fog and as it scans

Nah. Lidar isn't much better in fog than cameras. If I'm not mistaken, fog, rain, smoke, snow scatter IR light approximately the same as visible light. The lidar beam needs to travel twice the distance and its power is limited by eye-safety concerns.

> FSD on my model 3 to be even nearly perfect all the time

It doesn't need to be perfect. It needs to not hit things, cars, and pedestrians too hard or too often, while mostly obeying traffic rules. Waymo gets quite a few complaints about their cars' behavior[2], but they manage just fine.

[1] third video in https://waymo.com/blog/2024/05/fleet-response

[2] https://www.austintexas.gov/page/autonomous-vehicles


Waymo had safety drivers for a long time. And still have safety drivers to this day when they roll out a new city. You wouldn't have known that because no one was paying attention to this stuff back then.


Waymo also had safety drivers for years.

All you really need is "drive slower if you can't see (because rain, fog, or degraded cameras), or you're in an area where children might run out into the road"


If you have mud on a camera, you can't drive it either way. Lidar or not. The way to actually solve these issues is to have way more cameras for redundancy / self cleaning etc, not other sensors.


LIDAR is notoriously easy to blind, what are you on about? Bonus meme: LIDAR blinds you(r iPhone camera)!


Yeah, it's absurd. As a Tesla driver, I have to say the Autopilot model really does feel like what someone who's never driven a car before thinks driving is like.

Using vision only is so ignorant of what driving is all about: sound, vibration, vision, heat, cold... these are all clues to road condition. If the car isn't sensing all these things as part of the model, you're handicapping it. Lidar is, in a brilliant way, the missing piece of information a car needs without relying on multiple sensors; it's probably superior to what a human can do, whereas vision only is clearly inferior.


The inputs to FSD are:

    7 cameras x 36fps x 5Mpx x 30s
    48kHz audio
    Nav maps and route for next few miles
    100Hz kinematics (speed, IMU, odometry, etc)
Source: https://youtu.be/LFh9GAzHg1c?t=571
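As a rough back-of-the-envelope on the figures above (assuming uncompressed RGB at 3 bytes per pixel and 16-bit mono audio, neither of which the source video specifies), the raw sensor bandwidth works out to:

```python
# Back-of-envelope raw input bandwidth for the quoted FSD sensor figures.
CAMERAS = 7
FPS = 36
PIXELS = 5_000_000           # 5 Mpx per frame
BYTES_PER_PIXEL = 3          # assumption: uncompressed RGB
AUDIO_HZ = 48_000
AUDIO_BYTES_PER_SAMPLE = 2   # assumption: 16-bit mono

video_bps = CAMERAS * FPS * PIXELS * BYTES_PER_PIXEL
audio_bps = AUDIO_HZ * AUDIO_BYTES_PER_SAMPLE

print(f"video: {video_bps / 1e9:.2f} GB/s raw")        # 3.78 GB/s
print(f"audio: {audio_bps / 1e6:.3f} MB/s")            # negligible by comparison
print(f"30s context window: {video_bps * 30 / 1e9:.0f} GB raw")
```

Obviously nothing feeds the network at that raw rate; the point is only the scale gap between the video and audio streams.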


So if they’re already “fusioning” all these things, why would LIDAR be any different?


Tesla went nothing-but-nets (making fusion easy) and Chinese LIDAR became cheap around 2023, but monocular depth estimation was spectacularly good by 2021. By the time unit cost and integration effort came down, LIDAR had very little to offer a vision stack that no longer struggled to perceive the 3D world around it.

Also, integration effort went down but it never disappeared. Meanwhile, opportunity cost skyrocketed when vision started working. Which layers would you carve resources away from to make room? How far back would you be willing to send the training + validation schedule to accommodate the change? If you saw your vision-only stack take off and blow past human performance on the march of 9s, would you land the plane just because red paint became available and you wanted to paint it red?

I wouldn't completely discount ego either, but IMO there's more ego in the "LIDAR is necessary" camp than the "LIDAR isn't necessary" camp at this point. FWIW, I used to be an outspoken LIDAR-head before 2021, when monocular depth estimation became a solved problem. It was funny watching everyone around me convert in the opposite direction at around the same time, probably driven by politics. I get it, I hate Elon's politics too, I just try very hard to keep his shitty behavior from influencing my opinions on machine learning.


> but monocular depth estimation was spectacularly good by 2021

It's still rather weak, and true monocular depth estimation really wasn't spectacularly anything in 2021. It's fundamentally ill-posed, and any priors you use to get around that will come back to bite you in the long tail of things some driver will encounter on the road.

The way it got good is by using camera overlap in space and over time while in motion to figure out metric depth over the entire image. Which is, humorously enough, sensor fusion.


It was spectacularly good before 2021, 2021 is just when I noticed that it had become spectacularly good. 7.5 billion miles later, this appears to have been the correct call.


What are the techniques (and the papers thereof) that you consider to be spectacularly good before 2021 for depth estimation, monocular or not?

I do some tangent work from this field for applications in robotics, and I would consider (metric) depth estimation (and 3D reconstruction) starting to be solved only by 2025 thanks to a few select labs.

Car vision has some domain specificity (high similarity images from adjacent timestamps, relatively simpler priors, etc) that helps, indeed.


Depth estimation is but one part of the problem: atmospheric and other conditions blind optical visible-spectrum sensors, there can be a lack of ambient light (sunlight), and more. Lidar simply outperforms (performs at all?) in these conditions, and provides hardware-backed distance maps, not software-calculated estimates.


Lidar fails worse than cameras in nearly all those conditions. There are plenty of videos of Tesla's vision-only approach seeing obstacles far sooner than a human possibly could in all those conditions, on real customer cars. Many are on the old hardware with far worse cameras.


Interesting, got any links? Sounds completely unbelievable, eyes are far superior to the shitty cameras Tesla has on their cars.


There's a misconception that what people see and what the camera sees are similar. Not true at all. One day when it's raining or foggy, have someone record the driving through the windshield. You'll be very surprised. Even what the camera displays on the screen isn't what it's actually "seeing".


Yea.. not holding my breath on links to superman tesla cameras performing better than eyes


Monocular depth estimation can be fooled by adversarial images, or just scenes outside of its distribution. It's a validation nightmare and a joke for high reliability.


It isn't monocular though. A Tesla has 2 front-facing cameras, narrow and wide-angle. Beyond that, it is only neural nets at this point, so depth estimation isn't directly used; it is likely part of the neural net, but only the useful distilled elements.


I never said it was. I was using it as a lower bound for what was possible.


Always thought the case was for sensor redundancy and data variety - the stuff that throws off monocular depth estimation might not throw off a lidar or radar.


It doesn't solve the "Coyote paints tunnel on rock" problem though.


IIRC, that was only ever a problem for the coyote, though.

Source: not a computer vision engineer, but a childhood consumer of Looney Tunes cartoons.


Time for a car company to call itself "ACME" and the first model the "Road Runner".


Fog, heavy rain, heavy snow, people running between cars or from an obstructed view…

None of these technologies can ever be 100%, so we’re basically accepting a level of needless death.

Musk has even shrugged off FSD related deaths as, “progress”.


Humans: 70 deaths in 7 billion miles

FSD: 2 deaths in 7 billion miles

Looks like FSD saves lives by a margin so fat it can probably survive most statistical games.
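Taking those figures at face value (they are disputed downthread, and the two mile denominators are almost certainly not comparable populations), the implied per-mile rates are:

```python
# Fatality rates implied by the comment's claimed figures, per billion miles.
# Inputs are the comment's numbers, not independently verified data.
human_deaths, human_miles = 70, 7e9
fsd_deaths, fsd_miles = 2, 7e9

human_rate = human_deaths / (human_miles / 1e9)  # deaths per billion miles
fsd_rate = fsd_deaths / (fsd_miles / 1e9)

print(f"human: {human_rate:.1f} per billion miles")
print(f"FSD:   {fsd_rate:.2f} per billion miles")
print(f"ratio: {human_rate / fsd_rate:.0f}x")
```

The next comment's point about confounders (speed, alcohol, road types where drivers enable FSD) is exactly why a raw 35x ratio like this can't be read as a causal safety margin.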


How many of the 70 human accidents would be adequately explained by controlling for speed, alcohol, wanton inattention, etc? (The first two alone reduce it by 70%)

No customer would turn on FSD on an icy road, or on country lanes in the UK which are one lane but run in both directions; it's much harder to have a passenger fatality in stop-start traffic jams in downtown US cities.

Even if those numbers are genuine (2 vs 70) I wouldn't consider it apples-for-apples.

Public information campaigns and proper policing have a role to play in car safety, if that's the stated goal we don't necessarily need to sink billions into researching self driving


Is that the official Tesla stat? I've heard of way more Tesla fatalities than that..


There are a sizeable number of deaths associated with the abuse of Tesla’s adaptive cruise control with lane centering (publicly marketed as “Autopilot”). Such features are commonplace on many new cars, and it is unclear whether Tesla is an outlier, because no one is interested in obsessively researching cruise control abuse among other brands.

There are two deaths associated with FSD.


This is absolutely a Musk defender. FSD and Tesla related deaths are much higher.

https://www.tesladeaths.com/index-amp.html


Autopilot is the shitty lane assist. FSD is the SOTA neural net.

Your link agrees with me:

> 2 fatalities involving the use of FSD


Tesla sales are dead across the world. Cybertruck is a failure. Chinese EVs are demonstrably better.

No one wants these crappy cars anymore.


I don't know what he's on about. Here's a better list:

https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...


Good ole Autopilot vs FSD post. You would think people on Hacker News would be better informed. Autopilot is just lane keep and adaptive cruise control. Basically what every other car has at this point.

"MacOS Tahoe has these cool features". "Yea but what about this wikipedia article on System 1. Look it has these issues."

That's how you come across


Autopilot is the shitty lane assist. FSD is the SOTA neural net.

Your link agrees with me:

> two that NHTSA's Office of Defect Investigations determined as happening during the engagement of Full Self-Driving (FSD) after 2022.


Isn't there a great deal of gaming going on with the car disengaging FSD milliseconds before crashing? Voila, no "full" "self" driving accident; just another human failing [*]!

[*] Failing to solve the impossible situation FSD dropped them into, that is.


Nope. NHTSA's criteria for reporting is active-within-30-seconds.

https://www.nhtsa.gov/laws-regulations/standing-general-orde...

If there's gamesmanship going on, I'd expect the antifan site linked below to have different numbers, but it agrees with the 2 deaths figure for FSD.


Better than I expected. So this was 3 days ago; is this for all previous models, or is there a cutoff date here?


I quickly googled Lidar limitations, and this article came up:

https://www.yellowscan.com/knowledge/how-weather-really-affe...

Seeing how it's by a lidar vendor, I don't think they're biased against it. It seems lidar is not a panacea: it struggles with heavy rain and snow much more than cameras do, and it is affected by cold weather or any contamination on the sensor.

So lidar will only get you so far. I'm far more interested in mmWave radar, which, while much worse in spatial resolution, isn't affected by light conditions or weather, and can directly measure properties of the thing it's illuminating, like its material, the speed it's moving, its thickness.

Fun fact: mmWave-based presence sensors can measure your heartbeat, as the micro-movements show up as a frequency component. So I'd guess it would have a very good chance of detecting a human.

I'm pretty sure even with much more rudimentary processing, it'll be able to tell if it's looking at a living being.
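As a toy illustration of the heartbeat idea above: a small periodic motion buried in noise shows up as a clean spectral peak, which you can find by scanning candidate frequencies with a single-bin DFT. Everything here is synthetic data, not real mmWave output.

```python
import cmath
import math
import random

FS = 100.0      # sample rate, Hz
N = 1000        # 10 seconds of samples
HEART_HZ = 1.2  # ~72 bpm micro-motion

# Synthetic displacement trace: a faint heartbeat sinusoid plus noise.
random.seed(0)
signal = [
    0.5 * math.sin(2 * math.pi * HEART_HZ * n / FS) + random.gauss(0, 0.1)
    for n in range(N)
]

def bin_power(x, f, fs):
    """Magnitude of the DFT of x evaluated at a single frequency f."""
    return abs(sum(v * cmath.exp(-2j * math.pi * f * n / fs)
                   for n, v in enumerate(x)))

# Scan plausible heart rates (0.5-3.0 Hz, i.e. 30-180 bpm).
candidates = [0.5 + 0.1 * k for k in range(26)]
best = max(candidates, key=lambda f: bin_power(signal, f, FS))
print(f"detected {best:.1f} Hz (~{best * 60:.0f} bpm)")
```

A real radar pipeline would work on phase changes of the reflected chirp rather than a precomputed displacement trace, but the frequency-domain detection step is the same idea.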

By the way: what happened to the idea that self-driving cars will be able to talk to each other and combine each other's sensor data, so if there are multiple ones looking at the same spot, you'd get a much improved chance of not making a mistake.


Lidar is a moot point. You can't drive with just lidar, no matter what. That's what people don't understand. The most common objection I hear: "What if the camera gets mud on it?" OK, then you have to get out and clean it, or it needs an auto-cleaning system.


Maybe vision-only can work with much better cameras, with a wider spectrum (so they can see through fog, for example) and self-cleaning/zero upkeep (so you don't have to pull over to wipe a speck of mud from them). Nevertheless, LIDAR still seems like the best choice overall.


Autopilot hasn’t been updated in years and is nothing like FSD. FSD does use all of those cues.


I misspoke; I'm using Hardware 3 FSD.


From the perspective of viewing FSD as an engineering problem that needs solving I tend to think Elon is on to something with the camera-only approach – although I would agree the current hardware has problems with weather, etc.

The issue with lidar is that many of the difficult edge cases of FSD are visible-light vision problems. Lidar might be able to tell you there's a car up front, but it can't tell you that the car has its hazard lights on and a flat tire. Lidar might see a human-shaped thing in the road, but it cannot tell whether it's a mannequin leaning against a bin or a human about to cross the road.

Lidar gets you most of the way there when it comes to spatial awareness on the road, but you need cameras for most of the edge-cases because cameras provide the color data needed to understand the world.

You could never have FSD with just lidar, but you could have FSD with just cameras if you can overcome all of the hardware and software challenges with accurate 3D perception.

Given that lidar adds cost and complexity, and most edge cases in FSD are camera problems, I think camera-only probably forces engineers to focus their efforts in the right place rather than hitting bottlenecks from over-depending on lidar data. This isn't an argument for camera-only FSD, but from Tesla's perspective it does keep costs down and allows them to continue to produce appealing cars, which is obviously important if you're coming at FSD from the perspective of an automaker trying to sell cars.

Finally, adding lidar as a redundancy once you've "solved" FSD with cameras isn't impossible. I personally suspect Tesla will eventually do this with their robotaxis.

That said, I have no real experience with self-driving cars. I've only worked on vision problems and while lidar is great if you need to measure distances and not hit things, it's the wrong tool if you need to comprehend the world around you.


This is so wild to read when Waymo is currently doing like 500,000 paid rides every week, all over the country, with no one in the driver's seat. Meanwhile Tesla seems to have a handful of robotaxis in Austin, and it's unclear if any of them are actually driverless.

But the Tesla engineers are "in the right place rather than hitting bottlenecks from over depending on Lidar data"? What?


I wasn't arguing Tesla is ahead of Waymo? Nor do I think they are. All I was arguing was that it makes sense from the perspective of a consumer automobile maker to not use lidar.

I don't think Tesla is that far behind Waymo, though, given Waymo's significant head start, the fact that Waymo has always been a taxi-first product, and that they're using significantly more expensive tech than Tesla is.

Additionally, it's not like this is a lidar vs cameras debate. Waymo also uses and needs cameras for FSD for the reasons I mentioned, but they supplement their robotaxis with lidar for accuracy and redundancy.

My guess is that Tesla will experiment with lidar on their robotaxis this year, because robotaxi design decisions should differ from those of a consumer automobile. But I could be wrong, because if Tesla wants FSD to work well on visually appealing and affordable consumer vehicles, then they'll probably have to solve some of the additional challenges with a camera-only FSD system. I think it will depend on how much Elon decides Tesla needs to pivot into robotaxis.

Either way, what is not debatable is that you can't drive with lidar only. If the weather is so bad that cameras are useless, then Waymos are also useless.


What causes LiDAR to fail harder than normal cameras in bad weather conditions? I understand that normal LiDAR algorithms assume the direct paths from light source to object to camera pixel, while a mist will scatter part of the light, but it would seem like this can be addressed in the pixel depth estimation algorithm that combines the complex amplitudes at the different LiDAR frequencies.

I understand that small lens sizes mean that falling droplets can obstruct the view behind the droplet, while larger lens sizes can more easily see beyond the droplet.

I seldom see discussion of the exact failure modes for specific weather conditions. Even if larger lenses are selected the light source should use similar lens dimensions. Independent modulation of multiple light sources could also dramatically increase the gained information from each single LiDAR sensor.

Do self-driving camera systems (conventional and LiDAR) use variable or fixed tilt lenses? Normal camera systems have the focal plane perpendicular to the viewing direction, but for roads it might be more interesting to have a large swath of the horizontal road in focus. At least having 1 front facing camera with a horizontal road in focus may prove highly beneficial.

To a certain extent, an FSD system predicts the best course of action. When different courses of action have similar logits of expected fitness, we can speak of doubt. With RMAD we can figure out which features, facets of input, or parts of the view are causing the doubt.

A camera has motion blur (unless you can strobe the illumination source, but in daytime the sun is very hard to outshine), it would seem like an interesting experiment to:

1. identify in real time which doubts have the most significant influence on the determination of best course of action

2. have a camera that can track an object to eliminate motion blur but still enjoy optimal lighting (under the sun, or at night), just like our eyes can rotate

3. rerun the best course of action prediction and feed back this information to the company, so it can figure out the cost-benefit of adding a free tracking camera dedicated to eliminating doubts caused by motion blur.
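One way to operationalize the "doubt" notion above is to softmax the action logits and look at the margin between the top two candidates: a small margin means the planner is torn. The function name `doubt` and the example logits are made up for illustration, not from any real FSD stack.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def doubt(logits):
    """1 - (p_top1 - p_top2): near 1 when torn between actions, near 0 when decisive."""
    p = sorted(softmax(logits), reverse=True)
    return 1.0 - (p[0] - p[1])

confident = [4.0, 0.5, 0.2]  # one action clearly wins
torn      = [2.1, 2.0, 0.1]  # top two actions nearly tied

print(f"confident: doubt={doubt(confident):.2f}")
print(f"torn:      doubt={doubt(torn):.2f}")
```

A per-frame signal like this is what you would then trace back (e.g. with gradient attribution) to the input regions responsible, as the comment suggests.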


Tesla has driven 7.5B autonomous miles to Waymo's 0.2B, but yes, Waymo looks like they are ahead when you stratify the statistics according to the ass-in-driver-seat variable and neglect the stratum that makes Tesla look good.

The real question is whether doing so is smart or dumb. Is Tesla hiding big show-stopper problems that will prevent them from scaling without a safety driver? Or are the big safety problems solved and they are just finishing the Robotaxi assembly line that will crank out more vertically-integrated purpose-designed cars than Waymo's entire fleet every day before lunch?


Tesla's also been involved in WAY more accidents than Waymo - and has tried to silence those people, claim FSD wasn't active, etc.

What good is a huge fleet of Robotaxis if no one will trust them? I won't ever set foot in a Robotaxi, as long as Elon is involved.


Waymo just hit its first pedestrian, ever. It did it at a speed of 6mph, and it was estimated a human would have hit the kid at 14mph (it was going 17mph when a small child jumped out in front of it from behind a black SUV).

First pedestrian struck. That's crazy.

Tesla just disengages FSD anytime a sensor is slightly blocked/covered/blinded.. Waymo is out here doing full self-driving 100% of the time and basically never hurts anyone.

I don't get the Tesla/Elon love here. I like my Model 3, but it's never going to get real FSD, and that sucks. Elon also lies about the roadmap, timing, etc. I bet the Roadster is canceled now. Why do people like inferior sensors and autistic Hitler?


Waymos disengage and get teleoperated too?


Not really. Waymos can’t be driven remotely, their remote operators can give the car directions, e.g. “use this lane”, and then the autonomous system controls the vehicle to execute those directions.

I’m sure latency and connectivity are too much of a risk to do it any other way.

The only Waymos driven by a human are the ones with human drivers physically in the car


There are more Teslas on the road than Waymos, by several orders of magnitude. Additionally, the types of roads and conditions Teslas drive under are completely incomparable to Waymo's.


Yes that was accounted for above, but this isn't autonomous apples to apples


semi autonomous


>>The biggest L of elon's career is the weird commitment to no-lidar.

I thought it was the Nazi salutes on stage and backing neo-nazi groups everywhere around the world, but you know, I guess the lidar thing too.


maybe it's better to say it was the biggest L of his engineering career instead of his political career


I have HW3, but FSD reliably disengages at this time of year with sunrise and sunset during commute hours.


Yep, and won't activate until any morning dew is off the sensors.. or when it rains too hard.. or if it's blinded by a shiny building/window/vehicle.

I will never trust 2d camera-only, it can be covered or blocked physically and when it happens FSD fails.

As cheap as LIDAR has gotten, adding it to every new tesla seems to be the best way out of this idiotic position. Sadly I think Elon got bored with cars and moved on.


If the camera is covered or blocked, you can't drive plain and simple, as you can't drive a car (at least on Earth) with just Lidar. The roads are made for eyes. Maybe on Rocky's homeworld you can have a Lidar only system for traveling.


This will considerably skew the statistics, a low sun dramatically increases accident rates on humans too.


FSD 14 on HW4 does not. Its dynamic range is equivalent to or better than a human's.


This is amazing, if you feel like opening up an entire language to being learned more easily. Farsi is a VERY overlooked language; my wife and her family speak it, but it's so difficult finding great language lessons (it's also called Persian/Dari).


Thank you.

I had a quick look at Farsi datasets, and there seem to be a few options. That said, written Farsi doesn’t include short vowels… so can you derive pronunciation from the text using rules?


> written Farsi doesn’t include short vowels… so can you derive pronunciation from the text using rules?

You can't, but Farsi dictionaries list the missing short vowels/diacritics/"eraab" for every word.

For instance, see this entry: https://vajehyab.com/dehkhoda/%D8%AD%D8%B3%D8%A7%D8%A8?q=%D8...

With the short vowel on the first letter it would be written حِساب (normally written as just حساب)

The dictionary entry linked shows that there is a ِ on the first letter ح

But you would have to disambiguate between homographs that differ only in the eraab.


I made a parallel literal translator for Farsi:

https://pingtype.github.io/farsi.html

Paste in some parallel text (e.g. Bible verses, movie subtitles, song lyrics) and read what Farsi you can on the first line, looking to the lower lines for clues if you get stuck.

The core version of Pingtype is for traditional Chinese, but it supports a few other languages too.


You are at the very top of Mount Autism my friend. I should have expected this.


The parent commenter is just saying that it’s not executed well enough for most people to get the joke, and I agree. Here a better one: https://x.com/markpike/status/1442565924440002563

