I bought a Model 3 in 2018, a Y in 2020, and another 3 in 2024. The end-user design has only gotten worse in various aspects of the car. On the non-safety side: living in the Northeast and mid-Atlantic, I find the door handles freeze shut during the winter and become impossible to open from outside the car.
But why continue to buy into these poor end-user design experiences, you ask? My car maintenance costs since 2018 have been a gallon of windshield wiper fluid and new tires. So I put up with poor design decisions.
But the cup holders in the latest Model 3 may be my breaking point.
> My car maintenance costs since 2018 have been a gallon of windshield wiper fluid and new tires.
If you're buying a new car every two years, you could get the same low level of maintenance with an ICE Toyota. Or, you know, you could buy an EV from a company that knows how to make a door handle, and still get an EV's low maintenance. There are many to choose from.
>My car maintenance costs since 2018 have been a gallon of windshield wiper fluid and new tires
Just to be clear, you keep buying stupid and poorly designed cars to save a couple hundred dollars on oil changes - which are the ONLY maintenance item you need to do on any brand-new ICE from purchase to about 100k miles?
You repeatedly purchase brand-new cars at luxury, "new car premium" prices to save a few hundred dollars?
Uhhh.... What?
My ICE car's entire maintenance budget since I purchased it 5 years ago has been about $300 worth of oil changes, once per year, and Europeans claim that's way too often.
> Generally speaking, for most people, bills go up as they age (kids, health, yada-yada).
To a point, yes. There was a time when you could realistically pay off your mortgage before you were fifty, and for some people maybe this is still the case. College expenses are another consideration; it just depends on how much of those costs you as a parent are going to shoulder versus grants, the military, or the kids earning it themselves.
For myself, costs have gone down and I'm able to donate more money to charity. So if you can swing a bigger paycheck and you're motivated to help others, I think it's worth doing.
So imagine you have some number of cron jobs that require a bunch of secrets, and these things fire every minute or 30 seconds or what have you. You could save as much as $0.25 a month!
Okay I guess I've just had a different experience entirely. Maybe I'm jaded by hallucinations.
The code ChatGPT generates is often bad in ways that are hard to detect. If you are not an experienced software engineer, the defects could be impossible to detect until you/ChatGPT have gone and exposed all your customers to bad actors, crashed at runtime, or done something terribly incorrect.
As far as other thought work goes, I am not consulting ChatGPT over, say, a dietician or a doctor. The hallucination risk is too high. Producing an answer is not the same as producing a correct answer.
I agree. I've just seen it hallucinate too many things that on the surface seem very plausible but are complete fabrications. Basically my trust is near 0 for anything ChatGPT, etc., spits out.
My latest challenge is dealing with people who trust ChatGPT to be infallible and just quote the garbage to make themselves look like they know what they're talking about.
> Okay I guess I've just had a different experience entirely.
I've seen both the good and the bad. I really like the good parts. Most recently, Claude 3.5 Sonnet fixed a math error in my code (I prompted it to check for the error from a well-written bug report, and it fixed it ever so perfectly).
These days, it is pretty much second nature for me to pull up a new file & prompt Copilot to complete writing the entire code from my comment trails. I don't think I've seen as much change in my coding behaviour since Borland Turbo C -> NetBeans.
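To make that concrete, here's a hypothetical sketch of the kind of comment trail I mean (the file and column names are made up for illustration), followed by roughly the sort of completion Copilot tends to produce:

```r
# Comment trail (what I actually type):
# Read monthly_sales.csv (columns: region, month, revenue).
# Sum revenue per region and print the regions sorted by total, descending.

# Completion (roughly what Copilot fills in under the comments):
sales <- read.csv("monthly_sales.csv")
totals <- aggregate(revenue ~ region, data = sales, FUN = sum)
totals <- totals[order(-totals$revenue), ]
print(totals)
```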
If your process is asking it to "write me all this code", then you slap it in production, you're going to have a bad time. But there's intermediate ground.
>I am not consulting ChatGPT over, say, a dietician or doctor
Do you know any doctors, by chance? You have way more faith in experts than I do.
ChatGPT is just statistically associating what it's observed online. I wouldn't take dietary advice from the mean output of Reddit with more trust than I'd put in an expert.
Doctors can be associating what they’ve learned, often with heavy biases from hypochondriacs and not enough time per patient to really consider the options.
I’ve had multiple friends get seriously ill before a doctor took their symptoms seriously, and this is a country with decent healthcare by all accounts.
> Doctors can be associating what they’ve learned, often with heavy biases from hypochondriacs
So true. And it's hard to question a doctor's advice, because of their aura of authority, whereas it's easy to do further validation of an LLM's diagnosis.
I had to change doctors recently when moving towns. It was only when I chanced on a good doctor that I realised how bad my old doctor was - a nice guy, but cruising to retirement. And my experience with cardiologists has been the same.
Happy to get medical advice from an LLM though I'd certainly want prescriptions and action plans vetted by a human.
By the time a doctor paid me enough attention to realise something was wrong, I had suffered a spinal cord injury whose damage can never be reversed. I'm not falling all over myself to trust ChatGPT, but I've gotten practically zero from doctors either. Nobody moved until I threatened to start suing.
I sometimes use ChatGPT to prepare for a doctor's visit so I can have a more intelligent conversation even if I may have more trust overall in my doctor than in AI.
Will be cool once we have active agents tho. Surely the learning/research process isn't that difficult even for current LLMs/similar architectures. If it can teach itself, or it can collate new (never seen) data for other models then that's the cool part.
You realize that "online" doesn't just mean Reddit, but also Wikipedia and arXiv and PubMed and other sources perused by actual experts? ChatGPT has read more academic publications in any given field than any human.
I've seen so many doctors advertising or recommending homeopathic "medicines" or GE-132 [1] that I would be fairly more confident in an LLM plus my own verification from reliable sources. I'm no doctor, but I know more than enough to recognize bullshit, so I wouldn't recommend this approach to just anyone.
I recently needed to help a downstream team with a problem with an Android app. I never did mobile app dev before, but I was able to spin up a POC (having not coded in Java for 22 years) and solve the problem with the help of ChatGPT 4.0.
Sure I probably would have been able to do it without ChatGPT, but it was so much easier to have something to bounce ideas off-of. A safety net, if you will.
The hallucination risk was irrelevant: it did hallucinate a little early on, but I told it it was hallucinating, and we moved on to a different way of solving the problem. It was easy enough to verify it was working as expected.
Seems to me this is the equivalent of fast retrieval and piecing together from a huge number of examples in the data. Doing it yourself might take far more time; that's a plus for the tools. In other words, a massively expensive (for the service provider) auto-complete.
But try to do something much simpler that has far fewer examples in the data (a typical case is something with bad documentation), and it falls apart. I even tried to use Perplexity to create a dead simple CLI command, and it hallucinated an answer (looking at the docs, it misused the parameter, and may have picked up on someone who gave an incorrect answer in the data).
It's already gotten significantly better and faster in a few years. Maybe LLMs will hit a wall in the next 5 years, but even if they do, they're still extremely useful, and there are always other ways to optimize the current technology. This is already a major development for society.
>The code ChatGPT generates is often bad in ways that are hard to detect. If you are not an experienced software engineer, the defects could be impossible to detect until you/ChatGPT have gone and exposed all your customers to bad actors, crashed at runtime, or done something terribly incorrect.
I wonder about this a lot, because there's a future here where a decent amount of software engineering is offloaded to these AIs and we reach a point, in the near future, where no one really knows or understands what's going on. That seems bad. Put another way, suppose that your primary care doctor is really just using MedAI to diagnose and recommend treatment for whatever it is you went in to see him about. Over time, these sorts of shortcuts metastasize and the doctor ends up not really knowing anything about you, or the other patients, or what he's really doing as a doctor ... it's just MedAI (with whatever wrongness rate is tolerable for the insurance adjusters). Again, seems bad. There's a palpable loss of human knowledge here that's enabled by a "tool" that's allegedly going to make us all better off.
>The code ChatGPT generates is often bad in ways that are hard to detect. If you are not an experienced software engineer, the defects could be impossible to detect
I keep hearing this, but it's incorrect. While I only know R, which is obviously a simple language, I would never type out all my code and go without testing to ensure it does what I intended before using it regularly.
So I can't imagine someone who knows a more complex language typing it all out and integrating it into business systems at work, or anything else, without testing it first.
Why would AI be any different?
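Even a few informal checks catch the obvious failures. A sketch of what I mean, with a hypothetical helper an LLM might hand back:

```r
# Hypothetical one-liner an LLM might return: percent change between
# an old and a new value.
pct_change <- function(old, new) (new - old) / old * 100

# Quick sanity checks before it touches anything real.
stopifnot(pct_change(100, 110) == 10)
stopifnot(pct_change(50, 25) == -50)
stopifnot(all(pct_change(c(10, 20), c(20, 10)) == c(100, -50)))
```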
Why the hell are AI skeptics acting like getting help from an LLM would involve not testing anything? Of course I test it! Why on earth wouldn't I? Just as I tested code made by freelancers I hired on commission before using the code I bought from them. Do AI skeptics really not test their own code? Are you all insane?
> While I only know R, which is obviously a simple language
Take it from someone who started with R: R is 100% not a simple language. If you can write good R, you're probably a surprisingly good potential SE, as R is kinda insane and inconsistent due to 50+ years of history (from S to R, etc.).
Hmmm.. I'm trying to imagine interviewing for SE and telling them I got wealthy from a crypto market-making algorithm I coded in R during Covid and the interviewer responding with anything but laughter or with silence as they ponder legal ways to question my mental health.
It's an excellent language, I think, for many reasons. One is that you can work with data within hours: even before learning what packages or classes are, you get native objects for data storage, wrangling, and analysis. I could import my Excel data and pick up the native-function cheat sheet so fast that I was excited to learn what packages were, because I couldn't wait to see what I could do.
That was my experience in like 2010, maybe, and after having C++ and Python go in and out of my head during college multiple times. I view R as simple only because it actually made me feel driven to keep learning rather than helpless to ever learn coding at all. Worth noting that I was a Stat/Probability tutor with a Finance degree and much Excel experience.
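For a flavour of what I mean by native objects, here's a base-R sketch (the grades.csv file and its columns are made up for illustration):

```r
# Base R only - no packages - from a spreadsheet export to analysis.
grades <- read.csv("grades.csv")   # columns: student, subject, score

# Wrangle: keep passing scores, then average by subject.
passing <- subset(grades, score >= 60)
by_subject <- aggregate(score ~ subject, data = passing, FUN = mean)

# Quick look, plus one of the built-in summaries.
head(by_subject)
summary(grades$score)
```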
> That was my experience in like 2010, maybe, and after having C++ and Python go in and out of my head during college multiple times. I view R as simple only because it actually made me feel driven to keep learning rather than helpless to ever learn coding at all. Worth noting that I was a Stat/Probability tutor with a Finance degree and much Excel experience.
Ah yeah, makes sense. That's the happy path for learning R (know enough stats etc to decode the help pages).
That being said, R is an interesting language with lots of similarities to both C-based languages and Lisp (R was originally a Scheme interpreter), so it's surprisingly good at lots of things (except string manipulation; it's terrible at that).
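A tiny sketch of that Lisp side, if anyone's curious: functions, and even unevaluated code, are ordinary values you can pass around.

```r
# Functions are first-class values, as in Lisp.
compose <- function(f, g) function(x) f(g(x))
root_mean <- compose(sqrt, mean)
root_mean(c(1, 4, 9))                # ~2.16

# Code itself is data: capture an expression, inspect it, evaluate it.
expr <- quote(a + b * 2)
expr[[1]]                            # the `+` symbol heading the call
eval(expr, list(a = 1, b = 3))       # 7

# The familiar functional tools are built in, too.
Reduce(`+`, 1:5)                     # 15
Map(function(x, y) x * y, 1:3, 4:6)  # list of 4, 10, 18
```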
Easy answer. Ask ChatGPT to write testable code, and tests for the code, then just verify the tests. If the tests don't work, have ChatGPT use the test output to rewrite the code until it does.
If you can't have ChatGPT write testable code because of your architecture, you have other problems. People with bad process and bad architecture saying AI is bad because it doesn't work well with their dumpster fire systems, 100% facepalm.
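A minimal sketch of that loop, assuming a hypothetical parse_amount function ChatGPT produced and using testthat for the tests:

```r
library(testthat)

# Hypothetical function ChatGPT wrote on request.
parse_amount <- function(x) as.numeric(gsub("[,$]", "", x))

# The tests are the part I actually read and verify by hand.
test_that("parse_amount handles common inputs", {
  expect_equal(parse_amount("$1,234.50"), 1234.5)
  expect_equal(parse_amount("0"), 0)
  expect_true(is.na(suppressWarnings(parse_amount("n/a"))))
})

# If a test fails, paste the failure output back to ChatGPT and have it
# rewrite the function until the suite passes.
```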
> If you can't have ChatGPT write testable code because of your architecture, you have other problems.
There are lots of reasons why code is hard to test automatically that have nothing to do with the architecture of the code, but rather with the domain for which the code is written and in which it runs.
Tesla uses independent body shops that they certify; I know this because I've used one as a result of a deer accident. The service centers do not handle damage beyond a certain level.
My personal experience so far with Tesla specifically - and this is not to say they are doing the right thing with regard to parts or anything - was when my wife hit a deer and a bunch of the driver's front side of the car had to be replaced: it took six days. Of course, the bill insurance covered was around $12K, which is just insane (headlight, hood, fender, panel, side mirror, camera). The repair timeline was probably because the Tesla density in my area (around Richmond, VA) is not high like it is in places such as California.
Other than that, which was $0 out of pocket, I'll have had my Model 3 for six years in July with almost $0 in parts or service. I replaced the 12V battery, which I had to buy for $75, and tires twice. This is way more cost-effective than my Lexus (around $4,000 in repair service over 5 years, not counting tires) or my Mercedes (around $3,000 in repair service over 2 years, also not counting tires - I had these AMG rims with wider rear tires, so tire replacement had to happen early). So it's hard for me to complain about Tesla, but I can see how repairs could be an issue for people who have had problems in areas with a lot of Teslas on the road.
Going on 4 years with our Hyundai, and we've literally only had to replace tires. Both of our prior vehicles, Chevys, only required tires and brakes outside of a single major repair: a melted catalytic converter. That would have been a $2k repair, tops, over the respective 8- and 18-year life spans of those vehicles.
If it's tech and they want a 30-45 minute interview with live coding on something like HackerRank - especially if whatever brain teaser they've chosen has absolutely nothing to do with the field they operate in - I'd put the chances around 80%.
> I don't see how AI, quantum and block chain are at all equivalent.
It's not that there is any claim of equivalency; it's that these are the technology trends that are most useful - the trend itself, never mind any sort of usable technology - for those who grift.
It's not an uncommon argument to try and draw parallels between these - X was a fairly useless, yet extremely speculative new tech scam, and Y is speculative new tech. Therefore Y is also a useless scam, QED.
There's a difference between something being an extremely hyped development and it being an actual grift down to the core. The internet was an extremely overhyped development, but ultimately not a grift. Cryptocurrency was, to a large extent, both. Whether generative AI is one or the other won't be apparent until a bubble truly starts growing.
Something can be both a good development and an overhyped grift. You have to keep in mind that for a lot of people and their businesses, the grift is literally the entire angle. Not building a technology. Not building a 100-year-old business. But getting rich as fast as you can while you have the opportunity, tech and business be damned. Ironically, both the real technologists and the grifters benefit from this preaching of misleading overstatements and half-truths from the rooftops. It's therefore tolerated, almost as a funding mechanism for the industry at large. macOS 15, same as it ever was, now with AI under the hood. Sounds like a good seller to me.
For all these "it is better" points, it still doesn't even begin to outweigh the "you get to spend time with your children" point. This is the most important part of child rearing and once they've moved on you'll be thankful you spent your time with them instead of... whatever all that income/societal optimization stuff is. You get to raise your children once.
If one parent is working and the other is home with the kids, chances are that the working one can't take as much time off or has a harder time setting a healthy work-life balance (increasing your pay requires working harder).
Using myself as an example: the only reason I can pick up my kid early from kindergarten and spend quality time with him is that I can afford not to work 9-5 every day, because my wife also works.
As with most things, it is a balancing act. Children also need to socialize and learn to operate in a social setting independently of their parents. Day cares play an important role in this respect in our nuclear-family-based societies. Confining children to their parents' side all the time is also not going to be good for their development.
I guess you missed the part where I said that it is better for the child. Reducing my argument to "income/societal optimization stuff" is arguing in bad faith.
> browser search bar and get perfectly good answers
That depends. If your search engine is Google, you'd get things like "The best 17 AAA games by budget to buy in 2024" or "22 triple A games for a tight budget in 2024."