bsoles's comments

Outsourcing their thinking is going to be the stupidest thing humans ever did and we won't even be smart enough to understand that this is the case.

But humans have evolved to socialize thinking, haven't we?

What is representative democracy if not that?

One reason - the main reason? - we live in groups with different roles and skills is to rely on others to think for us.


> But humans have evolved to socialize thinking, haven't we?

An overwhelmingly large number of people keep saying that socialism is bad, individualism is where it's at. I trust they're right.


Thought we learned this lesson with attention span/ADHD-mimicking symptoms from phone addiction but apparently not!

The title should have said "Anthropic stole GCC and other open-source compiler code, without attribution or compensation, to create a subpar, non-functional compiler". Open source was never meant for thieving megacorps like them.

No, I did not read the article...


I am in the same boat. I left management to go back to a senior IC role in my early 50s. I am perceived as an important contributor in my company, but I am 90% sure that no big company would ever hire me. I am also pretty sure I cannot pass LeetCode these days, even though I work on implementing scientific algorithms that actually get used by major engineering/manufacturing companies around the world.

I feel like a perfect realization of Goodhart's Law is about to happen in the push to move up our rankings.

I asked my students, in a take-home lab, to write tests for a function that computes the Collatz sequence. Half of the class returned AI-generated tests that fed the algorithm floating-point and negative numbers and expected "correct" results, rather than treating them as input-validation cases. I am not assigning anything take-home anymore.
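For context, a minimal sketch of the kind of test I had in mind, in Python; the function and test names are made up, and pytest is assumed as the runner. Known sequences get checked for valid inputs, while floats, negatives, and zero are rejected as invalid input rather than treated as cases with a "correct" Collatz result.

    import pytest

    def collatz_sequence(n):
        """Return the Collatz sequence starting at n and ending at 1."""
        if not isinstance(n, int) or n < 1:
            raise ValueError("n must be a positive integer")
        seq = [n]
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            seq.append(n)
        return seq

    def test_known_sequence():
        # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
        assert collatz_sequence(6) == [6, 3, 10, 5, 16, 8, 4, 2, 1]

    def test_rejects_invalid_input():
        # Floats, negatives, and zero are input-validation cases,
        # not inputs with a "correct" Collatz result.
        for bad in (2.5, -7, 0):
            with pytest.raises(ValueError):
                collatz_sequence(bad)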

There are, proportionally, more lawyers than software engineers in prison, I would claim. A code of ethics doesn't really mean much.

> ... regular wires with plastic isolation, but those can be a pain to strip, even with a semi-decent stripping tool.

> Boil away a millimeter or 2 of enamel without cutting the wire off the spool.

The person spends 18 seconds (see the video) "boiling away" and tinning the tip of the enamel wire, yet complains that stripping a wire with a wire stripper takes 5 seconds.

I mostly use breadboards even with microcontroller stuff (at 16 MHz or so) and never run into any problems when constructing semi-permanent circuits. The PCB versions of breadboards (https://www.adafruit.com/product/571) also work pretty well. There are also little PCB adapters for putting surface-mount components on breadboards, and those work well too.


Over the last 25 years of building commercial software, and having been a programming enthusiast since I was 15 years old, I have come to the conclusion that self-improvement (in the sense of gaining real expertise in a field, building a philosophy of things, and doing the right things) is in direct opposition to creating "value" in today's corporate/commercial sense.

Using AI/LLMs, you will perhaps create more commercial value for yourself or your employer, but it will not make you a better learner, developer, creator, or person. Going back to the electronic-calculator analogy that people like to reach for these days when discussing AI, I now think that, yes, electronic calculators actually made us worse at using our brains for complex things, which is the thing I value more than creating profits for some faceless corporation that happens to be my employer at the moment.


Why are you so certain that LLMs/AI can't be used as a tool to learn and grow?

Like Herbie Hancock once said, a computer is a tool, like an axe. It can be used for terrible things, or it can be used to build a house for your neighbor.

It's up to people how we choose to use these tools.


Just putting it out there, not really interested in exercising the metaphor. I tend to be able to own my tools; these are closer to services.

> Why are you so certain that LLMs/AI can't be used as a tool to learn and grow?

Because every other post in here, for example, starts with "I vibe coded..." and not with "I learned something new today on ChatGPT".


I’m vibe coding apps that help me explore stuff and learn things. That’s their specific purpose.

Maybe people that learn stuff from AI aren't the type to enthusiastically make posts about it?

> The false positive rate is 0. The tool never says human writing is AI.

That cannot be true as it would be easy for a human to write in the style of AI, if they choose to. Whoever is making that claim is lying, because money...


Read the paper, dude. It's not an advertisement, it's an investigation. They performed an experiment that included 29 human-written papers. One of them got a score of 11% likely to be AI; the rest got a score of 0%. The tool never labeled any human writing as AI with high confidence.
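To spell out the arithmetic behind "never", here is a tiny sketch using the scores quoted above (28 papers at 0%, one at 11%); the 50% cutoff for actually flagging a paper as AI-written is my assumption, not a number from the paper.

    # Scores the tool gave the 29 human-written papers in the experiment
    # described above: 28 scored 0% likely-AI, one scored 11%.
    human_scores = [0.0] * 28 + [0.11]

    THRESHOLD = 0.5  # assumed cutoff for flagging a paper as AI-written

    false_positives = sum(score >= THRESHOLD for score in human_scores)
    print(f"false positive rate: {false_positives / len(human_scores):.0%}")  # -> 0%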

> That cannot be true as it would be easy for a human to write in the style of AI, if they choose to.

Is that the nightmare scenario that everybody in this thread is freaking out about?

Students who go to great effort to deliberately make it look like they are cheating, they're the ones you're afraid will be falsely accused of cheating?

We're on our way to dystopia because people who go out of their way to look suspicious on purpose, arouse suspicion?


The reliability of any AI tool with potentially severe consequences for people needs to be tested using adversarial patterns. This is nothing new, yet the article in question fails to do that. They test the happy paths and find the results satisfactory for themselves.

It is very common for academic investigations to report better than 90%, or even 95%, accuracy, while the same AI tools fail miserably in the real world.

So, yes, this is the nightmare scenario I am afraid of: a simplistic "investigation" being used to justify unproven AI tools with real-life consequences for people.


This is just sad to read. AI makes all of us stupider by the day, no matter what the tech bros and wanna-be billionaires of HN tell us...

