When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
To my mind, it's like someone saying "I asked Fred down at the pub and he said...". It's someone stupidly repeating something that's likely stupid anyway.
You can have the same problem with Googling things. LLMs usually reach conclusions that align with what I find when I do the independent research. Google isn't anywhere near as good as it was five years ago; all the years of crippling their search ranking system and suppressing results have caught up with them, to the point that most LLMs are Google replacements.
In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."
It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.
(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)
I think this is different than on HN or other message boards; it's not really used by people as a hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.
> can actually be pretty useful. It indicates somebody already minimally investigated a thing
Every time this happens to me at work one of two things happens:
1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.
2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.
Yeah, if the person doing it is smart, I would trust that they used a reasonable prompt and ruled out flagrant BS answers. Sometimes the key thing is just learning the name of the thing you're looking for. It's just as good/annoying as reporting what a Google search gives for the answer. I guess I assume most people will do the AI query/search and then decide to share the answer based on how good or useful it seems.
These days, most people who try googling for answers end up reading an article which was generated by AI anyway. At least if you go right to the bot, you know what you're getting.
> When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.
If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.
If someone is asking a technical question along the lines of “how does this work” or “can I do this,” then I’d expect them to Google it first. Nowadays I’d also expect them to ask ChatGPT. So I’d appreciate their preamble explaining that they already did that, and giving me the chance to say “yep, ChatGPT is basically right, but there’s some nuance about X, Y, and Z…”
Expecting people to stop asking casual questions to LLMs is definitely a lost cause. This tech isn't going anywhere, no matter how much you dislike it.
> expecting anyone to actually try anymore is a lost cause
Well now you're putting words in my mouth.
If you make it against the rules to cite AI in your replies then you end up with people masking their AI usage, and you'll never again be able to encourage them to do the legwork themselves.
But I'm not interested in the AI's point of view. I can get that myself.
I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experience in the data it ingested. The things that are unique will not surface, because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
Did you agree with it before the AI wrote it though (in which case, what was the point of involving the AI)?
If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.
Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details, sometimes showing new ways.
I find the second paragraph contradictory: either you fear that I would agree with random stuff that the AI writes, or you believe that the sycophantic AI is writing what I already believe. I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
> Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details
> I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
Because the AI will happily argue either side of a debate, in both cases the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based one will merely be longer.
Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?
You could instead ask Kimi K2 to demolish your point, and you may have to hold it back from insulting your mom in the P.S.
Generally, if your point holds up to polishing under Kimi pressure, by all means post it on HN, I'd say.
Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.
Try this: ask an LLM to read the comment of the person you're replying to, and ask it to steelman their arguments. Then consider whether your point is still defensible, or what kinds of sources or data you'd need to bolster it.
> why would you prefer my writing over an LLM-generated one?
Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.
"I asked an $LLM and it said" is very different than "in my opinion".
Your opinion may be supported by any sources you want, as long as it's a genuine opinion (yours), and presumably something you can defend.
If I wanted to consult an AI, I'd consult an AI. "I consulted an AI and pasted in its answer" is worse than worthless. "I consulted an AI and carefully checked the result" might have value.
The cost of operating television studios and paying related staff, including on-air talent, is probably significant. I can easily see major news networks turning to AI-generated newsreaders. TikTok is already full of AI voiceovers; seems like a short leap, to me.
I don't quite follow this point. Master Chief is recognizable. So is Lara Croft. So is Darth Vader's voice. Networks could easily develop virtual personalities with distinctive, bankable, appealing characteristics.
They wouldn't have off-air scandals, or require insurance, pensions, teams of wardrobe and makeup artists, or security details; they wouldn't need to travel. And that is just the on-air talent. You can replace thousands of TV studios all over the world with a handful of workstations and compute power.
And why haven't they? Master Chief has been around since 2001, Lara Croft since 1996 and Darth Vader since 1977. The technology has been around for ages, and as far as I know, no networks have opted for virtual anchors.
And just where are you pulling the data that on-air personalities are too expensive?
I don't have a good answer for why they haven't already. I have wondered about the possibility of doing this for 10 years or more.
"The data that on-air personalities are too expensive?" It doesn't seem to me , for the purposes of this conversation, that identification of a cost center required a quantitative analysis. The cost of human talent is non-zero, presumably large enough to merit scrutiny, and unpredictable; that is sufficient, to my way of thinking. So is the cost of the equipment and infrastructure to capture and transmit video image of that human talent, and the humans who maintain and operate that infrastructure.
We've seen several decades of human cost-reduction initiatives, across multiple industries and fields of endeavor, so I'm taking that as evidence that if there is a cost that can be reduced or removed, someone somewhere is looking at doing so. Everything from assembly-line automation, to switching to email over inter-office memos and mailrooms, to the abandonment of fixed-benefit pensions, to self-service kiosks in fast-food restaurants, demonstrates that costs will be cut where they can be cut.
I think there's a good chance people would watch an entirely generated character read the news, so long as they find the presentation reliable according to their world view.
Tucker Carlson or Wolf Blitzer or Lester Holt might as well be cartoon characters to me. There's practically zero chance I'll ever meet them in person, much less have any kind of real human connection with them. What one cares about is whether the overall source seems reliable and what kind of information (or disinformation) their orgs are pushing to people. Having them be actual meatbags is a liability: they'll pop too much Ambien one night and say some pretty terrible things on social media, versus only ever being a highly curated output of the organization. Unless they pull a Tay.
I think that's the biggest issue they will face. For example, a company that uses avatars and emojis just looks cheap, because it is cheap to do.
Are you going to pay for cheap-looking TV, especially when you know it's shit?
But the more important thing to remember is that news isn't expensive because of the newsreaders; it's expensive because it costs a lot to operate a news network. If your news anchors are costing millions, you have a chat show, not a news programme.
The way I see it, it doesn't matter whether I'm willing to pay for shit content/presentation or not. This discussion is not about what is good for customers, or for news consumers in general. It is about what is good for publicly-traded content providers' bottom lines. My opinions as a consumer of video-based news do not matter. They're going to give me what they want, regardless of what I think about it, just as they have done for the past 50 years.
It is no different than charging me for a channel package full of content I don't watch, cancelling my favorite shows, flooding their channels with unscripted reality garbage, or using "stunning" and "so-and-so just did such-and-such" on nominally serious news web sites. If I don't like it I can choose not to participate, but if I do choose to participate, I agree to accept whatever is offered to me; my opinion was neither requested nor required. So if the top three linear TV news providers choose to go with AI-based newsreaders that people initially don't like... so what?
And 25 years later, a significant portion of the issues in that whitepaper remain unresolved. They were still shitting on people like Jeffrey Snover who were making attempts to provide more scalable management technologies. Such a clown show.
This may be an important clue for something that happened recently in our environment. We configured a bunch of database service SPNs and immediately all Kerberos auth failed. Rolled it back and talked to our support provider. They said that the expected behavior was to default to AES but that for some reason our environment wasn’t honoring that. We ended up having to manually enable AES support on each service account, which is a minor pain in the ass, and since no one in the IAM team was involved in the original domain setup, no one could explain why this happened or whether there was a manual RC4/DES config lurking out there in the shadows.
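For anyone who hits the same wall: the per-account fix comes down to setting the msDS-SupportedEncryptionTypes attribute on each service account (0x8 = AES128, 0x10 = AES256). A rough sketch of one way to do it in bulk, in Python with ldap3; the hostname, credentials, and DNs here are hypothetical placeholders, not what we actually ran:

    from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

    AES128 = 0x08  # AES128-CTS-HMAC-SHA1-96 flag in msDS-SupportedEncryptionTypes
    AES256 = 0x10  # AES256-CTS-HMAC-SHA1-96 flag

    server = Server("dc01.example.com")
    conn = Connection(server, user="EXAMPLE\\iam-admin", password="...",
                      authentication=NTLM, auto_bind=True)

    # Hypothetical list of the database service accounts that received SPNs.
    service_accounts = [
        "CN=svc-sql01,OU=Service Accounts,DC=example,DC=com",
        "CN=svc-sql02,OU=Service Accounts,DC=example,DC=com",
    ]

    for dn in service_accounts:
        # Advertise only AES128 + AES256 (0x18) for this account.
        conn.modify(dn, {"msDS-SupportedEncryptionTypes": [(MODIFY_REPLACE, [AES128 | AES256])]})
        print(dn, conn.result["description"])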
Logitech specifically. They haven't made a wired trackball in some time, as far as I can tell. Their only options are AA battery or rechargeable. I'd rather not have to futz with batteries at all on my desktop computer.
This is the third or fourth time you’ve spammed this exact comment in response to people’s perfectly legitimate questions. What is this clown-show bullshit?
Not that one more opinion in this endless war of opinion will matter, nor will it convince anyone, but I genuinely don't see the issue with "tabs for indentation, spaces for alignment."
"I nobly refused these golden handcuffs so that well down the road I could continue huffing the farts of a company that is a shell of its former self. Don't let your eyes deceive you - they're still a powerhouse. Buy my book."
You are not a dinosaur. I would argue that the great majority of engineers at our org do it the 'old fashioned' way.
My own experience with LLM-based coding has been wasted hours of reading incorrect code for junior-dev-grade tasks, despite multiple rounds of "this is syntactically incorrect, you cannot do this, please re-evaluate based on this information" / "Yes, you are right, I have re-evaluated it based on your feedback", only for it to do the same thing again. My time would have been better spent either 1) doing this largely boilerplate task myself, or 2) assigning and mentoring a junior dev to do it, as they would only have required maybe one round of iteration.
Based on my experience with other abstraction technologies like ORMs, I look forward to my systems being absolutely flooded with nonperformant garbage merged by people who don't understand either what they are doing, or what they are asking to be done.
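To be concrete about the ORM comparison, the failure mode I have in mind is the classic N+1 query pattern: code that reads fine at the ORM level but quietly issues a query per row. A self-contained sketch using SQLAlchemy and in-memory SQLite (the Author/Book models are made up for illustration):

    from sqlalchemy import ForeignKey, create_engine, select
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                                relationship, selectinload)

    class Base(DeclarativeBase):
        pass

    class Author(Base):
        __tablename__ = "author"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]

    class Book(Base):
        __tablename__ = "book"
        id: Mapped[int] = mapped_column(primary_key=True)
        title: Mapped[str]
        author_id: Mapped[int] = mapped_column(ForeignKey("author.id"))
        author: Mapped[Author] = relationship()

    engine = create_engine("sqlite://", echo=True)  # echo=True logs every SQL statement
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all(Book(title=f"Book {i}", author=Author(name=f"Author {i}"))
                        for i in range(100))
        session.commit()

        # N+1: one SELECT for the books, then one lazy load per book for its author.
        for book in session.scalars(select(Book)):
            _ = book.author.name

        # Two queries total: the relationship is loaded up front.
        for book in session.scalars(select(Book).options(selectinload(Book.author))):
            _ = book.author.name

Merged into a hot path by someone who never looks at the emitted SQL, the first loop is exactly the kind of nonperformant garbage I mean.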
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."