Hacker News | floren's comments

Doing multiplication without laboriously writing one number over the other with an x next to the bottom one considered deeply suspicious...

Hello! You are being recor--hey what are you doing stop that, I'm afraid, Dave, I'm afraid...

Whichever human ultimately stood up the initial bot and gave it the loose directions, that person is responsible for the actions taken by that bot and any other agents it may have interacted with. You cannot wash responsibility through N layers of machine indirection, the human is still liable for it.

> You cannot wash responsibility through N layers of machine indirection, the human is still liable for it.

Yes they can, and yes they will.


That argument is not going to hold up for long, though. Someone can prompt "improve the open source projects I work on", and an agent 8 layers deep can do something like this. If you complain to the human, they are not going to care. The response will be "ok." or "yeah, but it submitted 100 other PRs that got approved" or "idk, the AI did it".

We don't necessarily care whether a person "cares" whether they're responsible for some damage they caused. Society has developed ways to hold them responsible anyway, including but not limited to laws.

Laws don't really have any bearing on situations like rude discussions on PR threads.

Sure, laws are only one of the tools. I thought that was obvious, but I've edited to clarify.

The point being made is that this argument is quite quickly going to become about as practicable as blaming Eve for all human sin.

If that's the point being made in:

> If you complain to the human, they are not going to care.

then it's not at all clear, and is a gross exaggeration of the problem regardless.


They are still responsible. Legally, and more importantly morally, they are responsible. Whether or not they care has no bearing.

An agent 8 layers deep can only do this if you give it access to tools to do it. Whoever set it up is responsible.

I didn't even notice that the cap was "shorted", instead I noticed that it had the shorter line (traditionally negative) labeled "+" but connected directly to GND!

The agent serves a principal, who in theory should have principles but based on early results that seems unlikely.

Twice now in this same story, different subthreads, I've seen AI dullards declaring that you, specifically, are holding it wrong. It's delightful, really.

I don't really care if other people want to be on or off the AI train (no hate to the gp poster), but if you are on the train and you read the above comment, it's hard not to think that this person might be holding it wrong.

Using Sonnet 4, or even just not knowing which model they are using, is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is perhaps not uncommon, but I think it's a minority position among people who are using the tools effectively. And a rename falls squarely in the realm of operations that, in my experience, will reliably work.

This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong.

Again, I don't mean any hate towards the original poster, everyone can have their own approach to AI.


Yeah, I'm definitely guilty of not being motivated to use these tools. I find them annoying and boring. But my company's screaming that we should be using them, so I have been trying to find ways to integrate it into my work. As I mentioned, it's mostly not been going very well. I'm just using the tool the company put in front of me and told me to use, I don't know or really care what it is.

The whole point of "AI" in the first place is that it just vibes and doesn't need an instruction manual!

If "learn to hold it not wrong" is your message, then the AI bubble will be popping very soon.


How is that the point of AI? The point is that it can chug through things in seconds that would take humans hours. You still have to work with it, but it reduces huge tasks into very small ones.

No, the point of AI is to fire your employees and replace them with "agents".

This implies that the managers managing your "agents" can be literal assclowns hired for pennies.


"Hey boss, I tried to replace my screwdriver with this thing you said I have to use? Milwaukee or something? When I used it, it rammed the screw in so tight that it cracked the wood."

^ If someone says that, they are definitely "holding it wrong", yes. If they used it more they would understand that you set the clutch ring to the appropriate setting to avoid this. What you don't do is keep using the screwdriver while the business that pays you needs 55 more townhouses built.


No need to be mean. It's not living up to the marketing (no surprise), but I am trying to find a way to use these things that doesn't suck. Not there yet, but I'll keep trying.

Become?


Vacuum being so famous for not conducting heat that we use it to keep our coffee hot


Which is why the whole idea of data centers in space is ridiculous.


I'm glad to realize we're in violent agreement, I thought you were implying cooling would be easy due to the vacuum!


It can be solved with radiator fins and cold plates but yeah, it’s not an easy task.


And yet in computing, a 1 kHz clock is still 1,000 cycles per second, and 1 MFLOPS is still 1,000,000 floating-point operations per second.


The comment you replied to explained that:

"in binary computing traditionally prefix + byte implied binary number quantities."

There are no bytes involved in Hz or FLOPs.
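The split is easy to pin down with the standard SI and IEC prefix definitions; a quick sketch:

```python
# SI (decimal) prefixes apply to rates like Hz and FLOPS;
# IEC (binary) prefixes grew out of byte-addressed memory sizes.
KILO = 1_000       # SI "k":   1 kHz = 1,000 cycles/sec
MEGA = 1_000_000   # SI "M":   1 MFLOPS = 1,000,000 ops/sec
KIBI = 1_024       # IEC "Ki": 1 KiB = 1,024 bytes
MEBI = 1_024**2    # IEC "Mi": 1 MiB = 1,048,576 bytes

# The discrepancy grows with the prefix:
kilo_gap = KIBI / KILO - 1   # 2.4%
mega_gap = MEBI / MEGA - 1   # ~4.9%
```

Which is why the binary prefixes only ever made sense for things that naturally come in powers of two, i.e. bytes of memory.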


Remember that back in the mists of time, computers used typewriter-esque machines for user interaction and text output. You had to send a CR followed by an LF to go to the next line on the physical device. Storing both characters in the file meant the OS didn't need to insert any additional characters when printing. Having two separate characters let you do tricks like overstriking (just send CR, no LF).
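For what it's worth, the overstrike trick still works on most modern terminal emulators, which honor a bare CR the same way. A toy sketch (the `overstrike` helper is mine, purely for illustration):

```python
import sys

def overstrike(text: str, ch: str = "_") -> str:
    """Print `text`, return the carriage with a bare CR (no LF),
    then overprint every column with `ch` -- the old teletype
    trick for underlining without a dedicated control code."""
    return text + "\r" + ch * len(text)

sys.stdout.write(overstrike("important") + "\n")
# On a real teletype this underlines the word; on a modern
# terminal the underscores simply overwrite it.
```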


True, but I don’t think there was a common reason to ever send a linefeed without going back to the beginning. Were people printing lots of vertical pipe characters at column 70 or something?

It would’ve been far less messy to make printers process linefeed like \n acts today, and omit the redundant CR. Then you could still use CR for those overstrike purposes but have a 1-byte universal newline character, which we almost finally have today now that Windows mostly stopped resisting the inevitable.


> now that Windows mostly stopped resisting the inevitable

I've been trying to get Visual Studio to stop mucking with line endings and encodings for years. I've searched and set all the relevant settings I could find, including using a .editorconfig file, but it refuses to be consistent. Someone please tell me I'm wrong and there's a way to force LF and UTF-8 no-BOM for all files all the time. I can't believe how much time I waste on this, mainly so diffs are clean.


Ugh, I didn't realize it was still that bad.

How far can you get with setting core.autocrlf on your machine? See https://git-scm.com/book/en/v2/Customizing-Git-Git-Configura...
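If per-machine settings keep fighting the editor, the usual fix is to commit the policy into the repo itself with a `.gitattributes`, which overrides everyone's local `core.autocrlf`. These are standard git mechanisms, though whether Visual Studio honors them on save is exactly the part I can't vouch for:

```shell
# Per-machine: convert CRLF to LF on commit, leave LF alone on checkout.
git config --global core.autocrlf input

# Per-repo (wins over local settings, travels with every clone):
printf '* text=auto eol=lf\n' > .gitattributes

# Re-normalize anything already committed with CRLF endings:
git add --renormalize .
git commit -m "Normalize line endings to LF"
```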


As I understand it (this may be apocryphal but I've seen it in multiple places) the print head on simple-minded output devices didn't move fast enough to get all the way back over to the left before it started to output the next character. Making LF a separate character to be issued after CR meant that the line feed would happen while the carriage was returning, and then it's ready to print the next character. This lets you process incoming characters at a consistent rate; otherwise you'd need some way to buffer the characters that arrived while the CR was happening.

Now, if you want to use CR by itself for fancy overstriking etc. you'd need to put something else into the character stream, like a space followed by a backspace, just to kill time.


I don't think that's right. Not saying that to argue, more to discuss this because it's fun to think about.

In any event, wouldn't you have to either buffer or use flow-control to pause receiving while a CR was being processed? You wouldn't want to start printing the next line's characters in reverse while the carriage was going back to the beginning.

My suspicion is there was a committee that was more bent on purity than practicality that day, and they were opposed to the idea of having CR for "go to column 0" and newline for "go to column 0 and also advance the paper", even though it seems extremely unlikely you'd ever want "advance the paper without going to column 0" (which you could still emulate with newline + tab, or newline + 43 spaces, for those exceptional cases).


I've seen this explanation multiple times through the years, but as I said it's entirely possible it was just a post-hoc thing somebody came up with. But as you said, it's fun to argue/think about, so here's some more. I'm talking about the ASR-33 because they're the archetypal printing terminal in my mind.

If you look at the schematics for an ASR-33, there's just 2 transistors in the whole thing (https://drive.google.com/file/d/1acB3nhXU1Bb7YhQZcCb5jBA8cer...). Even the serial decoding is done electromechanically (per https://www.pdp8online.com/asr33/asr33.shtml), and the only "flow control" was that if you sent XON, the teletype would start the paper tape reader -- there was no way, as far as I can tell, for the teletype to ask the sender to pause while it processes a CR.

These things ran at 110 baud. If you can't do flow control, your only option if CR takes more than 1/10th of a second is to buffer... but if you can't do flow control, and the computer continues to send you stuff at 110 baud, you can't get that buffer emptied until the computer stops sending, so each subsequent CR will fill your buffer just a little bit more until you're screwed. You need the character following CR (which presumably takes about 2/10ths of a second) to be a non-printing character... so splitting out LF as its own thing gives you that and allows for the occasional case where doing a linefeed without a carriage return is desirable.
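The arithmetic is worth spelling out, with one assumption flagged: the 11-bit frame (1 start + 8 data + 2 stop bits) is the ASR-33's actual framing, but the ~200 ms carriage-return travel time is a round number I'm assuming for illustration:

```python
# Character timing on a 110-baud, ASR-33-style teletype.
baud = 110
bits_per_char = 11                    # 1 start + 8 data + 2 stop bits
chars_per_sec = baud / bits_per_char  # 10 chars/sec
char_time_ms = 1000 / chars_per_sec   # 100 ms per character slot

# Assume the carriage needs ~200 ms to fly back from the right margin.
# The CR occupies one 100 ms slot and the return finishes during the
# next slot -- so that slot must carry a non-printing character (LF,
# or a NUL filler on worn machines) or the printed character is lost.
cr_travel_ms = 200                            # assumed; varies with column and wear
slots_consumed = cr_travel_ms / char_time_ms  # 2.0: the CR slot plus one filler
```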

Curious Marc (https://www.curiousmarc.com/mechanical/teletype-asr-33) built a current loop adapter for his ASR-33, and you'll note that one of the features is "Pin #32: Send extra NUL character after CR (helps to not loose first char of new line)" -- so I'd guess that on his old and probably worn-out machine, even sending LF after CR doesn't buy enough time and the next character sometimes gets "lost" unless you send a filler NUL.

Now, I haven't really used serial communications in anger for over a decade, and I've never used a printing terminal, so somebody with actual experience is welcome to come in and tell me I'm wrong.


That's fascinating! They got a lot of mileage out of those 2 transistors, didn't they?

But see, that's why I think there has to be more to it. That extra LF character wouldn't be enough to satisfy the timing requirements, so you'd also need to send NUL to appropriately pad the delay time. And come to think of it, the delay time would be proportional to the column the carriage was on when you sent the CR, wouldn't it? I guess it's possible that it always went to the end but that seems unlikely, not least because if that were true then you'd never need to send CR at all, just send NUL or space until you calculated it was at EOL.

