LatencyKills's comments | Hacker News

I worked on the Xcode team for years and know the lengths Apple goes to make this stuff difficult to figure out.

I just wanted to say that you’ve done an excellent job and am looking forward to the 3rd installment.


>I worked on the Xcode team for years

Why did you guys remove the ability to detach the console and move it to another window?


I've always felt a little odd saying, "Back in my day we had to understand the CPU, registers, etc." It's a true statement, but it doesn't help in any way. Is that stuff still worth knowing, IMHO? Yes. Can you create incredibly useful code without that knowledge today? Absolutely.

There are some people who still know these things, and are able to use LLMs far more effectively than those who do not.

I've seen the following prediction from a few people and am starting to agree with it: software development (and possibly most knowledge work) will become like farming. A relatively small number of people will do with large machines what previously took armies of people. There will always be some people exploring the cutting edge of thought and feeding their insights into the machine, just as I imagine there are biochemists and soil biology experts who produce knowledge to inform decisions made by the people running large farming operations.

I imagine this will lead to profound shifts in the world that we can hardly predict. If we don't blow ourselves up, perhaps space exploration and colonization will become possible.


I think it's more likely at this point that we turn the depleting quantities of exploitable resources on this planet into more and more data centers and squander any remaining opportunity at space exploration/colonization at scale.

If this happens to software development, this will happen to most mental jobs.

> Can you create incredibly useful code without that knowledge today?

You could do that without that knowledge back in the day too; we've had languages higher level than assembler forever.

It's just that the range of knowledge needed to get the most out of the machine is far smaller now. Before, you had to know how to write a ton of optimizations by hand; nowadays you have to know how to write your code so the compiler has an easy job optimizing it.

Before, you had to manage memory accesses yourself; nowadays, making sure you're not jumping across memory too much and being aware of how the cache works is enough.
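The cache-awareness point above can be sketched in code. This is a minimal illustration (the names and sizes are mine, not from the thread): both functions compute the same sum over a row-major matrix, but the column-order walk strides across memory, so in a compiled language it touches a fresh cache line on nearly every access and runs dramatically slower. In Python the interpreter overhead hides the timing gap, so this only demonstrates the two access patterns.

```python
N = 512

# A row-major 2D matrix flattened into one list, the way C stores arrays:
# element (i, j) lives at index i * N + j.
matrix = [k % 7 for k in range(N * N)]

def sum_row_major(m, n):
    # Walks memory in storage order: consecutive indices, so each
    # cache line is fully consumed before it is evicted.
    return sum(m[i * n + j] for i in range(n) for j in range(n))

def sum_col_major(m, n):
    # Strides n elements per step: in a compiled language nearly
    # every access pulls in a new cache line.
    return sum(m[i * n + j] for j in range(n) for i in range(n))

# Same result either way; only the memory access pattern differs.
assert sum_row_major(matrix, N) == sum_col_major(matrix, N)
```

The takeaway is the one from the comment: you no longer hand-tune registers, but choosing the traversal order that matches the storage order is still on you.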


Or more so: machines have gotten so fast, with so much disk and memory, that people can ship slopware filled with bloatware and the UX is still almost as responsive as Windows 3.1 was.

I don't think it's odd. Sacrificing deep understanding and delegating that responsibility to others is risky. In more concrete terms, if your livelihood depends on application development, you have concrete dependencies on platforms, frameworks, compilers, operating systems, and other abstractions without which you might not be able to perform your job.

Fewer abstractions, deeper understanding, fewer dependencies on others. These concepts show up over and over and not just in software. It's about safety.


You have a .claude folder without a CLAUDE.md?

I realize my situation isn’t typical, but I’m retired and have dealt with depression most of my life.

The thing I miss most about work (yes, you really can miss work) is collaborative problem-solving. At Microsoft, we called it “teddy bear debugging”—basically, self-explaining a problem out loud to clarify your thinking. [1]

These days, when I’m stuck, I open Claude Code and “talk it through.” That back-and-forth helps me reason through technical issues and scratches a bit of that collaborative itch that helped keep my depression in check.

[1]: https://economictimes.indiatimes.com/news/international/us/w...


I've found something similar. I've been using Claude Code to build lots of things I wanted to but feared failing at or hitting an iceberg. Having seen success with that, I've started rubber-ducking it through a number of things. I changed the carburetor on my snow blower for the first time ever, with minimal pain, mainly because "asking Claude about it" meant making myself stop and think through the process, plan an approach, and put together a mise en place, rather than starting, realizing I needed a couple of tools, leaving things a mess, and not coming back due to anxiety.

Basically, it helps me avoid what they called "gumption traps" in Zen and the Art of Motorcycle Maintenance.


Yep, this has so far proven the most promising use of LLMs to me. I've read about people's Rube Goldberg machine-esque setups for getting agentic LLMs to work for them, but I find simply having a dialectic with an LLM to be more fruitful. Rubber-ducking with a duck that quacks back.

How do you prevent it from just taking the reins and writing an entire function or class for you, when all you wanted to do was talk about the code you already had?

"No coding. (Explain|Debug|Analyze|Talk through) this with me:"

"Talk with me first:" (Implying anything other than talking, like coding, would be a separate distinct step that is not to be done)

"Propose" is the best keyword imo, if it fits what you'd like.

"Propose changes you would make to (this repo|staged changes|latest commit)."

"Propose alternatives."

"Propose flaws." / "Propose flaws in my reasoning."


I keep Claude in "planning mode" (shift+tab) so it cannot touch my codebase.

"Do not write any code ..." If you are using LLMs for highly restricted work, it is rather trivial to keep them in check enough to receive useful responses.

You tell it to only talk about the code you already had.

You don't turn on editing mode.
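For Claude Code specifically, the same restriction can reportedly be made standing policy in the project's `.claude/settings.json` by denying the file-editing tools. The exact key and tool names below are my assumption and should be checked against the current Claude Code documentation; a minimal sketch:

```json
{
  "permissions": {
    "deny": ["Edit", "Write", "MultiEdit", "NotebookEdit"]
  }
}
```

With something like this in place, Claude can still read and discuss the codebase, but attempts to modify files are blocked, so "talk it through" sessions stay talk-only without repeating "no coding" in every prompt.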

But talking with an LLM isn't teddy bear/rubber duck debugging, because your LLM has high odds of outputting good feedback. Teddy bear/rubber duck debugging involves the other party not knowing anything about your problem, let alone even being capable of giving a response (hence why it's not go-ask-a-coworker/teacher/professional debugging). It's about getting yourself to refocus the problem, state what you already know, and allow your brain to organize the facts.

I’m not trying to be rude but it seems like you’re conflating collaborative problem solving with rubber duck debugging. You haven’t actually collaborated with a rubber duck when you’re finished rubber duck debugging.


> But talking with an Llm isn’t teddy bear/rubber duck debugging because your llm has some high odds of outputting good feedback.

That isn't how we did it at either Microsoft or Apple. There, we defined it as walking another engineer through a problem. That person may or may not have been an expert in whatever I was working on at the time. You truly aren't suggesting that rubber duck debugging only works when you don't receive feedback?

I use Claude to bounce ideas around just like I did with my human teammates.

I think you're being pedantic, but it doesn't matter to me: in the end, I work much better when I can talk through a problem; Claude is a good stand-in when I don't have access to another human.


No, I'm suggesting that RDD is not a mechanism to reason through and solve your problem, but rather a mechanism to get your mind into the problem-solving state. It asks you to physically repeat what is in your brain, the same as writing it out on a marker board or in handwritten notes. Rubber duck debugging is about debugging you, not debugging the code. That's why it doesn't matter who you talk to about the problem in rubber duck debugging.

The part where your colleague or LLM returns more information or advice is past the rubber-ducking stage. Depending on the difficulty of the problem, you may not need a colleague to lead you to water. And if rubber duck debugging can be done solo, what do you actually get from the process that doesn't come from your coworker/code assistant?


> involves the other party not knowing anything about your problem, let alone even capable of giving a response

I prefer grabbing a colleague who is technical but does not work on the particular project. It seems to force me to organize the info in my head more than an actual rubber duck does.


Sure, and no one likes being seen ranting at no one, either. Rubber ducks can be pencils, dogs, hamsters, teddy bears, even friendly Carroll in accounting.

Rubber duck debugging is a null LLM: offloading to your own gray matter for the other half of the interlocution. A fancy way of recruiting your other brain matter into the problem-solving process. Perhaps by offloading to a non-null LLM, there is decreased activation/recruitment of brain regions in the problem-solving process, leading to network pruning over time. Particularly if you take the position that the "tool" isn't something worthy of having its inner state reacted to and modeled via mirror networks.

But what do I know man, I'm just a duck on the Internet. On the Internet, no one knows you're a duck.

Quack.


But the point is that as soon as you get feedback and a response, you're back in the traditional reasoning/puzzle-solving/teaching/learning paradigm, not the rubber duck debugging paradigm. RDD is clearly defined as different. The GP is just choosing to remove the elements that make it unique while keeping the metaphorical branding. Even a bot responding is not RDD. Rubber ducks can't respond or understand.

You don’t send kids to Rubber Duck Debugging Class (you send them to School) because you can’t see the teacher in the classroom while you’re at work.

You're debugging yourself, not the actual problem, per se.


RDD is using an external object, the rubber duck, as an anchor onto which to project sub/unconscious processing. Think about it: your brain doesn't know the difference between imagining an interaction and actually having it. The passive duck, even as little more than an anchor for projecting mental faculties not otherwise employable without destabilizing consciousness, still gets you into an "effective collaboratory mode" with yourself. Have you ever tried and succeeded at RDD without the external focus? 'Tis a pain in the ass. The addition of a model that isn't your brain spitting out output just gets registered as another message from somebody else's brain matter, not yours.

I am increasingly looking at generative coding with an LLM as having a substantive rewriting effect on expectations around how computers are capable of behaving. This isn't just another transform hiding a pile of abstractions that our brains' mirroring systems can accurately feed forward, giving us a sense of interoception and the ability to kinesthetically navigate and "feel" what we're doing with the machine, or how the machine should behave given our inputs. It just does. It breaks the rules. We're helpless to know or simulate what's next. This forces neurons to start to rewire.

Rewiring and severing of old connections in times of great change, at least to me, comes with feelings indistinguishable from acute depression. Please don't ask how I know. It should, in theory, settle down given enough time to learn the quirks of a specific snapshot of a model, and probably flare up again after substantial changes to the weights. Our brains, like it or not, are built to use anything outside themselves to "pour" subconscious faculties into.

Just started messing with these things, and at least to me this seems to resonate.


Or "rubberducking" as it's called now: https://en.wikipedia.org/wiki/Rubber_duck_debugging

MSGA: Make Software Great Again? /s

I'm currently building a tool for macOS that allows users to "see through" app windows without constantly cmd+tabbing. https://imgur.com/a/gsKFJ7f

A year ago I would have gladly shared it on HN, but now the comments will be nothing but "probably made with AI!" or "I saw an em dash in your comments so it must be AI!"

It isn't enough to claim, "Not made with AI" any longer because no one believes you. I have 30+ years of experience... how do I even prove I wrote my code at this point?


It’s a problem. But maybe it doesn’t matter. I wouldn’t let this prevent you from sharing your project. There will always be stupid commenters. Just put it out there, tell the truth about it, and ignore them.

The Python docs are bare bones. If you want more in-depth documentation for these features, check out the Swift docs:

https://developer.apple.com/documentation/foundationmodels


I was an engineer on the Visual Studio team. Internally, the Notepad project existed to provide a minimal, shippable product that we could use as a testbed. We used it to validate everything from compiler changes to kernel32 loader behavior on beta versions of Windows. If Notepad didn’t run, your feature didn’t work.

This doesn't seem like a good idea.


Well, you see, they got rid of all the QA so those tests stopped adding value ;)

they have AI, they don't need QA or tests, come on man, aren't you a 9999999x engineer?

On a serious note... with AI, I feel I'm getting about a 5x improvement on average in terms of productivity.

This technology exists. It isn’t just a toy. I think it is amazing to see people use it for interesting things even if it isn’t groundbreaking.

I’ve been an engineer for almost 40 years and love seeing what Claude Code can do.

Like it or not, young people will not know a world where this technology doesn’t exist. It is just part of their toolset now.


> I’ve been an engineer for almost 40 years and love seeing what Claude Code can do.

You would say that, because otherwise you'd be afraid of being seen as "too old for this job" and hence risk getting kicked out of it all, meaning no future employment opportunities. I know that feeling, because I myself have been doing this programming job for 20+ years (so not a young one by any means), but let's just cut the crap about it all and tell it how it is.


Really? That's a lot of presumption and reductionism about LLM enthusiasts.

People of varied ages already leverage LLMs on a daily basis. And LLMs will only get better.

Yesterday, Opus did work for me that would have taken me weeks. And the result was verified with a comprehensive suite of unit tests plus smoke tests by myself. The code looks exactly as the rest of the code in the 10y+ old, hand-written, enterprise project, no slop.

And you actually should be afraid of being left behind in dev-related fields if you don't use LLMs. In most areas, in fact.

Once the market corrects for LLM-assisted production, expectations will rise. So right now there is a small window to leverage LLMs as a time-saving advantage, before it becomes the norm and everyone is forced to use them because expectations will reflect that.


> You would say that because otherwise you'd be afraid as being seen as "too old for this job"

Um... I am still an active reverse engineer of both ring-0 and ring-3 software on both macOS and Windows (I worked on both the VS and Xcode teams). I'm developing a new tool for macOS that allows users to "see behind" active windows without the constant need for cmd/alt+tabbing. My age has zero bearing on my skill set or my ability to understand technology. https://imgur.com/a/seymour-r9whXO5

> let's just cut the crap about it all and let's tell it how it is

The reality is, as I said, that this technology exists and it isn't going anywhere. Young people are going to use it as a tool just like we did when GUI operating systems first became prevalent.

I don't even remotely buy into the AI hype but I'm not going put the blinders on either. There is utility in this technology.


I'm pretty young and hate this technology with a passion. I didn't spend $100k on an education and a decade studying to have my job reduced to being a project manager for a bot, or to playing with a prompt slot machine all day. This crap is reducing the thing I genuinely love doing more than anything, writing code, into nothing. Reviewing code that lacks any sweat, any intention. I really can't stand this garbage.

I can't stand you old heads. I'm very happy for you that you got to stash away 40 years of SWE salaries. It's just ladder-kicking behavior, to be honest. Typical boomer: you got your nut and don't care what happens after.

25% of new college grads in STEM are unemployed, and a bunch of companies (controlled by people in your age group) have laid off 400k Americans over the last 16 months while equities and profits are at all-time highs.

The replies : ItS NoT Ai, ItS cUz FrEe MoNeY fRoM CoViD HaS DrIeD uP.


Software jobs have been steadily outpacing other white collar jobs for the past year, but it's unlikely you will find one unless you work on your attitude and your ability to communicate respectfully.

The world is changing and instead of embracing that change (ensuring that you will be the next leader) you are actively fighting against technology?

The world was once entirely analog; generations of analog engineers had to throw away their knowledge and start over during the digital transition. It wasn't always pretty but they did it.

If you can't embrace technological change you might have wasted $100k.


So to summarize, your objections are almost completely unrelated to the technology, and are mostly about capitalism.

I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.

> I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.

I am not so sure:

This decision tells us something important about the priorities of the string-pullers behind the curtain:

They clearly want(ed) to monetize what is there, with the risk that only smaller improvements for the AI models will happen from OpenAI, and thus OpenAI might get outcompeted by competitors who are capable of building and running a much better model.

If this is the priority (no matter whether you like or despise Sam Altman), you will likely prefer Sam Altman over Ilya Sutskever.

If, on the other hand, a fast monetization is less important than making further huge leaps towards much better AI models, you will, of course, strongly prefer Ilya Sutskever over Sam Altman.

Thus, I wouldn't say that choosing Sam Altman over Ilya Sutskever is a sign that OpenAI is in trouble, but a very strong sign where the string-pullers behind the curtain want OpenAI to be. Both Sam Altman and Ilya Sutskever are just marionettes for these string pullers. When they have served their role, they get put back into the box.


Yes, I agree. Altman was the rational choice if you realise that eventually the huge R&D bill will need to stop, at least for a moderate period (<5 years).

You want to ride that out before capitalising on the eventually cheaper training costs once the rug has been pulled.

Altman has already succeeded here, as it seems inference for API and chat is profitable but offset by massive R&D costs.


All your competitors benefit from your training costs. They’ll lose on inference pretty quickly if they stop training new models, no?

I don't think they will lose on inference because that assumes that compute becomes cheap for all evenly.

Their spending today has secured their compute for the near future.

If every GPU, stick of RAM, and SSD is already paid for, who can afford to sell cheaper inference?

Z.ai is trying to deal with this by using domestic silicon (basically Huawei, not Nvidia). And with their state subsidy they will do well.

Anthropic has a 50bn USD plan to build data centres for 2026.

OpenAI similarly has secured extraordinary amounts of other people's money for data centres.

All of these will be sunk costs and "other people's money" while money is easy to get hold of, but they will be a moat when R&D ends.

Once all the models become basically the same, who you go with will be who you're already with (mostly OpenAI), and who you end up with (say, people who use Gemini because they have a Google 2TB account).

Some upstart can put itself into the ground borrowing compute and selling at a loss, but the moment it catches up and needs to raise prices, everyone will simply leave.

ChatGPT is what is most likely to remain a sustained frontier model. Maybe Claude jumps further ahead a few times; Gemini will have its moments. But it'll all be a wash, with ChatGPT puttering along as rarely the best but never the worst.


> Once all the models become basically the same who you go with will be who you're already with (mostly OpenAI)

Imho, people are undervaluing the last mile connection to the customer.

The last Western megacorp to bootstrap its way there was Facebook, and control over cloud identity and data was much less centralized circa-late-00s.

The real clock OpenAI is running against is creating a durable consumer last-mile connection (killer app, device, etc).

"Easy to use chat app / coding tool" doesn't even begin to approach the durability of Microsoft, Apple, Google, or Meta. And without it, OpenAI risks any one of them pulling an Apple Maps at any time.

Unless it continually plows money into R&D to maintain the lead and doesn't pull an Intel and miss a beat.

Maybe they do, but that's a lot of coin flips that need to continually come up heads, in perpetuity.

