mathgorges's comments | Hacker News

> The software has more rights and privilege than actual humans at this point.

That's been true for some time though, right? For example, if you have a community notice board in front of your store and someone pins illegal content to it, you're held to a different legal standard than if someone posts the same content to a social media platform.

I don’t think that’s right either, but this kind of “tech exceptionalism” has been baked into law for decades. AI is inheriting those privileges more than inventing new ones.


The code for Eyes Up seems to be public [0] (although there's no license, so it's presumably copyrighted).

I bet that one could refactor it into a PWA.

[0]: https://github.com/explorealways/eyes-up


I'm going to guess that if you opened a ticket, the author would release it under a permissive licence.


If only there were 2 big corporations hellbent on making PWAs and side-loading harder/worse… and perhaps that duopoly could then donate some money for the president's ballroom, and maybe they could even be found guilty of price-fixing wages, and…

EU folk, we really need a 3rd platform. Let's go.



"I am beer" is a pretty funny typo ;-)

But seriously, I wonder why this happens. My experience of working with LLMs in English and Japanese in the same session is that my prompt's language gets "normalized" early in processing. That is to say, the output I get in English isn't very different from the output I get in Japanese. I wonder if the system prompt is treated differently here.


Not suuuper relevant, but whenever I start a conversation [0] with OpenAI o3, it always responds in Japanese. (My Saved Memories do include facts about Japanese, such as that I'm learning Japanese and don't want it to use keigo, but there's nothing to indicate I actually want a non-English response.) This doesn't happen with the more conversational models (e.g. 4o), only with the reasoning one, for some unknowable reason.

[0] Just to clarify, my prompts are 1) in English and 2) totally unrelated to languages


In my experience it's less about the latest generation of LLMs being better, and more about the tooling around them for integration into a programmer's workflow being waaaay better.

The article doesn't explicitly spell it out until several paragraphs later, but I think what your quoted sentence is alluding to is that Cursor, Cline et al can be pretty revolutionary in terms of removing toil from the development process.

Need to perform a gnarly refactor that's easy to describe but difficult to implement because it's spread far and wide across the codebase? Let the LLM handle it and then check its work. Stuck in dependency hell because you updated one package due to a CVE? The LLM can (often) sort that out for you. Heck, did the IDE's refactor tool fail at renaming a function again? LLM.

I remain skeptical of LLM-based development insofar as I think the enshittification will inevitably come when the Magic Money Machine breaks down. And I don't think I would hire a programmer who needs LLM assistance in order to program. But it's hard to deny that it has made me a lot more productive. At the current price it's a no-brainer to use it.


It's great when it works, but half the time, IME, it's so stupid that it can't even use the edit/path tools properly, even when given inputs with line numbers prepended.

(I should know since I've created half-a-dozen tools for this with gptel. Cline hasn't been any better on my codebase.)


Do Cursor and co have better tools than the ones we write ourselves for lower-level interfaces like gptel? Or do they work better because they add post-processing layers that verify the state of the repo after the tool call?


Cursor is proprietary, but is known to index code for doing queries etc.

Cline is closer in spirit to GPTel, but since Cline is an actual business, it does seem to work well right off the bat. That said, I haven't found it to be "hugely better" compared to whatever you can hack together in GPTel.

Quite frankly, being able to hack the tools on the go in Elisp makes GPTel far, far better (for some of us anyway).
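For anyone curious what that hackability looks like in practice, here's a rough sketch of registering a small tool with gptel-make-tool. This is written from memory, so the exact arg-spec format may differ between gptel versions; check the README rather than copy-pasting this.

    ;; Minimal sketch of an on-the-fly gptel tool (details may vary by version).
    (gptel-make-tool
     :name "read_buffer"
     :description "Return the contents of an Emacs buffer"
     :args (list '(:name "buffer"
                   :type string
                   :description "Name of the buffer to read"))
     :category "emacs"
     :function (lambda (buffer)
                 ;; Error out loudly so the model gets feedback it can react to.
                 (unless (buffer-live-p (get-buffer buffer))
                   (error "Buffer %s is not live" buffer))
                 (with-current-buffer buffer
                   (buffer-substring-no-properties (point-min) (point-max)))))

Being able to redefine that lambda mid-session, without leaving Emacs, is exactly the kind of on-the-go hacking described above.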

(Thanks for creating GPTel BTW!)


I assume SoftTalker is referring to SWEs not being Professional Engineers.

Professional Engineer (PE) != Engineer (in many jurisdictions)

> A professional engineer is competent by virtue of his/her fundamental education and training to apply the scientific method and outlook to the analysis and solution of engineering problems.

> He/she is able to assume _personal responsibility_ for the development and application of engineering science and knowledge, notably in research, design, construction, manufacturing, superintending, managing, and in the education of the engineer.

(emphasis mine)

[0]: https://en.wikipedia.org/wiki/Engineer#Definition


Thanks for building this. Super stoked that the translation is on-device.

I'll be downloading it and giving it a try today!!


This hasn't been my experience with English/Japanese translation in Google Translate. For context, I used Google Translate for pair programming with Japanese clients 40 hours per week for about 6 months, until I ponied up for a DeepL subscription.

As long as you're expressive enough in English, and you reverse the translation direction every now and again to double-check the output, it works fine.


As I mentioned in another reply, the scenario here is translating "artistic" or "real-world" (for lack of a better term) literature accurately—whether it's a novel, a YouTuber's video, casual conversation, or blog posts/tweets with internet slang and abbreviations. In these cases, getting things 95% right isn’t enough to capture the nuances, especially when the author didn’t create the content with translation in mind (which I believe matches your experience).

Machine translation for instructional or work-related texts has been "usable" for years, way before LLMs emerged.

LLM-based translation has certainly made significant progress in these scenarios—GPT-4, for example, is fully capable IMHO. However, it's still not quite fast enough for real-time use, and the smaller models that can run offline still don't deliver the needed quality.


English -> Japanese machine translations (whether it's GT or DeepL or GPT) are fairly usable these days, in the sense that they reduce the interpretation workload to a trivial amount, especially given the typical skill set of a Japanese white-collar worker. They're not perfect, in the sense that the output is always recognizable as a translation to native speakers - but that's the case even with offline human translators, so it could be a moot point.

Anyway, the current state of affairs floats somewhere comfortably above "broken clock" and unfortunately below "Babelfish achieved", so opinions may vary.


This is fascinating to me as an ex-mainframer who now works on a niche hyperscaler. I would love to learn more!

Will you let me know some of the names in the space so that I can research more? Some cursory searching only brings up some questionably relevant press releases from IBM.


Sounds like they're talking about running IBM Wazi on Red Hat OpenShift Virtualization. As far as I know, there isn't a System z-on-a-container offering, like one you install from a Helm chart or pull from an OCI registry. If it is the IBM I know, it's completely out of reach of most homelab'ers and hobbyists.

IBM Wazi as a Service is supposed to be more affordable than the self-hosted version and the Z Development and Test Environment (ZD&T) offering. ZD&T is around $5000 USD for the cheapest personal edition, so maybe around $2500-3500 USD per year?


Look up Micro Focus Enterprise Server and Enterprise Developer. They are now owned by Rocket.


I second this and know some of the folks who work on Enterprise Server. Good people. They have a partnership of some sort with AWS, and there are a bunch of decent docs on running Enterprise Server on AWS.


My general strategy is to find a community where $domainExperts hang out and figure out how they talk about $interestingThing, then refine my search from there using progressively more professional lingo.

Here's an example of what my first query for this topic might be: https://kagi.com/search?q=site%3Areddit.com%2Fr%2Faskhistori...

