Hacker News | seg_lol's comments

That was a little nudge that proved to themselves that they can be better people.

formal > typed > untyped > unsound > vibed

This is slightly flawed. LLMs are search, but the search space is sparse, and a short question risks underspecification. The question controls the size of the encapsulated volume in that high-dimensional space. The only advantage of small prompts is computational cost; in every other way they are a downside.

Use voice prompts; restate what you want, and how you want it, from multiple vantage points. Each one is a light cone in a high-dimensional space, and your answer lies in their intersection.

Use mind altering drugs. Give yourself arbitrary artificial constraints.

Try using it in as many different ridiculous ways as you can. I get the feeling you are only trying one method.

> I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.

Relinquish control.

Also, if you have very particular ways of doing things, give it samples of before and after (your fixed output) and explain why. You can use multishot prompting to steer it toward the output you want. Have it machine-check the generated output.
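A minimal sketch of the idea above, with hypothetical before/after pairs: build a multishot prompt from example transformations, then run the cheapest possible machine check on whatever comes back (here, just "does it parse?").

```python
import ast

# Hypothetical before/after pairs showing the style you want the model to imitate.
EXAMPLES = [
    ("def f(x):return x*2", "def double(x: int) -> int:\n    return x * 2"),
    ("def g(a,b):return a+b", "def add(a: int, b: int) -> int:\n    return a + b"),
]

def build_multishot_prompt(examples, task):
    """Concatenate before/after pairs so the model infers the transformation."""
    parts = []
    for before, after in examples:
        parts.append(f"Before:\n{before}\nAfter:\n{after}\n")
    parts.append(f"Before:\n{task}\nAfter:\n")
    return "\n".join(parts)

def machine_check(generated: str) -> bool:
    """Cheapest check: does the generated output even parse as Python?"""
    try:
        ast.parse(generated)
        return True
    except SyntaxError:
        return False

prompt = build_multishot_prompt(EXAMPLES, "def h(n):return n-1")
assert machine_check("def dec(n: int) -> int:\n    return n - 1")
assert not machine_check("def broken(:")
```

In practice the check can be anything mechanical: a linter, a type checker, your test suite. The point is that the loop closes automatically instead of through your eyeballs.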

> Simple prompting just isn't able to get AI's code quality within 90%

Would simple instructions to a person work? Especially a person trained on everything in the universe? LLMs are clay; you have to mold them into something useful before you can use them.


Nada Amin isn't relevant to Hacker News; she is 15 years ahead of grifting tech bros.

Just to make sure that people don't get the wrong idea:

Nada Amin's Harvard webpage states:

I combine programming languages (PL) and artificial intelligence (AI), including large language models (LLMs), to create intelligent systems that are correct by construction. My current application domains include program and proof synthesis, and precision medicine. ... we look at combining Machine Learning and Programming Languages to enable the creation of neuro-symbolic systems that can move back and forth between learnable (neural) and interpretable (symbolic) representations of a system.

She was a recipient of the Amazon Research Award 2024 for LLM-Augmented Semi-Automated Proofs for Interactive Verification - https://www.amazon.science/research-awards/program-updates/7...


> is that only for some problems is the specification simpler than the code.

Regardless of the proof size, isn't the win that the implementation is proven to be sound, at least at the protocol level, if not the implementation level, depending on the theorem prover?
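A toy illustration of the distinction, as a Lean sketch (the definition and property here are hypothetical, not from any real protocol): the theorem proves a property of the implementation itself, so soundness holds by construction rather than by testing.

```lean
-- Hypothetical implementation: halve the input.
def impl (n : Nat) : Nat := n / 2

-- Toy spec: the result never exceeds the input.
-- The proof covers every input, unlike any finite test suite.
theorem impl_sound (n : Nat) : impl n ≤ n :=
  Nat.div_le_self n 2
```

Proving the protocol sound and proving a particular implementation faithful to the protocol are separate obligations; the prover only discharges the ones you actually state.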


So if it "works out" for OpenAI, not only do they have all the RAM, power, and water, but they also have all the jobs.

We need to figure out how AI can use housing and food to complete the resource exhaustion. You forgot to include AI's water consumption.

It’s all very Orwellian: consolidation of resources, and eventually total control over them.

One week of jail time for everyone involved.

When I was twelve, a 10-year-old kid from the next town over was hit and killed; his body was thrown over 100 feet when someone sped around a stopped bus with its flashers on.


No, to the CEO and all the managers who approved the process.

In addition to that, fine the company. Calculate the fine as the usual punishment multiplied by the number of vehicles on the road, and suddenly companies will begin focusing on safety.


Try using a VPN. My ISP was killing connections and Claude would randomly reset; using a VPN fixed the issue.
