
Also, one pet peeve of mine is when there are emojis in semi-serious writing. ChatGPT really made the practice of putting emojis everywhere explode.

Tell it to "Output documentation in the style of MDN" and the result looks way more professional.

At least for a malicious user embedding a prompt injection with their own API key, I could have sworn there is a way to scan documents for unusually high entropy, which should be able to flag it.
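For what it's worth, a minimal sketch of the kind of entropy scan I mean, using plain Shannon entropy over fixed windows. The window size and threshold here are made-up illustrations, not any real product's heuristic:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the text's empirical character distribution."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_suspicious_spans(doc: str, window: int = 200, threshold: float = 5.0):
    """Return (offset, entropy) pairs for windows above the threshold.

    Both `window` and `threshold` are arbitrary illustrative values.
    """
    flagged = []
    for start in range(0, max(len(doc) - window + 1, 1), window):
        chunk = doc[start:start + window]
        h = shannon_entropy(chunk)
        if h > threshold:
            flagged.append((start, h))
    return flagged
```

Ordinary English prose sits around 4 bits per character, so a threshold like this would only catch encoded or obfuscated payloads, not an injection written as plain natural-language text.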

Yes, but they definitely have a vested interest in scaring people into buying their product to protect against an attack. For instance, this attack requires 1) the victim to allow Claude to access a folder with confidential information (which they explicitly tell you not to do), and 2) the attacker to convince the victim to upload a random docx as a skills file, with the prompt injection hidden as an invisible line. However, the prompt injection text becomes visible to the user when it is output to the chat in markdown. Also, the attacker has to use their own API key to exfiltrate the data, which would identify them. In addition, it only works on an old version of Haiku. I guess prompt armour needs the sales, though.

I feel as though the majority of programmers do the same thing; they apply well-known solutions to business problems. I agree that LLMs are not yet producing programs like ffmpeg, mpv, or BLAS, but only a small number of programmers work on projects like that anyway.

I am going to sound cynical, but I strongly believe that everyone's view on AI is contaminated by ulterior motives, and a lot of people are not truthful with themselves about their positions on AI. For instance, I feel as though topics such as copyright, environmentalism, water use, etc., that have been thrust into the limelight are being pushed by people who didn't care about these issues 5-10 years ago, but decided to start clutching their pearls about it now. Particularly copyright; everyone was so okay with pirating movies, apps, music when it benefited them, but now they are the vanguard in enforcing other people's copyright on data they don’t even own.

> everyone was so okay with pirating movies, apps, music when it benefited them, but now they are the vanguard in enforcing other people's copyright on data they don’t even own.

You do not mention the perception of asymmetric legal and market power. Many people think that file-sharing Disney movies is OK, but Google scraping the art of independent artists to create AI is not OK. That is not the same dynamic at all as not caring about copyright and then suddenly caring about copyright.


Suddenly people change their tune; what gives? All we are talking about is the forced wealth transfer of trillions of dollars to the richest megacorps on the planet.

Most people didn't choose to be part of your moon shot death cult. Only the people at the tippy top of the pyramid get golden parachutes off Musk's exploding rocket.

They never changed their position, corpos shouldn't get any money! That's always been the position. They are inherently unethical meat grinders.


Yeah, he just casually said he had an Elo that high, as if that doesn't blow 90% of people out of the water.

I mean, I'm pretty sure it would be trivial to tell it to move files to the trash instead of deleting them. Honestly, I thought that on Windows and Mac, the default is to move files to the trash unless you explicitly say to permanently delete them.

Yes, it is (relatively, [1]) trivial. However, even though it is the shell default (Finder, Windows Explorer, whatever Linux file manager), it is not the operating system default. If you call unlink or DeleteFile or use a utility that does (like rm), the file isn’t going to trash.

[1]: https://github.com/arsenetar/send2trash (random find, not mine)
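To make the distinction concrete, here is a minimal stdlib-only sketch; the send2trash call is shown commented out since it is a third-party package:

```python
import os
import tempfile

# Create a scratch file to delete.
fd, path = tempfile.mkstemp()
os.close(fd)

# os.remove() calls unlink(2) / DeleteFile: the file is gone for good.
# It never passes through the Trash / Recycle Bin.
os.remove(path)
assert not os.path.exists(path)

# Trash semantics are strictly opt-in, e.g. via the send2trash
# package linked above:
#   from send2trash import send2trash
#   send2trash(path)  # moves the file to the platform trash instead
```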


Because it is the default. Heck, it is the default for most DEs and many programs on Linux, too.

Huh? Unless you are talking about DMCA, I haven't heard about that at all. Most AI companies go to great lengths to prevent exfiltration of copyrighted material.

> It is clear much of the people below don't even understand basic terminology. Something being a transformer doesn't make it an LLM (vision transformers, anyone) and if you aren't training on language (e.g. AlphaFold, or Aristotle on LEAN stuff), it isn't a "language" model.

I think it's because it comes off as though you are saying that we should move off of GenAI, and a lot of people write LLM when they mean GenAI.


Ugh, you're right. That was not intended. Conflating LLMs with GenAI is a serious error, but it is obviously a far more common one than I realized. I clearly should have said "move beyond solely LLMs" or "move beyond LLMs in isolation"; perhaps that would have avoided the confusion.

This is a really hopeful result for GenAI (fitting deep models tuned by gradient descent on large amounts of data), and IMO this is possible because of specific domain knowledge and approaches that aren't there in the usual LLM approaches.


This is what I got from Tao's post as well.
