JustAndy's comments | Hacker News

Wouldn't an LLM be able to do that kind of analysis?


An LLM all by itself? No, I really don't think so. From my personal trading history - I knew to invest in AMD when it was at $5 because I had tried their products and am intimately familiar with computers. An LLM won't be able to do that for a long time. But - it helps me.


I'm not really sure I understand your sorting example; maybe try it out in GPT and post the link to show exactly what you mean.

The refusal behaviour is something trained into the model by RLHF, and it can also be untrained, through a process called abliteration [1].

Also, LLMs are capable of using tools at this very moment [2].

[1]: https://huggingface.co/blog/mlabonne/abliteration

[2]: https://www.anthropic.com/news/analysis-tool
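
On the tool-use point, here's a minimal sketch of what that looks like with the Anthropic Python SDK - the model name and the get_stock_price tool are illustrative placeholders, not anything from the linked post:

    # Minimal tool-use sketch with the Anthropic Python SDK.
    # The get_stock_price tool is a made-up placeholder for illustration.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        tools=[{
            "name": "get_stock_price",
            "description": "Look up the current price of a stock ticker.",
            "input_schema": {
                "type": "object",
                "properties": {"ticker": {"type": "string"}},
                "required": ["ticker"],
            },
        }],
        messages=[{"role": "user", "content": "What is AMD trading at right now?"}],
    )
    # If the model decides to call the tool, it returns a tool_use block with the
    # arguments it wants; your code runs the tool and sends the result back.
    if response.stop_reason == "tool_use":
        call = next(b for b in response.content if b.type == "tool_use")
        print(call.name, call.input)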


I'm deliberately blurring refusal with having an accurate picture of its own abilities and, past that, having an accurate picture of what it can do given tools. Both are tested by

   "Can you X?"
With refusal you find out just how shallow it is, because it really will answer all sorts of questions that are "helpful" in making a nuclear bomb, but when you ask it directly it shuts up. In another sense nothing it does is "helpful", because it's not going to hunt down some people in Central Asia who have 50kg of U235 burning a hole in their pocket for you, which is what would actually "help".

I use tool-using LLMs frequently, but I find they often need help using their tools. It is a lot of fun to talk to Windsurf about the struggles it has with its tools, and it feels strangely satisfying to help it out.


And what other way would you recommend to invest your money so it doesn't get devalued by inflation?


Well, investing is always exploiting the labour of others... exactly like being a landlord... Though I suppose private means of production that you only use yourself could be reasonable.


Do you have any references/examples of this?


Nope, this is just someone spreading AI hype.


tons

Rapid7, for example, uses LLMs to analyze code and identify vulnerabilities such as SQL injection, XSS, and buffer overflows. Their platform can also identify vulnerabilities in third-party libraries and frameworks, from what I can see.


Can you point me to a blog post or feature of theirs that does this? I used to work at R7 until last year, and there was none of this functionality in their products at the time, and nothing on the roadmap related to it. It was all static content.


Must've been another company then, whose name I got confused.


Good thing you have tons of examples.

Right?


They know which answer is correct, they just don't want to say it.


This makes me wonder how much storage is needed for all the posts/comments on HN, since everything is "just" text.
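
A rough back-of-envelope, using the public HN Firebase API for the item count; the average size per item is a pure guess, not a measured number:

    # Back-of-envelope estimate of the raw text in all HN items.
    # maxitem is the real HN Firebase endpoint; 300 bytes per item is an assumption.
    import json, urllib.request

    max_item = json.load(urllib.request.urlopen(
        "https://hacker-news.firebaseio.com/v0/maxitem.json"))
    AVG_BYTES_PER_ITEM = 300  # assumed: typical comment/title text plus a bit of metadata
    print(f"~{max_item:,} items -> roughly "
          f"{max_item * AVG_BYTES_PER_ITEM / 1e9:.0f} GB of text")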


Why isn't this a feature of documentation frameworks? Like it could be just a simple, "Hey, I see this function in the codebase has changed since the time you wrote the documentation for it, do you want to update its description?"
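
Something like this could be wired into a pre-commit hook or CI step. A minimal sketch of the idea, where the hash-file convention and file names are made up for illustration and don't correspond to any existing framework:

    # Sketch: flag functions whose source changed since their docs were written.
    # Assumes a JSON file mapping function names to the source hash the docs
    # were written against (a made-up convention for this example).
    import ast, hashlib, json, pathlib

    def function_hashes(source: str) -> dict:
        """Map each top-level function name to a hash of its source."""
        tree = ast.parse(source)
        return {
            node.name: hashlib.sha256(
                ast.get_source_segment(source, node).encode()).hexdigest()
            for node in tree.body
            if isinstance(node, ast.FunctionDef)
        }

    def stale_docs(py_file: str, doc_hash_file: str) -> list:
        """Return names of functions that changed since the docs were updated."""
        current = function_hashes(pathlib.Path(py_file).read_text())
        recorded = json.loads(pathlib.Path(doc_hash_file).read_text())
        return [name for name, h in current.items()
                if name in recorded and recorded[name] != h]

    if __name__ == "__main__":
        for name in stale_docs("mymodule.py", "doc_hashes.json"):
            print(f"'{name}' changed since its documentation was written - update it?")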


I would definitely be in danger of getting into the habit of just saying no if I were asked every time it changed, especially early in the dev cycle. However, if it was just at pull request time, I probably wouldn't get frustrated with it.


Slightly OT, but what documentation frameworks do you recommend?


That's mainly because activity in the prefrontal cortex is very low - that's the part of the brain that handles problem solving, comprehension, reasoning, etc.


This exact thing happens on r/suggestmeabook, but I'm not sure it happens that often.

