
This is more a lack of understanding of its limitations; it'd be different if they had asked it to write a Python script to collate the data.
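
For illustration, roughly what such a collation script might look like; the file name and column names here are made up, not from any real export:

  # Hypothetical sketch: collate per-day totals from an exported CSV.
  # "export.csv", "date", and "calories" are assumed names.
  import csv
  from collections import defaultdict

  totals = defaultdict(float)
  with open("export.csv", newline="") as f:
      for row in csv.DictReader(f):
          totals[row["date"]] += float(row["calories"] or 0)

  for date, total in sorted(totals.items()):
      print(date, total)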


If the LLM is intelligent, why can’t it figure out that writing a script would be the best way to solve the problem?


Some of the more modern tools do exactly that. If you upload a CSV to Claude, it will not (or at least not anymore) try to process the whole thing. It will read the header and ask you what you want, then write the appropriate JavaScript code and run it to compute the stats or whatever else you asked for.
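
The header-first pattern is roughly this; a Python sketch of what Claude does in JavaScript, with a made-up file name:

  # Python analogue of the header-first approach described above;
  # Claude's analysis tool actually does this in JavaScript.
  import csv

  with open("upload.csv", newline="") as f:  # hypothetical file name
      header = next(csv.reader(f))  # peek at the column names only

  print("Columns found:", header)
  # At this point the assistant asks the user what they want
  # before generating code that touches the full file.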

I recently did this with a (pretty large) exported CSV of calorie/exercise data from MyFitnessPal and asked it to evaluate the data against my goals, past bloodwork, etc. (which I keep in a "Claude Project" so that it has access to all that information, plus info I had it condense and add to the project context from previous convos).

It wrote a script to extract highly relevant metrics (like the daily ratio of macronutrients, for example), then ran it and proceeded to discuss the results, correlating them with past context.
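
As a sketch, a daily macro-ratio metric could be computed like this; the column names are guesses at an export format, not the actual MyFitnessPal schema:

  # Hypothetical sketch of one such metric: daily macronutrient ratios.
  # "myfitnesspal_export.csv" and the column names are assumptions.
  import csv
  from collections import defaultdict

  MACROS = ("carbs_g", "fat_g", "protein_g")
  grams = defaultdict(lambda: dict.fromkeys(MACROS, 0.0))
  with open("myfitnesspal_export.csv", newline="") as f:
      for row in csv.DictReader(f):
          for macro in MACROS:
              grams[row["date"]][macro] += float(row[macro] or 0)

  for date, g in sorted(grams.items()):
      total = sum(g.values()) or 1.0  # avoid division by zero
      ratios = {m: round(v / total, 2) for m, v in g.items()}
      print(date, ratios)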

Use the tools properly and you will get the desired results.


ChatGPT has been able to do exactly that (using its Code Interpreter tool) for two years now. Gemini and Claude have similar features.


Often they will do exactly that. Their reasoning currently isn't the best, so you may have to coax them to take the best path. They're also making judgement calls in how they write the code, so that's worth checking too. No different from a senior instructing an intern.


Ah, it's like communism, then (to its diehards). It cannot fail, it can only be failed.


Please explain how what I am saying is wrong.


This is an odd non-sequitur.



