Hacker News | mattew's comments

Looks like Lovable is expanding into platform-specific development with their just-announced Shopify integration.


I’ve only played with Junie and Aider so far, and I like the Junie agent’s approach of reading the code to understand it, versus Aider’s approach of using a repo map to understand the codebase.


I saw voice control mentioned on aider.chat.



Thanks for sharing. Could you add a simple example to the docs that is more concrete and less abstract? It would make it easier for people to understand.



Thanks!


This makes sense. Can you clarify what you mean by journal management in this context?


I’ve found the OpenAI Assistants API isn’t really up to snuff yet in terms of predictable behavior.

That said, I’m very bullish on agents overall and expect that once they get their assistants behaving a bit more predictably we will see some cool stuff.

It’s really quite magical to see one of these think through how to solve a problem, use custom tools that you implement to help solve it, and come to a solution.
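
For anyone who hasn’t played with it, the loop looks roughly like the sketch below. This isn’t the Assistants API verbatim; it’s the plain chat-completions function-calling flow, with a made-up get_weather tool and an assumed model name, just to show the shape of it:

    from openai import OpenAI
    import json

    client = OpenAI()

    # Hypothetical tool; the JSON schema tells the model what it can call.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    def get_weather(city):
        # Stand-in for a real lookup.
        return json.dumps({"city": city, "temp_c": 8, "conditions": "cloudy"})

    messages = [{"role": "user", "content": "Do I need a jacket in Oslo today?"}]

    while True:
        resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        msg = resp.choices[0].message
        if not msg.tool_calls:      # the model decided it has enough to answer
            print(msg.content)
            break
        messages.append(msg)        # keep the tool-call request in the transcript
        for call in msg.tool_calls:
            result = get_weather(**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})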


Good stuff. How does this compare to Instructor? I’ve been using Instructor extensively:

https://jxnl.github.io/instructor/
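
For context, the Instructor pattern I mean is roughly this (from memory of its docs at the time, so details may be off): patch the OpenAI client, pass a Pydantic model as response_model, and get back a validated instance instead of raw text.

    import instructor
    from openai import OpenAI
    from pydantic import BaseModel

    class UserInfo(BaseModel):
        name: str
        age: int

    # Patching adds the response_model kwarg to the usual create() call.
    client = instructor.patch(OpenAI())

    user = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserInfo,  # Instructor validates the output into this model
        messages=[{"role": "user", "content": "Jason is 25 years old."}],
    )
    print(user.name, user.age)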


Answered in a different thread. TL;DR: not that different for now. We're likely to do some server-side optimizations, especially given our GPU inference history.


I like your UX a lot more. Modeling the LLM calls as actual Python functions lets them mesh well with existing code organization and dev tooling, and using a decorator to "implement" a function just feels like a special kind of magic. I'd need more ability to use my own "prompt templates" to use this as a lib, but I'm definitely going to try this general pattern.
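
To spell out the pattern I mean (a hypothetical sketch, not this library's actual API): the decorated function's body never runs; its signature and docstring become the prompt, and the model's reply becomes the return value.

    import inspect

    def complete(prompt):
        """Stand-in for a real LLM call (swap in your client of choice)."""
        raise NotImplementedError

    def llm_function(fn):
        """Decorator: turn a signature plus docstring into a prompt-backed function."""
        sig = inspect.signature(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            # The docstring is the instruction; the bound arguments are the inputs.
            prompt = f"{fn.__doc__}\n\nInputs: {dict(bound.arguments)}\nAnswer:"
            return complete(prompt)
        return wrapper

    @llm_function
    def summarize(text: str) -> str:
        """Summarize the text in one sentence."""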


This right here is actually the coolest part about developing with LLMs: you just changed the functionality with a sentence rather than a config file or code. It’s great to be able to break functionality out into what can be easily handled in English (or your human language of choice) versus what should be done in code.


I think it's the worst part, because it's completely inscrutable. Ask for the same thing in different wording and get a different response. Ask for a similar thing and get stonewalled. A config file has structure which you can (in theory) learn perfectly from documentation or even from your IDE while writing the file. None of that is true of asking in plain English.

I feel in some ways current LLMs are making technology more arcane. Which is why people who have the time are having a blast figuring out all the secret incantations that get the LLM to give you what you want.


> I feel in some ways current LLMs are making technology more arcane. Which is why people who have the time are having a blast figuring out all the secret incantations

Yeah, there's an important gap between engaging visions of casting cool magic versus (boring) practical streamlining and abstracting-away.

To illustrate the difference, I'm going to recycle a rant I've often given about VR aficionados:

Today, I don't virtually fold a virtual paper to put it in a virtual envelope to virtually lick a virtual stamp with my virtual tongue before virtually walking down the virtual block to the virtual post office... I simply click "Send" in my email client!

Similarly, it's engaging to think of a future of AI-Pokemon trainers--"PikaGPT, I choose you! Assume we can win because of friendship!"--but I don't think things will actually succeed in that direction because most of the cool stuff is also cruft.


Yeah until the notoriously unreliable ChatGPT forgets that it's supposed to follow that and starts giving you some CYOA text.


Until the LLM starts hallucinating its own instructions and "fills in the blanks".


I also think of pudding guy every time something like this comes up.


There was someone who did something similar with Tesco (UK supermarket) Clubcard points, a reward system which, I believe, essentially returns 1% of your supermarket shopping spend in exchange for correlating your shopping data with your demographics!

In the early days of the system (1997), there was a 'bonus' points offer on bananas, sufficient that the cost of the bananas was offset by the rewards. Although, having searched out the article, the benefit was only 8% and the total reward was only about 25 GBP (about 40 USD at 1997 exchange rates).

https://www.independent.co.uk/news/banana-economics-buy-942l...


The interesting thing about GPS back then was that the location data you got back was randomly offset a little every time you took a reading. I’m pretty sure this was deliberate, so it wasn’t useful for military purposes.

I think it was called differential post-correction: if you had a base station at a known location, you could measure the offset the base station was reporting at each moment and subtract that same offset from your own points to recover the true locations after the fact.
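
In pseudocode it was basically the sketch below (made-up coordinates; the real tooling handled all the messy parts):

    # The base station's surveyed coordinates are known exactly.
    BASE_TRUE = (38.8895, -77.0353)   # made-up lat/lon

    def differential_correct(rover_fixes, base_fixes):
        """Subtract the base station's per-epoch error from the rover's fixes.

        rover_fixes and base_fixes are lists of (lat, lon) readings taken at
        the same epochs, so the selective-availability error mostly cancels.
        """
        corrected = []
        for (r_lat, r_lon), (b_lat, b_lon) in zip(rover_fixes, base_fixes):
            err_lat = b_lat - BASE_TRUE[0]
            err_lon = b_lon - BASE_TRUE[1]
            corrected.append((r_lat - err_lat, r_lon - err_lon))
        return corrected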

Source: GIS major in the late '90s, when this stuff was a lot more magical



That’s right. It was Selective Availability, and you used differential post-correction to clean up the data and get accurate locations for what you were capturing. Thanks for the correction!

