Interesting - I predict the opposite effect. Ultimately the use of AI for programming is still a human-computer interface, and both sides need a shared coordinate system to communicate; that coordinate system is the framework. I mean, until we get super AGI, in which case we can tell it "make me the perfect website" and it does. However, I don't worry about that case, because it's equally likely it will tell us "go to work at the chip factory or die." The in-between case is telling an AI "make a controller" or "make a React component", and ChatGPT is already very good at tasks like that.
One thing that I’m excited about is the prospect of not having to design a library as a black box with an API on top. That has been our best mechanism for reusing code so far, but it’s an enormous effort to go from a working piece of code to a well-designed, well-documented library, and I think we have all experienced the frustration of discovering that a library you’re using doesn’t support a specific use case that is critical to you.
LLMs can potentially allow us to bring the underlying implementation directly into our source code, and then talk to the LLM to adapt it to the specific needs of our project. Instead of a library, you would install what is essentially a well-written prompt that tells the LLM how to guide you through setting up a tailor-made implementation, with tests and docs.
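To make that concrete, such a "prompt package" might be little more than a versioned file of instructions. A minimal sketch, where every name (PROMPT_VERSION, the retry scenario) is made up for illustration:

```python
# Hypothetical sketch of a "prompt package": instead of shipping compiled
# library code, you ship a versioned prompt that an LLM-aware tool expands
# into tailor-made source, tests and docs inside your own repo.

PROMPT_VERSION = "1.2.0"  # versioned, so projects can opt in to improvements

PROMPT = """\
You are helping the user add retry-with-backoff behaviour to their project.
1. Ask which operations need retrying and what failure modes they expect.
2. Generate a small retry module matching the project's language and style.
3. Generate unit tests covering timeouts, max attempts and jitter.
4. Add a short doc comment explaining the chosen backoff strategy.
"""
```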
The benefits should be obvious: you’re not artificially restricted by the mental model encoded in the API; you’re not taking on a dependency whose author may suddenly release breaking changes or deprecate functionality you depend on; and you don’t risk “growing out of” a library that is used all over your codebase, because you can simply ask the LLM to patch the code with any changes you need in the future. The prompt itself could still be versioned, so you can opt in to future improvements in security, performance or compatibility.
TLDR: let’s start writing tutorials for bots, rather than libraries.
> you’re not artificially restricted by the mental model encoded in the API
Most of the time I want a restricted mental model, because I have so many APIs to deal with that if they are not restricted, my own mental model breaks down. Suppose I am using a sockets library. I want to use it like a black box. I don't want its code arbitrarily mixed in with my code, and I want to be able to debug my code separately, because I assume that 99% of the time the bug is in my code and not in the sockets lib.
Even when most of the code is my own, I still split it into modules and try to make them as black-box as possible in order to manage complexity.
I tend to write facades for many of the libraries/APIs I use, and I use the facades, not the actual APIs, throughout the project. Aside from being simpler to replace if I ever need to switch dependencies, the facades also present a simpler mental model suited to the project (and to me).
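For example, a facade over Python's standard `socket` module might look something like this rough sketch (the `Transport` name and the line-based protocol are invented for illustration):

```python
# Rough facade sketch: the rest of the project imports Transport and never
# touches `socket` directly. All names here are hypothetical.
import socket


class Transport:
    """Project-facing mental model: connect, send a line, receive a line."""

    def __init__(self, host: str, port: int, timeout: float = 5.0) -> None:
        self._sock = socket.create_connection((host, port), timeout=timeout)

    def send_line(self, text: str) -> None:
        # The wrapped API's details (encoding, framing) live here, in one place.
        self._sock.sendall(text.encode("utf-8") + b"\n")

    def recv_line(self) -> str:
        # Read byte-by-byte until newline or EOF; fine for a sketch,
        # though a real implementation would buffer.
        chunks = []
        while True:
            b = self._sock.recv(1)
            if not b or b == b"\n":
                break
            chunks.append(b)
        return b"".join(chunks).decode("utf-8")

    def close(self) -> None:
        self._sock.close()
```

If I ever swap the dependency (say, for a TLS connection), only this one file changes; every call site keeps its restricted mental model.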