I'll admit that I haven't looked at it in a while, but as originally released, it was a textbook example of how to complicate a fundamentally simple and well-understood task (text templates, basically) with lots of useless abstractions that made it all sound more "enterprise". People would write complicated langchains, but then when you looked under the hood, all it was doing was some string concatenation, and the result was actually less readable than a simple template with substitutions in it.
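To make the point concrete, here's what "a simple template with substitutions" looks like in practice; a minimal sketch, where the prompt wording and variable names are invented for illustration:

```python
# A plain prompt template; str.format does the same substitution work that a
# framework's PromptTemplate class wraps in extra layers. The prompt text and
# variable names here are made up for illustration.
PROMPT = "Summarize the following text in {n_sentences} sentences:\n\n{text}"

def build_prompt(text: str, n_sentences: int = 3) -> str:
    # Plug in the variables -- this is the string concatenation that often
    # hides "under the hood" of a chain.
    return PROMPT.format(text=text, n_sentences=n_sentences)

print(build_prompt("Some document body.", 2))
```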
yes, in an industry with rapidly changing features and 7,000 of these products that splinter and lose their user base so quickly, you should write your own orchestration for this stuff. It's not hard, and it gives you a very easy path to implementing new features or optimizations.
For what LangChain does, yes, handrolled code. There won't be much of it because it doesn't actually do all that much.
As for "import openai", that's actually somewhat orthogonal, but if what you want is a common API for different providers then there are many options around that do just that. But then again at that point you probably also want something like OpenRouter, which has its own generic API.
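To illustrate why such a "common API" layer is thin: most providers with OpenAI-compatible endpoints (OpenRouter included) accept the same chat-completions payload, so switching providers is mostly a base URL, an auth header, and a model string. A rough sketch; the model name is illustrative:

```python
import json

# Build the OpenAI-style chat-completions payload that OpenAI-compatible
# endpoints (including OpenRouter's) accept. Switching providers changes the
# base URL, the Authorization header, and the model identifier -- not this
# shape. The model name below is only an example.
def chat_payload(model: str, user_msg: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
    }

body = json.dumps(chat_payload("anthropic/claude-3.5-sonnet", "hello"))
print(body)
```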
I’ve been using the following pattern since gpt3, the only other thing I have changed was added another parameter for schema for structured output. People really love to overcomplicate things.
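The pattern itself isn't shown, so here's my own guess at its general shape, not the commenter's actual code; `call_model` is a stub standing in for whatever SDK or HTTP call you'd make:

```python
import json
from typing import Any, Optional

def call_model(prompt: str) -> str:
    # Stub so the sketch runs offline; in reality this would be an API call.
    return json.dumps({"answer": 42})

def complete(prompt: str, schema: Optional[dict] = None) -> Any:
    # The one later addition mentioned above: an optional schema parameter for
    # structured output. With a schema, parse the reply as JSON; otherwise
    # return the raw text.
    raw = call_model(prompt)
    return json.loads(raw) if schema else raw

print(complete("What is 6*7?", schema={"type": "object"}))  # {'answer': 42}
```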
I went through evaluating a bunch of frameworks. There was Langchain, AG2, Firebase Gen AI / Vertex / whatever Google eventually lands on, Crew AI, Microsoft's stuff etc.
It was so early in the game that none of those frameworks were ready. What they did under the hood, when I looked, wasn't a lot. I just wanted some sort of abstraction over the model APIs and the ability to use the native API if the abstraction wasn't good enough. I ended up using Spring AI. It's working well for me at the moment. I dipped into the native APIs when I needed a new feature (web search).
Out of all the others, Crew AI was my second choice. All of those frameworks seem parasitic: once you're on the platform, you're locked in. Some were open source, but if you wanted to do anything useful you needed an API key, and you could see that features were going to be locked behind some sort of payment.
Honestly, I think you could get a lot done with one of the CLIs like Claude Code running in a VM.
I am not sure what the stereotype is, but I tried using langchain and realised that most of the functionality actually takes more code to use than simply writing my own direct LLM API calls.
Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.
JSON Structured Output from OpenAI was released a year after the first LangChain release.
I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time because they do have good prompts baked into the framework.
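For context, the structured-output request being referred to looks roughly like this; the field names follow OpenAI's public `response_format` parameter for Chat Completions, while the schema and model name are made up for illustration:

```python
# Sketch of an OpenAI-style structured-output request: you attach a JSON
# Schema to the request and the API constrains the reply to conform to it.
# The "response_format" layout follows OpenAI's documented parameter; the
# schema fields and model name are invented examples.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "tags"],
    "additionalProperties": False,
}

request_body = {
    "model": "gpt-4o",  # example model name
    "messages": [{"role": "user", "content": "Extract the title and tags."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "extraction", "schema": schema, "strict": True},
    },
}

print(request_body["response_format"]["type"])  # json_schema
```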
IME you could get reliable JSON or other easily-parsable output formats out of OpenAI's models going back at least to GPT-3.5 or 4 in early 2023. I think that was a bit after LangChain's release, but I don't recall hitting problems that I needed to add a layer around in order to do "agent"-y things ("dispatch this to this specialized other prompt-plus-chatgpt-api-call, get back structured data, dispatch it to a different specialized prompt-plus-chatgpt-api-call") before it was a buzzword.
It's still not true for any complicated extraction. I don't think I've ever shipped a successful solution to anything serious that relied on freeform schema say-and-pray with retries.
> so it's not a panacea you can count on in production.
OpenAI and Gemini models can handle ridiculously complicated and convoluted schemas; if I needed complicated JSON output, I wouldn't use anything that didn't guarantee it.
I have pushed Gemini 2.5 Pro further than I thought possible when it comes to ridiculously overcomplicated (by necessity) structured output.
When my company organized an LLM hackathon last year, they pushed for LangChain, but instead of building on top of it I ended up creating a more lightweight abstraction for our use cases.
No dig at you, but I take the average langchain user as someone who either a) is using it because their C-suite heard about it at some AI conference and had it foisted upon them, or b) doesn't care about software quality in general.
I've talked to many people who regret building on top of it but they're in too deep.
I think you may come to the same conclusions over time.