I get the idea that things should be abstracted away and that's the point of a library but this feels like a little much.
First of all, every library/framework I've found is moving so fast that all the tutorials and printed material (O'Reilly books, etc.) are already out of date. Many of the changes are out of necessity, since it's a rapidly developing space, but sometimes it feels like someone got high and decided to add three more layers of abstraction. And although AI coding assistants would help a noob like me with many tasks, the codebases and documentation are too loose and fast-moving for me to get the benefits I'd expect in a more established codebase.
LangChain seems to be where a lot of the action is with regard to modularity, and to using different components in each part of the pipeline. That's important for me, because I need either local or HIPAA-compliant tools (Azure OpenAI works; Anthropic won't respond to my requests for a BAA; and for local models I need a bigger GPU).
But using LangChain is a pretty horrible experience because, at least for my uses and as a noob, it's buried in too many abstractions to allow quick iteration. The GUI-based tools like Flowise and Langflow are too limited in available components, and they mostly hide the underlying problems, so errors are tough to track down.
I'm thrilled that there has been so much work on adding JSON output and agent support at the LLM level, since hopefully it can bring some of these astronauts back to earth (or at least into a low orbit).
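To illustrate why native JSON output matters: without it, app code ends up defensively parsing structured data out of chatty replies. A minimal plain-Python sketch (the reply string is a made-up stand-in for a model response, and the field names are hypothetical):

```python
import json

def parse_llm_json(reply: str) -> dict:
    """Parse a JSON object from an LLM reply, tolerating stray prose
    around the object (a common failure mode without strict JSON mode)."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # Fall back to the first {...} span in the reply.
        start, end = reply.find("{"), reply.rfind("}")
        if start == -1 or end <= start:
            raise ValueError("no JSON object found in reply")
        return json.loads(reply[start : end + 1])

# Stand-in for a model reply that wraps JSON in prose:
reply = 'Sure! Here is the record:\n{"task": "summarize", "priority": 2}'
record = parse_llm_json(reply)
print(record["priority"])  # 2
```

With JSON mode enforced at the LLM level, the fallback branch becomes unnecessary, which is exactly the kind of de-orbiting I'm hoping for.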
Brandon Hancock’s (3 hour!!) YouTube video and accompanying GitHub repo: https://github.com/bhancockio/langchain-crash-course https://youtu.be/yF9kGESAi3M
I’m not in a position to say, but perhaps many people will ultimately just use the Python libraries and APIs of their components directly. But if you’re experimenting with different components until you settle on something, a framework is useful.
Did I miss out on some major developments there? Because I don't see why it's being talked about everywhere when it's barely anything.
https://venturebeat.com/ai/langchain-lands-25m-round-launche...
Even if you don't want to use agents, it's still useful as a convenient library for calling an OpenAI-compatible endpoint.
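For context, the "OpenAI-compatible endpoint" convention is just a fixed wire format: a POST to `/v1/chat/completions` with a bearer token and a `messages` array, which local servers like llama.cpp and vLLM also expose. A stdlib-only sketch of building such a request (the base URL, key, and model name are placeholders, and this is not Langroid's own API, just the shape it wraps):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list) -> urllib.request.Request:
    """Build a request for any OpenAI-compatible chat-completions endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000",   # placeholder: a local OpenAI-compatible server
    "sk-placeholder",          # placeholder key
    "local-model",             # placeholder model name
    [{"role": "user", "content": "Say hi"}],
)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

A library's value here is mostly in hiding this boilerplate plus retries, streaming, and conversation state, while still letting you point at any endpoint that speaks this format.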
https://langroid.github.io/langroid/quick-start/llm-interact...
We started building Langroid in early 2023 after finding existing frameworks lacking in terms of good dev experience, extensibility, and clarity of code. We prioritize code transparency, flexibility, stability, and good test coverage. We designed it to be agent-oriented from the start, with an elegant agent-orchestration mechanism loosely inspired by paradigms such as Blackboard Architectures, the Actor Model, Production Systems, and process calculi. But as you said, Langroid is useful even if you just want a single LLM conversation state.