LangChain LLM-System Development Framework
A LangChain LLM-System Development Framework is an open-source, component-based LLM development framework.
- Context:
- It can (typically) be composed of a Python library with interfaces such as:
- Models: a generic interface to a variety of foundation models, such as OpenAI GPT-3, Google AI's LaMDA, and Jurassic-1 Jumbo. This allows you to choose the right model for your application without having to worry about the underlying implementation.
- langchain.prompts: a framework to help you manage your prompts. This includes a library of pre-built prompts, as well as tools to create your own prompts.
- langchain.memory: a central interface to long-term memory. This allows you to persist state between calls of a chain/agent, which is essential for many applications.
- langchain.chains: A chain is a sequence of calls to different components, such as models, prompts, and memory. LangChain provides a standard interface for chains, which makes it easy to build and deploy complex applications.
- langchain.agents: An agent is a program that makes decisions about which actions to take. LangChain provides a standard interface for agents, which makes it easy to build applications that can learn and adapt over time.
- langgraph: ...
- ...
- langchain.adapters: Contains adapter interfaces for integrations like vector stores, databases, block storage etc.
- ...
- It can allow chaining together components like prompts, LLMs, data sources, and actions to create advanced workflows (see the sketch after this list).
- It can enable building data-aware applications by integrating data sources.
- It can enable building agentic applications that interact with environments.
- It can provide off-the-shelf chains for tasks like summarization, QA, code generation etc.
- It can support both Python and JavaScript for development.
- ...
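The sketch below illustrates the kind of composition described above: a prompt template and a model wrapped into one reusable chain. It is a minimal sketch, assuming an OPENAI_API_KEY and the classic 0.0.x-era imports (newer releases expose the same idea through the langchain-core "prompt | llm" style); the product-naming prompt is purely illustrative.
```python
# Minimal sketch, assuming an OPENAI_API_KEY and the classic 0.0.x-era API.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain

# A prompt template with one input variable (illustrative example).
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a short name for a company that makes {product}.",
)

llm = OpenAI(temperature=0.7)             # generic model interface
chain = LLMChain(llm=llm, prompt=prompt)  # prompt -> model as one reusable step

print(chain.run(product="open-source LLM tooling"))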
- Example(s):
- langchain-core v0.2.34 [1] (~2024-08-20).
- LangChain v0.0.312 [2] (~2023-10-10).
- LangChain v0.0.286 [3] (~2023-09-11).
- LangChain v0.0.248 [4] (~2023-07-31).
- LangChain v0.0.64 [5] (~2023-01-16).
https://github.com/langchain-ai/langchain/releases
- ...
- Counter-Example(s):
- langchain JavaScript Library [6]
- LlamaIndex: for building LLM applications that need to access information from a variety of sources.
- LLMFlows: for building simple, explicit, and transparent LLM applications such as chatbots, question-answering systems, and agents.
- LLMApp: for building real-time LLM-enabled data pipelines with a few lines of code.
- OpenAI Swarm: for ...
- See: LLM-based System, Vector Database.
References
2024
- https://github.com/langchain-ai/langchain
- NOTES:
- Framework Architecture Layer
- LangChain Core: This component serves as the foundation for developing and managing workflows using large language models. It provides the necessary core abstractions and tools to create chains, handle data transformations, and integrate with external components and services.
- LangGraph Orchestration: This component extends LangChain Core by enabling the development of complex, multi-step workflows that involve stateful and multi-agent systems. LangGraph is especially useful for scenarios that require real-time interaction and dynamic decision-making, with support for features like streaming to improve application responsiveness.
- Component Integration Layer: Third-Party Integrations: This component represents the ecosystem's extensibility, allowing for seamless integrations with various third-party services. These integrations expand the functionality of LangChain and LangGraph, enabling them to work with LLM providers, databases, and other critical services for more sophisticated application development.
- Deployment Infrastructure Layer: LangGraph Cloud Deployment: This service is designed for the deployment and scaling of complex LLM applications. It provides a managed environment with features such as auto-scaling, persistent storage, and fault tolerance, making it easier to transition applications from development to production with tools for easy deployment and real-time monitoring.
- Development Tools Layer: LangSmith DevOps Platform: LangSmith functions as the DevOps backbone within the LangChain ecosystem, offering a range of tools that streamline the development, testing, and monitoring of LLM-based applications. It integrates seamlessly with both LangChain and LangGraph, providing developers with a unified platform from prototype to production.
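As a rough illustration of the orchestration layer, the following hedged sketch builds a one-node LangGraph state machine; the QAState fields and the stubbed answer_node are illustrative assumptions, not part of the framework itself.
```python
# Hedged sketch of a minimal LangGraph state machine.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class QAState(TypedDict):
    question: str
    answer: str

def answer_node(state: QAState) -> dict:
    # A real node would call an LLM; here the step is stubbed for illustration.
    return {"answer": f"Echo: {state['question']}"}

graph = StateGraph(QAState)
graph.add_node("answer", answer_node)
graph.set_entry_point("answer")
graph.add_edge("answer", END)   # terminate after the single step

app = graph.compile()
print(app.invoke({"question": "What does LangGraph orchestrate?"}))
```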
2023
- https://python.langchain.com/docs/modules/
- Modules: LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:
- Model I/O: Interface with language models
- Retrieval: Interface with application-specific data
- Chains: Construct sequences of calls
- Agents: Let chains choose which tools to use given high-level directives
- Memory: Persist application state between runs of a chain
- Callbacks: Log and stream intermediate steps of any chain
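To make the Agents module above concrete, here is a hedged sketch using the classic 0.0.x API, in which the LLM decides when to call a calculator tool. It assumes an OPENAI_API_KEY and that the optional dependencies of the "llm-math" tool are installed.
```python
# Sketch of the Agents module (classic API): the model picks which tool to use.
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)   # calculator tool backed by the LLM

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 7 raised to the 0.5 power?")
```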
2023
- Google Bard (2023-09-11)
- LangChain is a framework for orchestrating the interaction between an LLM and other components of an LLM-based application. It provides a way to define a sequence of steps that the LLM will take to complete a task, as well as a way to manage the data that is passed between the LLM and the other components.
- Data-aware applications are able to connect to other sources of data, such as databases or APIs. This allows them to use the language model to process and understand information from a wider range of sources.
- Agentic applications are able to interact with their environment. This means that they can take actions in the world, such as controlling robots or devices.
- LangChain provides a number of features that make it easier to develop data-aware and agentic applications:
- Abstractions: LangChain provides abstractions for the components that are needed to work with LLMs, such as data sources, actions, and memories. This makes it easier to write code that is modular and reusable.
- Implementations: LangChain provides implementations for a variety of these abstractions, so you don't have to write them yourself.
- Off-the-shelf chains: LangChain provides a number of off-the-shelf chains that you can use to accomplish specific tasks, such as summarization, question answering, and code generation.
- LangChain is written in Python and JavaScript. It is open source and available on GitHub.
- Here are some examples of applications that can be built with LangChain:
- A chatbot that can answer questions about a product or service.
- A virtual assistant that can control smart home devices.
- A summarizer that can generate a summary of a long piece of text.
- A question answering system that can answer questions about a specific topic.
- A code generator that can generate code from natural language descriptions.
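The summarizer example above maps onto an off-the-shelf chain. The following hedged sketch uses the classic load_summarize_chain helper over a single in-memory document; the document text is illustrative and an OPENAI_API_KEY is assumed.
```python
# Hedged sketch of an off-the-shelf summarization chain (classic 0.0.x API).
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document

llm = OpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce")  # built-in strategy

docs = [Document(page_content=(
    "LangChain chains components such as prompts, models, and memory "
    "into reusable workflows for LLM applications."
))]
print(chain.run(docs))
```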
2023
- chat
- LangChain is a framework for building applications around large language models (LLMs) such as GPT-3, BLOOM, etc. It allows you to chain together different components such as prompt templates, LLMs, agents, and memory to create more advanced use cases. For example, you can use LangChain to create chatbots, question-answering systems, summarizers, and more.
- Some of the features of LangChain include:
- A standard interface for prompt templates, which are templates for different types of prompts that can be input to LLMs
- A selection of LLMs to choose from, either from Hugging Face Hub or OpenAI
- A standard interface for agents, which use LLMs to decide what actions should be taken
- A standard interface for memory, which is the concept of persisting state between calls of a chain or agent
- A collection of toolkits that enable agents to interact with different data sources or APIs
- LangChain is an open-source project that was created by Harrison Chase in late 2022 and has gained popularity since then. You can learn more about LangChain from its official blog, documentation, or GitHub repository.
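The memory interface mentioned in the feature list above can be shown with a small hedged sketch: a ConversationChain with ConversationBufferMemory keeps earlier turns so the second call can refer back to the first (classic 0.0.x API, OPENAI_API_KEY assumed; the conversation content is illustrative).
```python
# Hedged sketch of the memory interface (classic API): state persists between calls.
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

chat = ConversationChain(llm=OpenAI(temperature=0),
                         memory=ConversationBufferMemory())

chat.predict(input="My name is Ada.")
print(chat.predict(input="What is my name?"))  # the buffer supplies the earlier turn
```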
2023
- https://python.langchain.com/en/latest/
- QUOTE: LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:
- Be data-aware: connect a language model to other sources of data
- Be agentic: allow a language model to interact with its environment
- The LangChain framework is designed with the above principles in mind.
- This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.
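To illustrate the "data-aware" principle in the quote above, the hedged sketch below indexes two illustrative sentences in an in-memory FAISS vector store and answers a question over them; it assumes an OPENAI_API_KEY and the faiss-cpu package, and uses the classic 0.0.x API.
```python
# Hedged sketch of a data-aware retrieval-QA setup (classic API).
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Illustrative "data source": two short texts indexed in a vector store.
texts = [
    "LangChain was created by Harrison Chase in late 2022.",
    "LangGraph adds stateful, multi-agent orchestration on top of LangChain.",
]
store = FAISS.from_texts(texts, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    retriever=store.as_retriever(),   # retrieved chunks ground the answer
)
print(qa.run("Who created LangChain?"))
```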