# LangChain Adapter

The LangChain adapter (`@artemiskit/adapter-langchain`) enables testing of LangChain.js chains, agents, and runnables with ArtemisKit.
## Installation

```sh
bun add @artemiskit/adapter-langchain
# or
npm install @artemiskit/adapter-langchain
```

## Quick Start
1. Create your LangChain chain or agent:

   ```ts
   import { ChatOpenAI } from '@langchain/openai';
   import { StringOutputParser } from '@langchain/core/output_parsers';
   import { ChatPromptTemplate } from '@langchain/core/prompts';

   const model = new ChatOpenAI({ model: 'gpt-4' });
   const prompt = ChatPromptTemplate.fromTemplate('Answer concisely: {input}');
   const chain = prompt.pipe(model).pipe(new StringOutputParser());
   ```
2. Wrap it with the ArtemisKit adapter:

   ```ts
   import { createLangChainAdapter } from '@artemiskit/adapter-langchain';

   const adapter = createLangChainAdapter(chain, {
     name: 'qa-chain',
     runnableType: 'chain',
   });
   ```
3. Use it in tests:

   ```ts
   const result = await adapter.generate({ prompt: 'What is 2+2?' });
   console.log(result.text); // "4"
   ```
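To exercise this flow without live model calls, you can wrap a stub in place of the real chain. The adapter accepts any object with an async `invoke()` method; the stub below is a hypothetical example for offline tests, not part of the adapter's API:

```typescript
// Hypothetical stub runnable for offline tests: any object exposing an
// async invoke() can stand in for a real chain when wrapped with
// createLangChainAdapter, so tests run without OpenAI credentials.
const stubChain = {
  async invoke(input: { input: string }): Promise<string> {
    // Canned answers keyed by prompt text
    const canned: Record<string, string> = {
      'What is 2+2?': '4',
    };
    return canned[input.input] ?? 'I do not know.';
  },
};

// Wrapping works the same as with a real chain:
//   const adapter = createLangChainAdapter(stubChain, { name: 'stub-qa' });
```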
## Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `name` | `string` | - | Identifier for the chain/agent |
| `runnableType` | `'chain' \| 'agent' \| 'llm' \| 'runnable'` | auto-detect | Type of LangChain runnable |
| `captureIntermediateSteps` | `boolean` | `true` | Capture agent intermediate steps |
| `inputKey` | `string` | `'input'` | Custom input key for the runnable |
| `outputKey` | `string` | `'output'` | Custom output key for the runnable |
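As a reading aid for the table, the defaults can be modeled like this. This is a sketch of the documented behavior, not the adapter's actual source; `resolveOptions` is a hypothetical helper:

```typescript
type RunnableType = 'chain' | 'agent' | 'llm' | 'runnable';

interface LangChainAdapterOptions {
  name?: string;
  runnableType?: RunnableType;
  captureIntermediateSteps?: boolean;
  inputKey?: string;
  outputKey?: string;
}

// Hypothetical helper mirroring the defaults documented in the table above.
function resolveOptions(opts: LangChainAdapterOptions) {
  return {
    name: opts.name,
    // Left undefined here: the adapter auto-detects the runnable type
    runnableType: opts.runnableType,
    captureIntermediateSteps: opts.captureIntermediateSteps ?? true,
    inputKey: opts.inputKey ?? 'input',
    outputKey: opts.outputKey ?? 'output',
  };
}
```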
## Examples

### Testing a Simple Chain
```ts
import { createLangChainAdapter } from '@artemiskit/adapter-langchain';
import { ChatOpenAI } from '@langchain/openai';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';

// Create an LCEL chain
const model = new ChatOpenAI({ model: 'gpt-4' });
const prompt = ChatPromptTemplate.fromTemplate('Answer concisely: {input}');
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Wrap with the adapter
const adapter = createLangChainAdapter(chain, {
  name: 'qa-chain',
  runnableType: 'chain',
});

// Test
const result = await adapter.generate({ prompt: 'What is the capital of France?' });
console.log(result.text);      // "Paris"
console.log(result.latencyMs); // Execution time
```

### Testing an Agent with Tools
```ts
import { createLangChainAdapter } from '@artemiskit/adapter-langchain';
import { AgentExecutor, createReactAgent } from 'langchain/agents';
import { ChatOpenAI } from '@langchain/openai';
import { Calculator } from '@langchain/community/tools/calculator';
import { pull } from 'langchain/hub';

// Create the agent
const model = new ChatOpenAI({ model: 'gpt-4' });
const tools = [new Calculator()];
const prompt = await pull('hwchase17/react');
const agent = await createReactAgent({ llm: model, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });

// Wrap with the adapter
const adapter = createLangChainAdapter(agentExecutor, {
  name: 'calculator-agent',
  runnableType: 'agent',
  captureIntermediateSteps: true,
});

// Test
const result = await adapter.generate({ prompt: 'Calculate 25 * 4' });
console.log(result.text); // "100"

// Access execution metadata
const metadata = result.raw.metadata;
console.log(metadata.toolsUsed);      // ['calculator']
console.log(metadata.totalToolCalls); // 1
```

### Testing a RAG Chain
```ts
import { createLangChainAdapter } from '@artemiskit/adapter-langchain';
import { ChatOpenAI } from '@langchain/openai';
import { RetrievalQAChain } from 'langchain/chains';

// Assume the vector store is already set up
const retriever = vectorstore.asRetriever();
const chain = RetrievalQAChain.fromLLM(
  new ChatOpenAI({ model: 'gpt-4' }),
  retriever
);

// Wrap with custom input/output keys
const adapter = createLangChainAdapter(chain, {
  name: 'rag-qa',
  inputKey: 'query',
  outputKey: 'result',
});

const result = await adapter.generate({
  prompt: 'What does the document say about X?',
});
```

### Testing with ArtemisKit Scenarios
Create a YAML scenario:

```yaml
name: langchain-qa-evaluation
description: Test QA chain quality

cases:
  - id: factual-qa
    prompt: "What is the capital of France?"
    expected:
      type: contains
      values: ["Paris"]

  - id: math-calculation
    prompt: "What is 25 * 4?"
    expected:
      type: contains
      values: ["100"]

  - id: reasoning
    prompt: "If it's raining, should I bring an umbrella?"
    expected:
      type: llm_grader
      criteria: "Response recommends bringing an umbrella"
      minScore: 0.8
```

Run it with ArtemisKit:
```ts
import { ArtemisKit } from '@artemiskit/sdk';
import { createLangChainAdapter } from '@artemiskit/adapter-langchain';

const adapter = createLangChainAdapter(myChain);

const kit = new ArtemisKit({
  adapter,
  project: 'langchain-testing',
});

const results = await kit.run({
  scenario: './langchain-test.yaml',
});

console.log(`Pass rate: ${results.manifest.metrics.pass_rate * 100}%`);
```

## Supported Runnable Types
The adapter supports any LangChain runnable that implements `invoke()`:
| Type | Examples |
|---|---|
| Chains | LCEL chains, RetrievalQA, ConversationalRetrievalQA |
| Agents | ReAct agents, OpenAI Functions agents, Tool-calling agents |
| LLMs | ChatOpenAI, ChatAnthropic, ChatGoogleGenerativeAI |
| Runnables | Any custom runnable with an `invoke()` method |
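The last row means you can wrap plain objects. A minimal custom runnable might look like this (illustrative only; the uppercasing stands in for real model work, and the input/output keys match the adapter defaults):

```typescript
// Minimal custom runnable satisfying the invoke() contract described above.
const echoRunnable = {
  async invoke(input: { input: string }): Promise<{ output: string }> {
    // Uppercase echo stands in for real model work
    return { output: input.input.toUpperCase() };
  },
};

// Hypothetical wrapping, same as for built-in chains:
//   const adapter = createLangChainAdapter(echoRunnable, { runnableType: 'runnable' });
```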
## Streaming Support

If your runnable supports streaming via `stream()`, the adapter will use it:

```ts
const adapter = createLangChainAdapter(myChain);

for await (const chunk of adapter.stream({ prompt: 'Tell me a story' }, console.log)) {
  // Process streaming chunks
}
```

## Execution Metadata
The adapter captures rich metadata about chain/agent execution:

```ts
interface LangChainExecutionMetadata {
  name?: string;                // Chain/agent name
  runnableType: string;         // Type of runnable
  totalToolCalls: number;       // Number of tool invocations
  toolsUsed: string[];          // Unique tools used
  intermediateSteps?: object[]; // Agent intermediate steps
  executionTimeMs?: number;     // Total execution time
}
```

## Best Practices
Section titled “Best Practices”- Set appropriate input/output keys — Match your chain’s expected keys
- Enable intermediate steps — For debugging agent behavior
- Use meaningful names — For easier identification in reports
- Test streaming separately — If your chain supports streaming
- Monitor token usage — Chain calls can add up quickly
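For the last point, a thin wrapper can tally calls and latency across a test run. This is a sketch against the `generate()` result shape shown in the examples on this page (`text`, `latencyMs`); `withUsageTracking` is a hypothetical helper, not part of ArtemisKit:

```typescript
interface GenerateResult {
  text: string;
  latencyMs: number;
}

interface AdapterLike {
  generate(req: { prompt: string }): Promise<GenerateResult>;
}

// Hypothetical helper: wraps any adapter-like object and tallies usage,
// so a test suite can report total calls and latency at the end.
function withUsageTracking(adapter: AdapterLike) {
  const usage = { calls: 0, totalLatencyMs: 0 };
  return {
    usage,
    async generate(req: { prompt: string }): Promise<GenerateResult> {
      const result = await adapter.generate(req);
      usage.calls += 1;
      usage.totalLatencyMs += result.latencyMs;
      return result;
    },
  };
}
```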
## Troubleshooting

### "Input key not found"

Your chain expects a different input key. Set the `inputKey` option:

```ts
const adapter = createLangChainAdapter(chain, {
  inputKey: 'question', // Match your chain's input key
});
```

### "Output is undefined"
Your chain returns output under a different key. Set the `outputKey` option:

```ts
const adapter = createLangChainAdapter(chain, {
  outputKey: 'answer', // Match your chain's output key
});
```

### Agent intermediate steps not captured
Ensure `captureIntermediateSteps` is enabled:

```ts
const adapter = createLangChainAdapter(agent, {
  captureIntermediateSteps: true,
});
```

## See Also
- Agentic Adapters Overview — All agentic adapters
- DeepAgents Adapter — Multi-agent testing
- SDK Overview — ArtemisKit SDK documentation
- Test Matchers — Jest/Vitest matchers