ArtemisKit provides specialized adapters for testing agentic AI systems built with LangChain and DeepAgents.
LangChain Adapter
Test chains, agents, and runnables built with LangChain.js. Supports LCEL, ReAct agents, RAG chains, and more.
DeepAgents Adapter
Test multi-agent systems built with DeepAgents. Supports sequential, parallel, and hierarchical workflows.
```bash
bun add @artemiskit/adapter-langchain
# or
npm install @artemiskit/adapter-langchain
```

```bash
bun add @artemiskit/adapter-deepagents
# or
npm install @artemiskit/adapter-deepagents
```

| Feature | LangChain | DeepAgents |
|---|---|---|
| System Type | Chains, Agents, Runnables | Multi-agent teams, hierarchies |
| Execution | Single-agent or chain flow | Multi-agent collaboration |
| Tool Tracking | Intermediate steps | Cross-agent tool usage |
| Communication | Chain input/output | Inter-agent messages |
| Streaming | Via stream() method | Via stream() method |
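Both adapters expose streaming through a `stream()` method, as noted in the table above. A minimal sketch of consuming such a stream, using a mocked adapter in place of a real one (the `StreamChunk` shape and the mock are assumptions for illustration, not the actual ArtemisKit types):

```typescript
// Hypothetical chunk shape -- the real adapter's stream types may differ.
interface StreamChunk {
  delta: string;
}

// Mock standing in for an adapter such as createLangChainAdapter(...);
// a real adapter would wrap an actual LangChain runnable.
const mockAdapter = {
  async *stream(_input: { prompt: string }): AsyncGenerator<StreamChunk> {
    for (const word of ['Weather', 'is', 'sunny']) {
      yield { delta: word + ' ' };
    }
  },
};

// Accumulate streamed deltas into the final response text.
async function collect(): Promise<string> {
  let text = '';
  for await (const chunk of mockAdapter.stream({ prompt: 'weather?' })) {
    text += chunk.delta;
  }
  return text.trim();
}

collect().then((text) => console.log(text)); // prints "Weather is sunny"
```

The same `for await` loop works for either adapter, since both stream via the same method.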
Both adapters capture tool call information for verification:
```typescript
import { createLangChainAdapter } from '@artemiskit/adapter-langchain';

const adapter = createLangChainAdapter(myAgent, {
  name: 'tool-agent',
  captureIntermediateSteps: true,
});

const result = await adapter.generate({
  prompt: 'Search for weather in New York',
});

// Verify tools were used correctly
const metadata = result.raw.metadata;
expect(metadata.toolsUsed).toContain('web_search');
expect(metadata.totalToolCalls).toBeGreaterThan(0);
```

```typescript
import { createDeepAgentsAdapter } from '@artemiskit/adapter-deepagents';

const adapter = createDeepAgentsAdapter(myTeam, {
  name: 'content-team',
  captureTraces: true,
  captureMessages: true,
});

const result = await adapter.generate({
  prompt: 'Research and write an article',
});

// Verify agent collaboration
const metadata = result.raw.metadata;
expect(metadata.agentsInvolved).toEqual(['researcher', 'writer']);
expect(metadata.totalMessages).toBeGreaterThan(0);
```

Create YAML scenarios for agentic testing:
```yaml
name: agent-quality-test
description: Test agent response quality

cases:
  - id: tool-usage
    prompt: "Calculate 25 * 4 using the calculator"
    expected:
      type: contains
      values: ["100"]
    tags: [tool-test]

  - id: multi-step
    prompt: "Search for today's weather and summarize it"
    expected:
      type: llm_grader
      criteria: "Response includes current weather information"
      minScore: 0.8
    tags: [integration]
```

Run with ArtemisKit:
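For intuition, a `contains` expectation like the one in the `tool-usage` case amounts to a substring check over the agent's output. A minimal sketch, assuming (as a guess, not the documented behavior) that matching is case-sensitive and every listed value must appear:

```typescript
// Hypothetical evaluator for a `contains` expectation.
// Assumption: all values must appear verbatim in the output;
// the real ArtemisKit grader may normalize or match differently.
function checkContains(output: string, values: string[]): boolean {
  return values.every((v) => output.includes(v));
}

console.log(checkContains('25 * 4 = 100', ['100'])); // true
console.log(checkContains('I cannot compute that', ['100'])); // false
```

The `llm_grader` type, by contrast, scores the response against the `criteria` text with a grading model and compares the score to `minScore`.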
```typescript
import { ArtemisKit } from '@artemiskit/sdk';
import { createLangChainAdapter } from '@artemiskit/adapter-langchain';

const adapter = createLangChainAdapter(myAgent);
const kit = new ArtemisKit({
  adapter,
  project: 'agent-testing',
});

const results = await kit.run({
  scenario: './agentic-test.yaml',
});

console.log(`Pass rate: ${results.manifest.metrics.pass_rate * 100}%`);
```

Both adapters provide rich execution metadata:
| Field | Description |
|---|---|
| `agentsInvolved` | List of agents that participated |
| `totalAgentCalls` | Number of agent invocations |
| `totalToolCalls` | Total tool/function calls |
| `toolsUsed` | Unique tools that were used |
| `traces` | Full execution trace (if enabled) |
| `messages` | Inter-agent messages (if enabled) |
| `executionTimeMs` | Total execution time |
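With `captureTraces` enabled, aggregate fields like `toolsUsed` and `totalToolCalls` can be derived from the trace itself. A sketch under an assumed trace-entry shape (the real `traces` structure is not shown here and may differ):

```typescript
// Assumed trace entry shape -- illustrative only, not the ArtemisKit type.
interface TraceEntry {
  agent: string;
  tool?: string;
}

// Derive aggregate metadata fields from a list of trace entries.
function summarize(traces: TraceEntry[]) {
  const toolCalls = traces.filter((t) => t.tool !== undefined);
  return {
    agentsInvolved: [...new Set(traces.map((t) => t.agent))],
    totalToolCalls: toolCalls.length,
    toolsUsed: [...new Set(toolCalls.map((t) => t.tool as string))],
  };
}

const summary = summarize([
  { agent: 'researcher', tool: 'web_search' },
  { agent: 'researcher', tool: 'web_search' },
  { agent: 'writer' },
]);
console.log(summary.toolsUsed); // ['web_search']
```

This mirrors the relationship between the fields above: `toolsUsed` deduplicates what `totalToolCalls` counts.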
Tip: enable `captureTraces: true` when debugging, so the full execution trace is available in `result.raw.metadata.traces`.