Providers
Providers are adapters that connect ArtemisKit to different LLM services. Each provider implements a common interface, allowing you to switch between models without changing your test scenarios.
Supported Providers
| Provider | Config Value | Description |
|---|---|---|
| OpenAI | openai | GPT-4, GPT-3.5, and newer models |
| Azure OpenAI | azure-openai | OpenAI models via Azure |
| Anthropic | anthropic | Claude models |
| Vercel AI | vercel-ai | Vercel AI SDK abstraction |
| Google | google | Gemini models |
| Mistral | mistral | Mistral AI models |
| Cohere | cohere | Cohere models |
| Ollama | ollama | Local models via Ollama |
| LangChain | langchain | LangChain runnables |
| DeepAgents | deepagents | DeepAgents systems |
| Custom | custom | Custom adapters |
Configuration
Providers can be configured at multiple levels:
- Environment variables — API keys and defaults
- Config file — `artemis.config.yaml`
- Scenario file — Per-scenario settings
- Test case — Per-case overrides
Environment Variables
```
OPENAI_API_KEY=sk-...
OPENAI_ORG_ID=org-...                        # optional

AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_RESOURCE_NAME=my-resource
AZURE_OPENAI_DEPLOYMENT_NAME=my-deployment
AZURE_OPENAI_API_VERSION=2024-02-15-preview

ANTHROPIC_API_KEY=sk-ant-...
```

Config File
```yaml
provider: openai
model: gpt-4

providers:
  openai:
    apiKey: ${OPENAI_API_KEY}
    timeout: 60000
    maxRetries: 3

  anthropic:
    apiKey: ${ANTHROPIC_API_KEY}

  azure-openai:
    apiKey: ${AZURE_OPENAI_API_KEY}
    resourceName: my-resource
    deploymentName: gpt-4-deployment
    apiVersion: 2024-02-15-preview
```

Scenario File
```yaml
name: my-scenario
provider: openai
model: gpt-4

providerConfig:
  timeout: 120000
  organization: org-123

cases:
  - id: test-1
    prompt: "Hello"
    expected:
      type: contains
      values: ["hello"]
```

Provider-Specific Configuration
OpenAI
```yaml
provider: openai
model: gpt-4

providerConfig:
  apiKey: ${OPENAI_API_KEY}
  baseUrl: https://api.openai.com/v1   # optional
  organization: org-...                # optional
  timeout: 60000
  maxRetries: 3
```

Supported models: gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-3.5-turbo, and others
Azure OpenAI
```yaml
provider: azure-openai
model: gpt-4

providerConfig:
  apiKey: ${AZURE_OPENAI_API_KEY}
  resourceName: my-azure-resource
  deploymentName: my-gpt4-deployment
  apiVersion: 2024-02-15-preview
  embeddingDeploymentName: my-embedding-deployment   # optional
  modelFamily: gpt-4                                 # optional, for parameter detection
```

Anthropic
```yaml
provider: anthropic
model: claude-3-5-sonnet-20241022

providerConfig:
  apiKey: ${ANTHROPIC_API_KEY}
  timeout: 60000
  maxRetries: 3
```

Supported models: claude-3-5-sonnet-20241022, claude-3-opus, claude-3-sonnet, claude-3-haiku
Vercel AI
```yaml
provider: vercel-ai
model: gpt-4

providerConfig:
  underlyingProvider: openai   # openai | azure | anthropic | google | mistral
  apiKey: ${OPENAI_API_KEY}
```

The Vercel AI provider wraps other providers using the Vercel AI SDK.
Ollama (Local Models)
```yaml
provider: ollama
model: llama2

providerConfig:
  baseUrl: http://localhost:11434
  timeout: 120000
```

LangChain
```yaml
provider: langchain
model: my-chain

providerConfig:
  name: customer-support-chain
  runnableType: chain   # chain | agent | llm | runnable
  inputKey: question
  outputKey: answer
```

DeepAgents
```yaml
provider: deepagents
model: my-agent

providerConfig:
  name: research-agent
  captureTraces: true
  captureMessages: true
  executionTimeout: 300000
```

Provider Interface
All providers implement the ModelClient interface:
```typescript
interface ModelClient {
  readonly provider: string;

  generate(options: GenerateOptions): Promise<GenerateResult>;
  stream?(options: GenerateOptions, onChunk: (chunk: string) => void): AsyncIterable<string>;
  embed?(text: string, model?: string): Promise<number[]>;
  capabilities(): Promise<ModelCapabilities>;
  close?(): Promise<void>;
}
```

GenerateOptions
```typescript
interface GenerateOptions {
  prompt: string | ChatMessage[];
  model?: string;
  maxTokens?: number;
  temperature?: number;
  topP?: number;
  seed?: number;
  stop?: string[];
  functions?: FunctionDefinition[];
  tools?: ToolDefinition[];
  responseFormat?: { type: 'text' | 'json_object' };
}
```
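Because prompt accepts either a plain string or a list of chat messages, a single call site can cover both styles. The snippet below is a minimal sketch using only fields defined in GenerateOptions; it assumes client is an existing ModelClient instance and that ChatMessage carries role and content fields, which this page does not spell out.

```typescript
// Illustrative call: assumes `client` is a ModelClient and that ChatMessage
// has `role` and `content` fields (an assumption, not documented here).
const result = await client.generate({
  prompt: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize this ticket in one sentence.' },
  ],
  model: 'gpt-4o',
  temperature: 0,
  maxTokens: 256,
  responseFormat: { type: 'json_object' },
});

console.log(result.text, result.tokens);
```

GenerateResult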
```typescript
interface GenerateResult {
  id: string;
  model: string;
  text: string;
  tokens: TokenUsage;
  latencyMs: number;
  finishReason?: 'stop' | 'length' | 'function_call' | 'tool_calls' | 'content_filter';
  functionCall?: { name: string; arguments: string };
  toolCalls?: ToolCall[];
}
```
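The custom config value in the table above expects an adapter that satisfies this interface. Below is a minimal, hypothetical sketch: the EchoClient name, the import path, and the exact ChatMessage and TokenUsage shapes are assumptions rather than documented API. It simply echoes the prompt back, which can be useful for wiring up scenarios without calling a real model.

```typescript
// Hypothetical custom adapter. The import path and the ChatMessage/TokenUsage
// shapes are assumptions; only the ModelClient contract above is documented.
import type {
  ModelClient,
  GenerateOptions,
  GenerateResult,
  ModelCapabilities,
} from 'artemiskit';

class EchoClient implements ModelClient {
  readonly provider = 'custom';

  async generate(options: GenerateOptions): Promise<GenerateResult> {
    const started = Date.now();
    // prompt may be a plain string or a list of chat messages
    const text =
      typeof options.prompt === 'string'
        ? options.prompt
        : options.prompt.map((m) => m.content).join('\n'); // assumes ChatMessage has `content`

    return {
      id: `echo-${started}`,
      model: options.model ?? 'echo-1',
      text,
      tokens: { prompt: 0, completion: 0, total: 0 }, // TokenUsage shape assumed
      latencyMs: Date.now() - started,
      finishReason: 'stop',
    };
  }

  async capabilities(): Promise<ModelCapabilities> {
    // Mirrors the fields shown in the capabilities example below
    return {
      streaming: false,
      functionCalling: false,
      toolUse: false,
      maxContext: 8192,
      vision: false,
      jsonMode: false,
    };
  }
}
```

How the adapter gets registered under the custom config value is outside the scope of this sketch; the documented part is only the interface contract above.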
Model Capabilities
Query provider capabilities at runtime:
```typescript
const capabilities = await client.capabilities();
// {
//   streaming: true,
//   functionCalling: true,
//   toolUse: true,
//   maxContext: 128000,
//   vision: true,
//   jsonMode: true
// }
```
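Since stream, embed, and close are optional on ModelClient, a capability check can decide at runtime which paths a scenario exercises. The snippet below is illustrative only and assumes client is an already constructed ModelClient instance.

```typescript
// Illustrative only: gate streaming checks on the reported capabilities.
// Assumes `client` is an existing ModelClient instance.
const caps = await client.capabilities();

if (caps.streaming && client.stream) {
  // stream() is optional, so guard before calling it
  for await (const chunk of client.stream({ prompt: 'Hello' }, () => {})) {
    process.stdout.write(chunk);
  }
} else {
  console.warn(`${client.provider} does not support streaming; falling back to generate()`);
  const result = await client.generate({ prompt: 'Hello' });
  console.log(result.text);
}
```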
Configuration Precedence
When multiple configuration sources specify the same setting, ArtemisKit uses this precedence (highest to lowest):
- CLI flags — `--provider`, `--model`
- Test case — Case-level `provider`, `model`
- Scenario file — Scenario-level settings
- Config file — `artemis.config.yaml`
- Environment variables — API keys, defaults
- Defaults — Built-in fallbacks
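Conceptually, resolution behaves like a shallow merge in which higher-precedence sources overwrite lower ones. The helper below is a hypothetical illustration of that ordering, not ArtemisKit's actual resolver.

```typescript
// Hypothetical illustration of the precedence order above; not the real resolver.
type ProviderSettings = Record<string, unknown>;

function resolveSettings(
  defaults: ProviderSettings,
  envVars: ProviderSettings,
  configFile: ProviderSettings,
  scenario: ProviderSettings,
  testCase: ProviderSettings,
  cliFlags: ProviderSettings,
): ProviderSettings {
  // Spread from lowest to highest precedence so CLI flags win last.
  return { ...defaults, ...envVars, ...configFile, ...scenario, ...testCase, ...cliFlags };
}

// e.g. a case-level model overrides the scenario, which overrides the config file
const resolved = resolveSettings(
  { provider: 'openai', model: 'gpt-3.5-turbo' }, // built-in defaults
  {},                                             // environment variables
  { model: 'gpt-4' },                             // artemis.config.yaml
  { provider: 'anthropic' },                      // scenario file
  { model: 'claude-3-haiku' },                    // test case
  {},                                             // CLI flags
);
// resolved: { provider: 'anthropic', model: 'claude-3-haiku' }
```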
See Also
- OpenAI Provider — Detailed OpenAI configuration
- Azure Provider — Azure OpenAI setup
- Anthropic Provider — Claude configuration
- Scenarios — Using providers in scenarios