Providers

Providers are adapters that connect ArtemisKit to different LLM services. Each provider implements a common interface, allowing you to switch between models without changing your test scenarios.

| Provider | Config Value | Description |
| --- | --- | --- |
| OpenAI | openai | GPT-4, GPT-3.5, and newer models |
| Azure OpenAI | azure-openai | OpenAI models via Azure |
| Anthropic | anthropic | Claude models |
| Vercel AI | vercel-ai | Vercel AI SDK abstraction |
| Google | google | Gemini models |
| Mistral | mistral | Mistral AI models |
| Cohere | cohere | Cohere models |
| Ollama | ollama | Local models via Ollama |
| LangChain | langchain | LangChain runnables |
| DeepAgents | deepagents | DeepAgents systems |
| Custom | custom | Custom adapters |

Providers can be configured at multiple levels:

  1. Environment variables — API keys and defaults
  2. Config file — artemis.config.yaml
  3. Scenario file — Per-scenario settings
  4. Test case — Per-case overrides
Environment variables:

OPENAI_API_KEY=sk-...
OPENAI_ORG_ID=org-... # optional

artemis.config.yaml:

provider: openai
model: gpt-4
providers:
  openai:
    apiKey: ${OPENAI_API_KEY}
    timeout: 60000
    maxRetries: 3
  anthropic:
    apiKey: ${ANTHROPIC_API_KEY}
  azure-openai:
    apiKey: ${AZURE_OPENAI_API_KEY}
    resourceName: my-resource
    deploymentName: gpt-4-deployment
    apiVersion: 2024-02-15-preview

Scenario file:

name: my-scenario
provider: openai
model: gpt-4
providerConfig:
  timeout: 120000
  organization: org-123
cases:
  - id: test-1
    prompt: "Hello"
    expected:
      type: contains
      values: ["hello"]

OpenAI

provider: openai
model: gpt-4
providerConfig:
  apiKey: ${OPENAI_API_KEY}
  baseUrl: https://api.openai.com/v1 # optional
  organization: org-... # optional
  timeout: 60000
  maxRetries: 3

Supported models: gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-3.5-turbo, and others

Azure OpenAI

provider: azure-openai
model: gpt-4
providerConfig:
  apiKey: ${AZURE_OPENAI_API_KEY}
  resourceName: my-azure-resource
  deploymentName: my-gpt4-deployment
  apiVersion: 2024-02-15-preview
  embeddingDeploymentName: my-embedding-deployment # optional
  modelFamily: gpt-4 # optional, for parameter detection

Anthropic

provider: anthropic
model: claude-3-5-sonnet-20241022
providerConfig:
  apiKey: ${ANTHROPIC_API_KEY}
  timeout: 60000
  maxRetries: 3

Supported models: claude-3-5-sonnet-20241022, claude-3-opus, claude-3-sonnet, claude-3-haiku

Vercel AI

provider: vercel-ai
model: gpt-4
providerConfig:
  underlyingProvider: openai # openai | azure | anthropic | google | mistral
  apiKey: ${OPENAI_API_KEY}

The Vercel AI provider wraps other providers using the Vercel AI SDK.

Ollama

provider: ollama
model: llama2
providerConfig:
  baseUrl: http://localhost:11434
  timeout: 120000

LangChain

provider: langchain
model: my-chain
providerConfig:
  name: customer-support-chain
  runnableType: chain # chain | agent | llm | runnable
  inputKey: question
  outputKey: answer

DeepAgents

provider: deepagents
model: my-agent
providerConfig:
  name: research-agent
  captureTraces: true
  captureMessages: true
  executionTimeout: 300000

All providers implement the ModelClient interface:

interface ModelClient {
  readonly provider: string;
  generate(options: GenerateOptions): Promise<GenerateResult>;
  stream?(options: GenerateOptions, onChunk: (chunk: string) => void): AsyncIterable<string>;
  embed?(text: string, model?: string): Promise<number[]>;
  capabilities(): Promise<ModelCapabilities>;
  close?(): Promise<void>;
}

interface GenerateOptions {
  prompt: string | ChatMessage[];
  model?: string;
  maxTokens?: number;
  temperature?: number;
  topP?: number;
  seed?: number;
  stop?: string[];
  functions?: FunctionDefinition[];
  tools?: ToolDefinition[];
  responseFormat?: { type: 'text' | 'json_object' };
}

interface GenerateResult {
  id: string;
  model: string;
  text: string;
  tokens: TokenUsage;
  latencyMs: number;
  finishReason?: 'stop' | 'length' | 'function_call' | 'tool_calls' | 'content_filter';
  functionCall?: { name: string; arguments: string };
  toolCalls?: ToolCall[];
}
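
Because every adapter exposes this same surface, test code can drive any provider without provider-specific branches. A minimal sketch of calling a client directly; the function name and option values below are illustrative, not part of ArtemisKit's API:

async function runSmokeTest(client: ModelClient): Promise<void> {
  // Any configured provider can stand in for `client` here.
  const result = await client.generate({
    prompt: 'Hello',
    model: 'gpt-4',
    temperature: 0,
    maxTokens: 64,
  });
  console.log(result.text, result.finishReason, `${result.latencyMs}ms`);
  // close() is optional on the interface, so call it only if it exists.
  await client.close?.();
}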

Query provider capabilities at runtime:

const capabilities = await client.capabilities();
// {
//   streaming: true,
//   functionCalling: true,
//   toolUse: true,
//   maxContext: 128000,
//   vision: true,
//   jsonMode: true
// }
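
Capability flags are useful for guarding the optional parts of the interface, such as stream() or embed(). A sketch, assuming the ModelCapabilities fields shown in the comment above; the helper itself is illustrative:

async function generateText(client: ModelClient, prompt: string): Promise<string> {
  const caps = await client.capabilities();
  // stream() is optional, so check both the capability flag and the method.
  if (caps.streaming && client.stream) {
    let text = '';
    // Chunks are also delivered via the callback; here we simply collect the
    // yielded chunks into one string.
    for await (const chunk of client.stream({ prompt }, () => {})) {
      text += chunk;
    }
    return text;
  }
  // Fall back to a single non-streaming completion.
  const result = await client.generate({ prompt });
  return result.text;
}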

When multiple configuration sources specify the same setting, ArtemisKit uses this precedence (highest to lowest):

  1. CLI flags — --provider, --model
  2. Test case — Case-level provider, model
  3. Scenario file — Scenario-level settings
  4. Config file — artemis.config.yaml
  5. Environment variables — API keys, defaults
  6. Defaults — Built-in fallbacks
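
In other words, the first source in the list above that defines a setting wins. The sketch below only illustrates that first-defined-wins rule; it is not ArtemisKit's actual resolver, and the names mirror the list for readability:

function resolveSetting<T>(sources: {
  cliFlag?: T;       // 1. CLI flags
  testCase?: T;      // 2. Test case
  scenario?: T;      // 3. Scenario file
  configFile?: T;    // 4. artemis.config.yaml
  envVar?: T;        // 5. Environment variables
  defaultValue: T;   // 6. Built-in fallback
}): T {
  return (
    sources.cliFlag ??
    sources.testCase ??
    sources.scenario ??
    sources.configFile ??
    sources.envVar ??
    sources.defaultValue
  );
}

// Example: a case-level model overrides the scenario and the config file.
const model = resolveSetting({
  testCase: 'gpt-4o',
  scenario: 'gpt-4',
  configFile: 'gpt-4',
  defaultValue: 'gpt-3.5-turbo',
});
// => 'gpt-4o'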