Getting Started

Get ArtemisKit CLI running in under 5 minutes.

Prerequisites:

  • Node.js 18+ or Bun 1.0+
  • An API key for your LLM provider (OpenAI, Azure, Anthropic, etc.)

Install the CLI globally with npm:

npm install -g @artemiskit/cli

Or with other package managers:

# Bun
bun add -g @artemiskit/cli
# pnpm
pnpm add -g @artemiskit/cli
# Yarn
yarn global add @artemiskit/cli

Verify installation:

artemiskit --version
# or use the shorthand
akit --version

Next, configure an API key for your provider. For OpenAI:

export OPENAI_API_KEY="sk-your-api-key"

Or for other providers:

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# Azure OpenAI
export AZURE_OPENAI_API_KEY="your-key"
export AZURE_OPENAI_RESOURCE_NAME="your-resource"
export AZURE_OPENAI_DEPLOYMENT_NAME="your-deployment"
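
Exporting keys by hand works for one-off sessions; for day-to-day use you can keep them in a local .env file and load it into your shell. This is a generic shell pattern, not a documented ArtemisKit feature — whether the CLI reads .env files on its own isn't covered here:

```shell
# .env — keep this file out of version control
# OPENAI_API_KEY=sk-your-api-key

# Load every variable from .env into the current shell:
set -a      # auto-export all variables defined from here on
. ./.env    # source the file
set +a      # stop auto-exporting
```

After sourcing, the keys are available to any command you run in that shell, including artemiskit.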

Create a file called hello-world.yaml:

name: hello-world
description: My first ArtemisKit test
provider: openai
model: gpt-5
cases:
  - id: basic-math
    prompt: "What is 2 + 2?"
    expected:
      type: contains
      values:
        - "4"
      mode: any
  - id: greeting
    prompt: "Say hello in a friendly way"
    expected:
      type: contains
      values:
        - "hello"
        - "hi"
        - "hey"
      mode: any
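
Conceptually, a contains check with mode: any passes when at least one of the listed values appears as a substring of the model's response. A rough sketch of that logic in shell — illustration only, not ArtemisKit's actual implementation, and case-insensitive matching is an assumption here:

```shell
response="Hey there! How can I help you today?"

matched=false
for value in "hello" "hi" "hey"; do
  # grep -q: exit status only; -i: case-insensitive (assumed); -F: fixed string
  if printf '%s' "$response" | grep -qiF -- "$value"; then
    matched=true
    break
  fi
done

echo "$matched"   # → true ("hey" matches "Hey")
```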

Run the scenario:

artemiskit run hello-world.yaml

Or use the shorthand:

akit run hello-world.yaml

You’ll see output like:

Running scenario: hello-world
✓ basic-math (234ms)
✓ greeting (189ms)
Results: 2/2 passed (100%)

Generate an HTML report:

akit run hello-world.yaml --save

This creates files in artemis-output/:

  • run_manifest.json — Complete run data
  • A timestamped report file

For consistent settings across runs, create artemis.config.yaml:

provider: openai
model: gpt-5
providers:
  openai:
    apiKey: ${OPENAI_API_KEY}
    timeout: 60000
output:
  format: json
  dir: ./artemis-output

Now you can run scenarios without specifying provider/model each time:

akit run hello-world.yaml