Skip to content

Audit Logging User Guide

Track token usage and API costs for every Claude Code request with opt-in audit logging. This guide covers configuration, log analysis, and privacy considerations.

Overview

The audit logging feature captures metadata about each Claude API request—including token counts, model names, timestamps, and durations—without logging any prompt content or user data. This allows you to:

  • Track token consumption per request
  • Analyze costs by model
  • Identify usage patterns
  • Understand API performance

Key characteristics:

  • Opt-in (disabled by default; must be explicitly enabled)
  • Metadata-only logging (no prompts, responses, or user data)
  • Automatic log rotation (default: 10MB max)
  • JSONL format (newline-delimited JSON) for easy analysis with standard Unix tools
  • Zero performance impact when disabled

Enabling Audit Logging

Step 1: Edit Your Configuration

Add the audit logging section to your config file:

Global configuration (~/.claude/config.json):

json
{
  "audit_logging": {
    "enabled": true
  }
}

Project-level configuration (.clauderc in your project root):

json
{
  "audit_logging": {
    "enabled": true,
    "log_path": "./logs/audit.log",
    "max_size_mb": 5
  }
}

Step 2: Verify It's Working

Make a Claude API request. If audit logging is enabled, you should see a new log entry:

bash
# Check the audit log
cat ~/.claude/audit.log

# Pretty-print the latest entry
tail -1 ~/.claude/audit.log | jq .

Configuration Options

Available Fields

| Field | Type | Default | Range | Description |
|-------|------|---------|-------|-------------|
| enabled | boolean | false | - | Enable/disable audit logging. Disabled by default for privacy. |
| log_path | string | ~/.claude/audit.log | - | File path for the audit log. Supports ~ for the home directory and relative paths. |
| max_size_mb | number | 10 | 1–1000 | Maximum log file size (MB) before rotation. |
| keep_backups | number | 1 | 1 | Number of backup files to keep. (The MVP currently supports only 1.) |

Default Configuration

If you don't configure audit logging, these defaults are used:

json
{
  "audit_logging": {
    "enabled": false,
    "log_path": "~/.claude/audit.log",
    "max_size_mb": 10,
    "keep_backups": 1
  }
}

Configuration Priority

Audit logging config is loaded in this order (first match wins):

  1. Project-level (.clauderc in current directory)
  2. Global (~/.claude/config.json)
  3. Defaults (built-in values shown above)

This allows you to use different settings per project while maintaining global defaults.
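
The first-match-wins lookup can be sketched in Python (load_audit_config is a hypothetical helper for illustration, not part of Claude Code):

```python
import json
from pathlib import Path

def load_audit_config():
    """Return the first audit_logging section found, else built-in defaults.

    Search order: project .clauderc, then ~/.claude/config.json.
    """
    defaults = {"enabled": False, "log_path": "~/.claude/audit.log",
                "max_size_mb": 10, "keep_backups": 1}
    candidates = [Path(".clauderc"), Path.home() / ".claude" / "config.json"]
    for path in candidates:
        if path.is_file():
            data = json.loads(path.read_text())
            if "audit_logging" in data:
                return data["audit_logging"]  # first match wins
    return defaults
```

Note that the whole section wins, not individual keys: a project-level file that sets only enabled still shadows the global file entirely.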

Validation Constraints

The configuration module validates all values:

  • enabled: Must be a boolean (default: false)
  • log_path: Must be a valid string path
  • max_size_mb: Must be between 1 MB (minimum size) and 1000 MB (maximum size)
    • Values below 1 MB are reset to the default (10 MB) with a warning
    • Values above 1000 MB are capped at 1000 MB with a warning
  • keep_backups: Only 1 backup is supported in this MVP release (other values trigger a warning)
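
The clamping behavior for max_size_mb can be illustrated with a short Python sketch (validate_max_size_mb is a hypothetical name; the real checks live in the configuration module):

```python
import warnings

def validate_max_size_mb(value):
    """Clamp max_size_mb to the documented 1-1000 MB range."""
    if not isinstance(value, (int, float)) or value < 1:
        # values below 1 MB reset to the default with a warning
        warnings.warn("max_size_mb below 1 MB; resetting to default (10 MB)")
        return 10
    if value > 1000:
        # values above 1000 MB are capped with a warning
        warnings.warn("max_size_mb above 1000 MB; capping at 1000 MB")
        return 1000
    return value
```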

Log Format

Each audit log entry is a single-line JSON object (JSONL format). This format is compatible with jq, grep, and standard Unix tools.

Log Entry Schema

json
{
  "timestamp": "2026-02-04T23:30:15Z",
  "session_id": "uuid-string-here",
  "model": "claude-sonnet-4-5-20250929",
  "input_tokens": 1234,
  "output_tokens": 567,
  "cache_creation_tokens": 0,
  "cache_read_tokens": 0,
  "total_cost_usd": 0.00789,
  "duration_ms": 3200,
  "duration_api_ms": 2950,
  "models_used": ["claude-sonnet-4-5-20250929"],
  "warning": null
}

Field Descriptions

| Field | Type | Description |
|-------|------|-------------|
| timestamp | string | ISO 8601 UTC timestamp when the request was recorded. |
| session_id | string | Unique identifier for this Claude session. |
| model | string | Model name used (e.g., claude-opus-4-5-20251101). |
| input_tokens | number or null | Tokens sent to the API (null if not available). |
| output_tokens | number or null | Tokens generated by Claude (null if not available). |
| cache_creation_tokens | number or null | Tokens consumed creating a prompt cache (null if not applicable). |
| cache_read_tokens | number or null | Tokens read from a prompt cache (null if not applicable). |
| total_cost_usd | number or null | Estimated cost in USD (null if pricing data unavailable). |
| duration_ms | number or null | Total request duration in milliseconds (null if not available). |
| duration_api_ms | number or null | (Optional) API response time only, excluding local processing. |
| models_used | string[] | (Optional) List of models used in this request (for multi-model sessions). |
| warning | string or null | (Optional) Warning if data is incomplete (e.g., missing_usage_data). |

Example Entries

Successful request with complete data:

json
{"timestamp":"2026-02-04T10:00:00Z","session_id":"sess-abc123","model":"claude-sonnet-4-5-20250929","input_tokens":1234,"output_tokens":567,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.00789,"duration_ms":3200}

Request with complete data and API timing:

json
{"timestamp":"2026-02-04T10:15:00Z","session_id":"sess-def456","model":"claude-opus-4-5-20251101","input_tokens":2000,"output_tokens":800,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.01389,"duration_ms":4500,"duration_api_ms":4200}

Request with cache usage:

json
{"timestamp":"2026-02-04T10:30:00Z","session_id":"sess-ghi789","model":"claude-opus-4-5-20251101","input_tokens":5000,"output_tokens":1500,"cache_creation_tokens":2000,"cache_read_tokens":0,"total_cost_usd":0.02450,"duration_ms":8000,"duration_api_ms":7500}

Log Location

Default Path

~/.claude/audit.log

This expands to your home directory. For example:

  • macOS/Linux: /Users/yourname/.claude/audit.log
  • Windows: C:\Users\yourname\.claude\audit.log

Custom Location

Specify a custom path in your config:

json
{
  "audit_logging": {
    "enabled": true,
    "log_path": "/var/log/claude/audit.log"
  }
}

Path Resolution Rules

Paths are resolved in this order:

  1. Absolute paths are used as-is
    • Example: /var/log/claude/audit.log
  2. Tilde paths expand to home directory
    • Example: ~/my-logs/audit.log → /Users/yourname/my-logs/audit.log
  3. Relative paths are resolved relative to ~/.claude/
    • Example: logs/audit.log → ~/.claude/logs/audit.log

Directory Creation

The .claude directory and any parent directories are created automatically when the first audit log entry is written. You don't need to create them manually.

Rotation Behavior

When Rotation Happens

Log rotation is triggered when the current log file reaches or exceeds the max_size_mb threshold. This happens automatically before writing a new entry.

Example timeline:

10:00 - Write entry (file size: 1MB)
10:15 - Write entry (file size: 5MB)
10:30 - Write entry (file size: 10.2MB)
10:45 - Next write: file already at 10.2MB (over the 10MB limit)
       → ROTATION TRIGGERED
       → audit.log renamed to audit.log.backup
       → New audit.log created
       → Entry written to new audit.log

How Rotation Works

When the file reaches the size limit:

  1. Old backup file (if it exists) is deleted
  2. Current log file is renamed to audit.log.backup
  3. A new empty audit.log file is created
  4. The pending entry is written to the new file
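
The four steps above can be sketched in Python (rotate_if_needed is a hypothetical helper; the caller is assumed to create the fresh log and write the pending entry after it returns True):

```python
import os

def rotate_if_needed(log_path, max_size_mb):
    """Apply the documented rotation steps before a write.

    Returns True if rotation happened.
    """
    if not os.path.exists(log_path):
        return False
    if os.path.getsize(log_path) < max_size_mb * 1024 * 1024:
        return False
    backup = log_path + ".backup"
    if os.path.exists(backup):
        os.remove(backup)           # step 1: delete the old backup
    os.rename(log_path, backup)     # step 2: current log becomes the backup
    return True                     # steps 3-4 happen in the caller
```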

File layout:

~/.claude/
├── audit.log          ← Current log (newly rotated, fresh)
└── audit.log.backup   ← Previous log (rotated out)

Backup Naming

Backups use a simple .backup suffix. The MVP implementation maintains only one backup file. If you need multiple backups or time-based naming, you can:

  1. Manually copy the backup before rotation occurs
  2. Use OS-level log rotation tools like logrotate on Unix systems
  3. Archive old logs to another location

Size Configuration

Set the rotation size in your config:

json
{
  "audit_logging": {
    "max_size_mb": 20
  }
}

Common sizes:

  • 1 – Very frequent rotation (small projects)
  • 5 – Frequent rotation (active development)
  • 10 – Default (typical usage)
  • 50 – Large deployments
  • 500 – High-traffic services

Analyzing Logs

Sample Log Data

The examples below use this sample audit log (4 entries):

{"timestamp":"2026-02-04T10:00:00Z","session_id":"sess-1","model":"claude-sonnet-4-5-20250929","input_tokens":1234,"output_tokens":567,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.00789,"duration_ms":3200}
{"timestamp":"2026-02-04T10:15:00Z","session_id":"sess-2","model":"claude-sonnet-4-5-20250929","input_tokens":2000,"output_tokens":800,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.01389,"duration_ms":4500}
{"timestamp":"2026-02-04T10:30:00Z","session_id":"sess-3","model":"claude-opus-4-5-20251101","input_tokens":5000,"output_tokens":1500,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.02450,"duration_ms":8000}
{"timestamp":"2026-02-03T14:00:00Z","session_id":"sess-4","model":"claude-sonnet-4-5-20250929","input_tokens":800,"output_tokens":300,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.00389,"duration_ms":2100}

Example 1: Total Tokens Used Today

Count all input and output tokens from today's requests:

bash
# Calculate total tokens used today
grep $(date +%Y-%m-%d) ~/.claude/audit.log | \
  jq -s 'map(.input_tokens + .output_tokens) | add'

Output: 11101 (total input + output tokens for the sample data's three 2026-02-04 entries)

What it does:

  • grep filters log entries matching today's date
  • jq -s slurps all matching entries into an array
  • map() transforms each entry into token sum
  • add sums all the values
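
If jq isn't available, the same daily total can be computed with a short Python sketch (tokens_today is a hypothetical helper; pass the date explicitly or let it default to today in UTC):

```python
import json
from datetime import datetime, timezone

def tokens_today(log_path, today=None):
    """Sum input + output tokens for entries dated `today` (UTC, YYYY-MM-DD)."""
    today = today or datetime.now(timezone.utc).strftime("%Y-%m-%d")
    total = 0
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            if entry["timestamp"].startswith(today):
                # treat null token counts as zero
                total += (entry.get("input_tokens") or 0) + (entry.get("output_tokens") or 0)
    return total
```

For the sample data with today set to "2026-02-04", this returns 11101.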

Example 2: Tokens Grouped by Model

Break down token usage by model to understand which models consume the most tokens:

bash
# Group entries by model and summarize token usage per group
jq -s 'group_by(.model) | map({
  model: .[0].model,
  total_input: map(.input_tokens) | add,
  total_output: map(.output_tokens) | add,
  total_combined: map(.input_tokens + .output_tokens) | add,
  request_count: length
})' ~/.claude/audit.log

Output:

json
[
  {
    "model": "claude-sonnet-4-5-20250929",
    "total_input": 4034,
    "total_output": 1667,
    "total_combined": 5701,
    "request_count": 3
  },
  {
    "model": "claude-opus-4-5-20251101",
    "total_input": 5000,
    "total_output": 1500,
    "total_combined": 6500,
    "request_count": 1
  }
]

What it does:

  • group_by(.model) organizes entries by model
  • map() transforms each group into summary statistics
  • Calculates input, output, combined totals and request count
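
An equivalent per-model aggregation in Python, for environments without jq (tokens_by_model is a hypothetical helper):

```python
import json
from collections import defaultdict

def tokens_by_model(log_path):
    """Aggregate token totals and request counts per model."""
    stats = defaultdict(lambda: {"total_input": 0, "total_output": 0,
                                 "request_count": 0})
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            s = stats[entry["model"]]
            s["total_input"] += entry.get("input_tokens") or 0
            s["total_output"] += entry.get("output_tokens") or 0
            s["request_count"] += 1
    return dict(stats)
```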

Example 3: Average Tokens Per Request

Calculate the average tokens per API request:

bash
# Calculate average tokens per request
jq -s 'map(.input_tokens + .output_tokens) | (add / length)' ~/.claude/audit.log

Output: 3050.25 (average tokens per request: 12201 total tokens across 4 requests)

For a more detailed breakdown, use:

bash
jq -s 'map(.input_tokens + .output_tokens) | {
  total: add,
  count: length,
  average: (add / length)
}' ~/.claude/audit.log

What it does:

  • map() sums tokens for each request
  • Calculates total, count, and average

Example 4: Cost Analysis

Calculate total cost and average cost per request:

bash
jq -s '{
  total_cost: map(.total_cost_usd) | add,
  request_count: length,
  average_cost: (map(.total_cost_usd) | add) / length,
  cost_by_model: group_by(.model) | map({
    model: .[0].model,
    total_cost: map(.total_cost_usd) | add
  })
}' ~/.claude/audit.log

Output:

json
{
  "total_cost": 0.05017,
  "request_count": 4,
  "average_cost": 0.00125425,
  "cost_by_model": [
    {
      "model": "claude-sonnet-4-5-20250929",
      "total_cost": 0.02567
    },
    {
      "model": "claude-opus-4-5-20251101",
      "total_cost": 0.02450
    }
  ]
}

Example 5: Request Duration Analysis

Find slow requests and average response time:

bash
jq -s 'sort_by(.duration_ms) | {
  slowest: .[-1],
  fastest: .[0],
  average_duration_ms: (map(.duration_ms) | add / length),
  median_duration_ms: .[length / 2 | floor].duration_ms
}' ~/.claude/audit.log

Output (the slowest and fastest entries are abridged here; jq prints the full objects):

json
{
  "slowest": {
    "timestamp": "2026-02-04T10:30:00Z",
    "model": "claude-opus-4-5-20251101",
    "duration_ms": 8000
  },
  "fastest": {
    "timestamp": "2026-02-03T14:00:00Z",
    "model": "claude-sonnet-4-5-20250929",
    "duration_ms": 2100
  },
  "average_duration_ms": 4450,
  "median_duration_ms": 4500
}

Note that with an even number of entries, this median expression picks the upper of the two middle values.

Example 6: Requests from a Specific Session

Filter logs by session ID:

bash
# -c keeps each matching entry on a single line
jq -c 'select(.session_id == "sess-1")' ~/.claude/audit.log

Output:

json
{"timestamp":"2026-02-04T10:00:00Z","session_id":"sess-1","model":"claude-sonnet-4-5-20250929","input_tokens":1234,"output_tokens":567,"cache_creation_tokens":0,"cache_read_tokens":0,"total_cost_usd":0.00789,"duration_ms":3200}

Example 7: Cache Performance

Analyze cache creation and read patterns:

bash
jq -s '(map(.cache_creation_tokens) | add) as $created
  | (map(.cache_read_tokens) | add) as $read
  | {
      total_cache_created: $created,
      total_cache_read: $read,
      cache_enabled_requests: map(select(.cache_creation_tokens > 0 or .cache_read_tokens > 0)) | length,
      cache_efficiency: (if ($created + $read) == 0 then null
                         else $read / ($created + $read) end)
    }' ~/.claude/audit.log

cache_efficiency is reported as null when no cache tokens have been recorded, which avoids a division-by-zero error (the sample data above has no cache usage).

Cost Calculation

Pricing Formula

Estimated cost is calculated by the Claude API based on current pricing:

Total Cost = (Input Tokens × Input Price) + (Output Tokens × Output Price)

Example:

Request: 1,000 input tokens + 500 output tokens
Model: Claude Sonnet (as of Feb 2026)
- Input: $3 per 1M tokens
- Output: $15 per 1M tokens

Cost = (1,000 × $3/1M) + (500 × $15/1M)
     = $0.003 + $0.0075
     = $0.0105
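
The formula translates directly to code. This Python sketch uses the illustrative Sonnet prices from the example (actual prices vary by model and over time):

```python
def request_cost(input_tokens, output_tokens,
                 input_price_per_m=3.0, output_price_per_m=15.0):
    """Estimated cost in USD; prices are per 1M tokens."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000
```

request_cost(1000, 500) evaluates to 0.0105, matching the worked example.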

Accessing Cost Data

Each log entry includes total_cost_usd if available:

bash
# Sum total cost
jq -s 'map(.total_cost_usd) | add' ~/.claude/audit.log

# Cost per model
jq -s 'group_by(.model) | map({
  model: .[0].model,
  total_cost: map(.total_cost_usd) | add
})' ~/.claude/audit.log

Cache Impact on Cost

Prompt cache reads are significantly cheaper than regular input tokens. The audit log records both:

  • cache_creation_tokens: Tokens consumed creating cache (billed at input rate)
  • cache_read_tokens: Tokens read from cache (billed at 10% of input rate)

Example cache cost:

Request with cache reuse:
- Input tokens: 100
- Cache read tokens: 5,000
- Input price: $3 per 1M
- Cache read price: $0.30 per 1M

Cost = (100 × $3/1M) + (5,000 × $0.30/1M)
     = $0.0003 + $0.0015
     = $0.0018

Without cache, the same 5,100 input tokens would cost $0.0153 — 8.5x more.
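
The cache arithmetic can be checked the same way. This Python sketch hard-codes the illustrative rates above (cache reads at 10% of the input rate):

```python
def cached_request_cost(input_tokens, cache_read_tokens,
                        input_price_per_m=3.0, cache_read_price_per_m=0.30):
    """Estimated cost when part of the prompt is served from cache."""
    return (input_tokens * input_price_per_m
            + cache_read_tokens * cache_read_price_per_m) / 1_000_000
```

cached_request_cost(100, 5000) gives 0.0018, versus 0.0153 for the same 5,100 tokens billed entirely at the input rate.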

Privacy & Security

What Is Logged

Logged (metadata only):

  • timestamp – When the request was made
  • session_id – Which Claude session created the request
  • model – Which AI model processed it
  • input_tokens, output_tokens – Token counts (no message content)
  • cache_creation_tokens, cache_read_tokens – Cache usage stats
  • total_cost_usd – Estimated cost
  • duration_ms – Request timing
  • models_used – List of models (for multi-model sessions)

What Is NOT Logged

Never logged:

  • Prompt text
  • Response text
  • System messages or instructions
  • User credentials or API keys
  • Session tokens or authentication data
  • Conversation history
  • Any other user data

Opt-In by Default

Audit logging is disabled by default for privacy. It requires explicit opt-in by setting enabled: true in config.

File Permissions

Audit log files are created with restrictive permissions (mode 0600):

bash
ls -la ~/.claude/audit.log
# Output: -rw-------  1 user  user  45678 Feb  4 10:45 /Users/user/.claude/audit.log

This means:

  • Only the file owner can read the log
  • Group and others have no access
  • Prevents accidental information leakage
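
On POSIX systems, a file can be created with mode 0600 from the start rather than chmod-ed afterwards. A minimal sketch (open_audit_log is a hypothetical helper):

```python
import os

def open_audit_log(path):
    """Open the audit log for appending, creating it with mode 0600."""
    # the mode argument applies only when the file is created
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    return os.fdopen(fd, "a")
```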

Sharing and Backup

When sharing logs or backing them up:

  1. Review before sharing – Even though logs contain only metadata, they reveal usage patterns
  2. Restrict access – Keep files with 0600 permissions
  3. Rotate before archival – Archive and delete old logs to avoid accumulation
  4. Use encryption – For sensitive environments, encrypt backup files

The audit logging feature is implemented in modules under src/audit/.

Troubleshooting

Audit log not being created

Check 1: Is audit logging enabled?

bash
cat ~/.claude/config.json | jq '.audit_logging.enabled'

If it returns false or null, enable it:

json
{
  "audit_logging": {
    "enabled": true
  }
}

Check 2: Are you making requests?

The log file is created when the first entry is written. Make sure you're making Claude API requests after enabling audit logging.

Check 3: Check permissions

bash
ls -la ~/.claude/
# Should show you can write to this directory

Log file is empty or truncated

Possible causes:

  1. No requests made – Make some Claude API requests
  2. Rotation just occurred – The old log was rotated to .backup. Check:
    bash
    ls -la ~/.claude/audit.log*
  3. Permission issue – Verify write permissions:
    bash
    touch ~/.claude/test.log && rm ~/.claude/test.log

Analyzing logs fails with "jq: parse error"

This usually means corrupted JSON in a log entry. Check for lines with issues:

bash
# Show all lines and find the problematic one
jq . ~/.claude/audit.log 2>&1 | grep -A 1 "parse error"

# Show the raw line that failed
sed -n '3p' ~/.claude/audit.log  # Line numbers from jq output

If you find a corrupted entry, you can:

  • Remove the line manually (use a text editor)
  • Or restart the log by rotating it:
    bash
    mv ~/.claude/audit.log ~/.claude/audit.log.old
    # The next request will create a fresh log

Cannot write to audit log (permission denied)

Error message:

ERROR: Permission denied writing to audit log

Fix:

bash
# Ensure you own the directory
sudo chown -R $(whoami) ~/.claude/

# Ensure correct permissions
chmod 700 ~/.claude/
chmod 600 ~/.claude/audit.log ~/.claude/audit.log.backup 2>/dev/null

Disk space issues

Error message:

ERROR: Cannot write audit log - disk full

Solutions:

  1. Archive and delete old logs:

    bash
    mv ~/.claude/audit.log ~/audit-logs/audit-$(date +%Y-%m-%d).log
    # (or similar archive strategy)
  2. Reduce max file size:

    json
    {
      "audit_logging": {
        "max_size_mb": 5
      }
    }
  3. Use logrotate to manage automatically (Unix/Linux):

    bash
    # Create /etc/logrotate.d/claude-audit with this content.
    # logrotate does not expand ~, so spell out the absolute path:
    /home/yourname/.claude/audit.log {
      daily
      rotate 7
      compress
      missingok
    }

Next Steps

  • Use the analysis examples to explore your usage patterns
  • Set up a monitoring script to track costs over time
  • Consider archiving old logs to manage disk space
  • Review privacy implications before sharing logs

Built with Claude Code