Models

Configure language models for your agents

AgentKit supports multiple language model providers through a unified interface. You can switch between OpenAI, Anthropic (coming soon), and other providers without changing your agent code.

Quick Start

// Create a model with OpenAI
model := model.Model("gpt-4o").
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

// Use with an agent
agent := agent.New().
    SetModel(model).
    SetSystemPrompt("You are a helpful assistant.")

OpenAI Models

AgentKit supports all OpenAI GPT models, including function calling and streaming:

// Different OpenAI models
gpt4o := model.Model("gpt-4o").             // Latest GPT-4 Omni
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

gpt4oMini := model.Model("gpt-4o-mini").    // Faster, cheaper option
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

gpt35Turbo := model.Model("gpt-3.5-turbo"). // Cost-effective choice
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

Configuration Options

model := model.Model("gpt-4o").
    SetAPIKey("your-api-key").
    SetBaseURL("https://api.openai.com/v1").  // Custom endpoint
    SetOrganization("your-org-id").           // Organization ID
    SetTemperature(0.7).                      // Response creativity
    SetMaxTokens(1000)                        // Response length limit

Other Providers

Anthropic (Coming Soon)

// Future support for Claude models
model := model.Model("claude-3-sonnet").
    SetAPIKey(os.Getenv("ANTHROPIC_API_KEY"))

Custom Providers

Integrate with your own model endpoints:

// Custom OpenAI-compatible endpoint
model := model.Model("custom-model").
    SetAPIKey("your-api-key").
    SetBaseURL("https://your-endpoint.com/v1")

Model Parameters

Temperature

Controls response creativity and randomness:

Value     Use Case                          Behavior
0.0-0.2   Code generation, factual Q&A      Consistent, deterministic
0.3-0.5   Technical writing, analysis       Balanced, reliable
0.6-0.8   General conversation              Natural, varied
0.9-1.0   Creative writing, brainstorming   Highly creative

// For consistent technical responses
preciseModel := model.Model("gpt-4o").
    SetTemperature(0.1)

// For creative content generation
creativeModel := model.Model("gpt-4o").
    SetTemperature(0.8)

Max Tokens

Limits the maximum length of a response, measured in tokens:

// Short responses
model.SetMaxTokens(500)

// Detailed explanations
model.SetMaxTokens(2000)

// Long-form content
model.SetMaxTokens(4000)

Advanced Features

Streaming Responses

Stream responses in real time as they are generated:

// Create an agent with a streaming-capable model
streamModel := model.Model("gpt-4o").
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

streamAgent := agent.New().
    SetModel(streamModel)

// Create a session and stream the response
sess := session.New(streamAgent)
stream, err := sess.RunStreamed(ctx, []agent.ChatMessage{agent.NewUserMessage("Tell me a story")}, nil)
if err != nil {
    log.Fatal(err)
}
for event, err := range stream.StreamEvents() {
    if err != nil {
        log.Fatal(err)
    }
    if event.IsTextDelta() {
        fmt.Print(event.GetContent()) // Print each chunk as it arrives
    }
}
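
If you also need the complete text once streaming finishes, a variant of the same loop can accumulate the deltas while printing them. This is a sketch; strings.Builder is from the Go standard library, and the summary line at the end is illustrative:

// Accumulate the streamed text while printing each chunk
var full strings.Builder
for event, err := range stream.StreamEvents() {
    if err != nil {
        log.Fatal(err)
    }
    if event.IsTextDelta() {
        fmt.Print(event.GetContent())
        full.WriteString(event.GetContent())
    }
}
fmt.Printf("\n\nReceived %d characters in total\n", full.Len())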

Function Calling

Models can call tools when available:

// Create a model that supports function calling
toolModel := model.Model("gpt-4o").
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

// Add tools to the agent (the model uses them automatically)
toolAgent := agent.New().
    SetModel(toolModel).
    AddTool(weatherTool).
    AddTool(calculatorTool)
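
During a run the model decides on its own when to invoke the attached tools; you call the session exactly as you would without them. The sketch below reuses the session API shown later on this page and assumes weatherTool is defined elsewhere; the question and the final print are illustrative:

// Ask a question that should trigger the weather tool
sess := session.New(toolAgent)
response, err := sess.Run(ctx, []agent.ChatMessage{agent.NewUserMessage("What's the weather in Paris today?")}, nil)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response) // Final answer, informed by the tool results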

Best Practices

Model Selection

Choose the right model for your use case:

// Cost-effective for simple tasks
basicModel := model.Model("gpt-3.5-turbo")

// Best performance for complex reasoning
reasoningModel := model.Model("gpt-4o")

// Fast and cheap for basic interactions
quickModel := model.Model("gpt-4o-mini")

Error Handling

Handle API errors gracefully:

model := model.Model("gpt-4o").
    SetAPIKey(os.Getenv("OPENAI_API_KEY"))

agent := agent.New().
    SetModel(model)

session := session.New(agent)
response, err := session.Run(ctx, []agent.ChatMessage{agent.NewUserMessage("Hello")}, nil)
if err != nil {
    // Handle different error types
    switch {
    case errors.Is(err, model.ErrRateLimited):
        // Wait and retry
    case errors.Is(err, model.ErrInvalidKey):
        // Check API key
    default:
        log.Printf("Model error: %v", err)
    }
}
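
For transient failures such as rate limiting, a simple retry loop with exponential backoff is often enough. The sketch below builds on the example above (it reuses sess and model.ErrRateLimited); the retry count and backoff schedule are illustrative choices, not something AgentKit provides:

// Retry the run a few times when the provider rate-limits us
var lastErr error
for attempt := 0; attempt < 3; attempt++ {
    response, err := sess.Run(ctx, []agent.ChatMessage{agent.NewUserMessage("Hello")}, nil)
    if err == nil {
        fmt.Println(response)
        lastErr = nil
        break
    }
    lastErr = err
    if !errors.Is(err, model.ErrRateLimited) {
        break // only retry rate-limit errors
    }
    time.Sleep(time.Duration(1<<attempt) * time.Second) // 1s, 2s, 4s
}
if lastErr != nil {
    log.Printf("giving up after retries: %v", lastErr)
}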

Configuration Management

Manage model settings centrally:

type Config struct {
    Model       string  `json:"model"`
    Temperature float64 `json:"temperature"`
    MaxTokens   int     `json:"max_tokens"`
}

func NewModelFromConfig(cfg Config) *model.OpenAIModel {
    return model.Model(cfg.Model).
        SetAPIKey(os.Getenv("OPENAI_API_KEY")).
        SetTemperature(cfg.Temperature).
        SetMaxTokens(cfg.MaxTokens)
}
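
To load that configuration from a file at startup, standard library JSON decoding is enough. The file path and the example JSON below are illustrative:

// Load model settings from a JSON file such as:
//   {"model": "gpt-4o", "temperature": 0.3, "max_tokens": 2000}
func LoadConfig(path string) (Config, error) {
    var cfg Config
    data, err := os.ReadFile(path)
    if err != nil {
        return cfg, err
    }
    if err := json.Unmarshal(data, &cfg); err != nil {
        return cfg, err
    }
    return cfg, nil
}

// Usage:
//   cfg, err := LoadConfig("model.json")
//   if err != nil { log.Fatal(err) }
//   m := NewModelFromConfig(cfg)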

Environment Variables

Set up your API keys:

# OpenAI
export OPENAI_API_KEY="your-openai-api-key"
export OPENAI_ORG_ID="your-org-id"          # Optional

# Custom endpoints
export CUSTOM_API_KEY="your-api-key"
export CUSTOM_BASE_URL="https://your-endpoint.com/v1"
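
In code, read these variables with os.Getenv and pass them into the model builder. A sketch for the custom-endpoint case, using the variable names exported above:

// Configure a model against a custom OpenAI-compatible endpoint
customModel := model.Model("custom-model").
    SetAPIKey(os.Getenv("CUSTOM_API_KEY")).
    SetBaseURL(os.Getenv("CUSTOM_BASE_URL"))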

Next Steps

  • Agents - Create agents with your models
  • Tools - Add function calling capabilities
  • Streaming - Implement real-time responses