Configuration
Overview
This page provides a comprehensive overview of the configuration options for the Goose project, including settings for Large Language Models (LLMs), API integrations, and extending functionality through plugins and custom modifications. Configurations are typically defined in a YAML-like structure and can include multiple models, providers, API keys, base URLs, and parameters.
Design decisions emphasize security, such as storing API keys and tokens in environment variables or secure vaults, as detailed in the Security page. For troubleshooting configuration issues, refer to Debugging. Users can extend Goose by integrating plugins or modifying configurations for multi-model support and external API integrations. For detailed guidance on implementation and plugin development, see the Contributing page.
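As a concrete illustration of keeping keys out of configuration files, an application can read them from the environment at startup. The sketch below is illustrative, not Goose's actual mechanism; the variable name and helper function are assumptions:

```python
# Hypothetical sketch: resolve an API key from the environment instead of
# hard-coding it in a configuration file. The variable name is illustrative.
import os

def resolve_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, failing fast if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before starting Goose")
    return key

os.environ["OPENAI_API_KEY"] = "sk-example"  # normally exported in your shell
print(resolve_api_key())  # sk-example
```

Failing fast when the variable is unset surfaces misconfiguration immediately, rather than at the first API call.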
Models Configuration
The models section allows you to define multiple LLM configurations. Each model includes details such as the provider, API key, base URL, and parameters like temperature and max tokens.
Example Models
Here is an example configuration with multiple models:
```yaml
models:
  - name: "gpt-4o"
    provider: "openai"
    api_key: "your-openai-api-key"
    base_url: "https://api.openai.com/v1"
    parameters:
      temperature: 0.7
      max_tokens: 2048
  - name: "gpt-4"
    provider: "openai"
    api_key: "your_api_key_here"
    base_url: "https://api.openai.com/v1"
    parameters:
      temperature: 0.7
      max_tokens: 512
  - name: "llama3"
    provider: "local"
    path: "/path/to/llama3/model"
    parameters:
      temperature: 0.5
      max_tokens: 1024

default_model: "gpt-4o"
fallback_models: ["llama3"]
```
Key Concepts
- default_model: Specifies the primary model to use, such as "gpt-4o".
- fallback_models: A list of models to fall back to if the default fails, e.g., ["llama3"].
- Parameters: These control model behavior, such as `temperature` for randomness and `max_tokens` for response length limits. Avoid duplicating parameters across models unless necessary.
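The default/fallback selection described above can be sketched as follows. This is a hypothetical illustration of the concept, assuming a parsed configuration dictionary; the function name and error handling are not part of Goose's actual API:

```python
# Hypothetical sketch of default/fallback model selection, assuming the
# configuration has already been parsed into a dict. Illustrative only.

def pick_model(config: dict, available: set[str]) -> str:
    """Return the default model if usable, else the first usable fallback."""
    candidates = [config["default_model"], *config.get("fallback_models", [])]
    for name in candidates:
        if name in available:
            return name
    raise RuntimeError(f"No usable model among {candidates}")

config = {"default_model": "gpt-4o", "fallback_models": ["llama3"]}

print(pick_model(config, {"gpt-4o", "llama3"}))  # gpt-4o
print(pick_model(config, {"llama3"}))            # llama3 (default unavailable)
```

Ordering the candidate list with the default first means fallbacks are only consulted when the primary model cannot be used.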
Configurations can also be extended to include custom LLM providers or plugins, keeping the core system lightweight while allowing scalable modifications.
Extending Configuration
Goose's configuration system enables straightforward extensions without requiring extensive code changes. Users can modify settings to enable multi-model support, integrate with external APIs, or add plugins by placing them in designated directories and loading them at startup.
For example, you can extend LLM support or add plugins through configuration files (e.g., in YAML or JSON format). A basic example for extending LLM providers and plugins is:
```yaml
# Example configuration snippet for extending LLM support and plugins
llm:
  providers:
    - name: 'customLLM'
      apiKey: 'your-api-key'
      endpoint: 'https://custom-llm.example.com/api'
      model: 'advanced-model'
extensions:
  - type: 'plugin'
    path: './path/to/custom-plugin.js'
```
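Conceptually, loading such extensions at startup amounts to filtering the declared entries by type and resolving their paths. The sketch below is an assumption about how such a loader might work, not Goose's real implementation:

```python
# Hypothetical plugin-discovery sketch: collect the paths of all extensions
# declared with type 'plugin' in a parsed configuration. Illustrative only;
# Goose's actual loader may differ.
from pathlib import Path

def collect_plugin_paths(config: dict) -> list[Path]:
    """Return paths of all extensions declared with type 'plugin'."""
    return [
        Path(ext["path"])
        for ext in config.get("extensions", [])
        if ext.get("type") == "plugin"
    ]

config = {
    "extensions": [
        {"type": "plugin", "path": "./path/to/custom-plugin.js"},
        {"type": "theme", "path": "./path/to/theme.css"},  # ignored by the filter
    ],
}

for path in collect_plugin_paths(config):
    print(f"would load plugin: {path}")
```

Filtering by `type` keeps the loader tolerant of other extension kinds appearing in the same list.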
After setting up extensions, test them using the CLI Reference or follow initial steps in the Installation page.
API Integration and Security Considerations
Goose enables authenticated API requests, such as for publishing packages to npm, by configuring API keys and tokens securely. For example, API calls can be structured in workflows like this GitHub Actions script:
```yaml
- name: Publish to npm
  run: |
    cd ui
    pnpm publish -r --access public --no-git-checks
  env:
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```