# Provider model
## On this page

- Primary Interface — IAgentProvider
- Available Providers
- Multi-Instance Model Providers
- Agent Profiles
- Provider Resolution Flow
- Provider Configuration
  - Ollama (Default — local, no API key)
  - FoundryLocal (local on-device inference)
  - Azure OpenAI (cloud)
  - Foundry (Azure AI cloud endpoint)
  - Per-Definition Endpoint Override
- Example: Two Ollama Instances
  - How It Works
- Fallback Chain
  - Configuration
  - How It Works
  - Availability Check
  - Recommended Chain Configurations
- Model Selection Strategy
  - Primary Model
  - Per-Request Routing (Future)
  - Runtime Model Switching
- Switching Providers
  - At Startup (DI Registration)
  - With Fallback Chain
  - At Runtime (Settings API)