# Providers
FOON is provider-agnostic. Configure it to work with any major AI provider.
## Supported Providers
| Provider | Class | Default Model | Notes |
|---|---|---|---|
| OpenAI | `OpenAIProvider` | `gpt-5-nano` | Recommended - fast and efficient |
| Google Gemini | `GeminiProvider` | `gemini-1.5-flash` | Good performance/cost |
| Ollama | `OllamaProvider` | `llama2` | Self-hosted, no API key |
## OpenAI
```typescript
import { transform, OpenAIProvider } from 'foon-sdk';

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-5-nano', // optional
  timeout: 30000 // optional
});

const result = await transform(input, { schema, provider });
```

### Model Options
| Model | Speed | Accuracy | Cost |
|---|---|---|---|
| `gpt-5-nano` | Very Fast | Excellent | $ |
| `gpt-4o` | Fast | Excellent | $$ |
| `gpt-4o-mini` | Fast | Very Good | $ |
| `gpt-4-turbo` | Medium | Excellent | $$$ |
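Because the model is just a constructor option, you can also pick it per environment. A sketch, assuming a hypothetical `NODE_ENV`-based policy; the specific model choices are illustrative, not prescribed:

```typescript
import { OpenAIProvider } from 'foon-sdk';

// Hypothetical policy: higher-accuracy model in production, cheaper one elsewhere
const model = process.env.NODE_ENV === 'production' ? 'gpt-4o' : 'gpt-4o-mini';

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
  model
});
```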
## Google Gemini
```typescript
import { transform, GeminiProvider } from 'foon-sdk';

const provider = new GeminiProvider({
  apiKey: process.env.GEMINI_API_KEY,
  model: 'gemini-1.5-flash', // optional
  timeout: 30000 // optional, default: 30s
});

const result = await transform(input, { schema, provider });
```

### Model Options
| Model | Speed | Accuracy | Cost |
|---|---|---|---|
| `gemini-1.5-pro` | Medium | Excellent | $$ |
| `gemini-1.5-flash` | Fast | Very Good | $ |
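If accuracy matters more than latency or cost, you can opt into the pro model. A minimal sketch; the model choice is illustrative:

```typescript
import { GeminiProvider } from 'foon-sdk';

// Illustrative: gemini-1.5-pro trades speed and cost for accuracy
const provider = new GeminiProvider({
  apiKey: process.env.GEMINI_API_KEY,
  model: 'gemini-1.5-pro'
});
```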
## Custom Endpoints
Use any OpenAI-compatible API:
```typescript
const provider = new OpenAIProvider({
  apiKey: process.env.CUSTOM_API_KEY,
  baseUrl: 'https://your-api.example.com/v1',
  model: 'your-model'
});
```

Works with:
- Azure OpenAI
- Anyscale
- Together.ai
- Fireworks.ai
- Local LLM servers (see the sketch below)
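For example, a local server that exposes an OpenAI-compatible endpoint works by pointing `baseUrl` at it. A minimal sketch, assuming a vLLM server on its default port; the port, key, and model name are all placeholders:

```typescript
import { OpenAIProvider } from 'foon-sdk';

// Hypothetical local setup: an OpenAI-compatible server (e.g. vLLM) on localhost
const provider = new OpenAIProvider({
  apiKey: 'unused', // placeholder; many local servers ignore the key
  baseUrl: 'http://localhost:8000/v1',
  model: 'mistral-7b-instruct' // placeholder; use whatever model your server loads
});
```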
## Ollama (Self-Hosted)
Run AI locally with no API keys:
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama2
```

```typescript
import { transform, OllamaProvider } from 'foon-sdk';

const provider = new OllamaProvider({
  model: 'llama2',
  baseUrl: 'http://localhost:11434', // default
  timeout: 60000 // Ollama can be slower
});

const result = await transform(input, { schema, provider });
```

### Recommended Models
| Model | Size | Speed | Quality |
|---|---|---|---|
| `llama3:70b` | 39GB | Slow | Excellent |
| `llama3:8b` | 4.7GB | Fast | Good |
| `mistral` | 4.1GB | Fast | Good |
Note: Ollama can run locally without an API key. If you use a hosted Ollama instance, provide the `apiKey` and `baseUrl`, as sketched below.
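A minimal sketch of the hosted case; the URL and key are placeholders for your instance:

```typescript
import { OllamaProvider } from 'foon-sdk';

// Hypothetical hosted Ollama instance; baseUrl and key are placeholders
const provider = new OllamaProvider({
  model: 'llama3:8b',
  baseUrl: 'https://ollama.internal.example.com',
  apiKey: process.env.OLLAMA_API_KEY,
  timeout: 60000
});
```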
## Provider Comparison
### When to use OpenAI
- Best overall performance
- Fast and efficient (gpt-5-nano)
- Wide model selection
- Established reliability
### When to use Gemini
- Cost optimization
- Good general performance
- Google Cloud integration
### When to use Ollama
- Data privacy requirements
- No external API calls
- Development/testing
- Cost-free operation
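If you switch providers between environments (say, Ollama in development and OpenAI in production), a small factory keeps the choice in one place. A sketch using only the constructors shown above; the `FOON_PROVIDER` variable is a hypothetical convention, not part of the SDK:

```typescript
import { OpenAIProvider, GeminiProvider, OllamaProvider } from 'foon-sdk';

// Hypothetical convention: select the provider via an environment variable
function providerFromEnv() {
  switch (process.env.FOON_PROVIDER) {
    case 'gemini':
      return new GeminiProvider({ apiKey: process.env.GEMINI_API_KEY });
    case 'ollama':
      return new OllamaProvider({ model: 'llama3:8b' });
    default:
      return new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY });
  }
}

const provider = providerFromEnv();
```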
## Performance Tips
### Caching
Enable caching to reduce provider calls:
```typescript
import { transform, OpenAIProvider, LRUCache } from 'foon-sdk';

// Keep up to 100 entries, each for one hour (TTL is in milliseconds)
const cache = new LRUCache({ max: 100, ttl: 3600000 });

const result = await transform(input, {
  schema,
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-5-nano'
  }),
  cache
});

console.log(result.trace.cache.hit); // true if the cache was used
```

### Confidence Threshold
Adjust the confidence threshold per call to match your environment:
```typescript
// Higher threshold for production
const result = await transform(input, {
  schema,
  provider,
  confidenceThreshold: 0.9 // Reject mappings below 90%
});
```

```typescript
// Lower threshold for development
const result = await transform(input, {
  schema,
  provider,
  confidenceThreshold: 0.7
});
```

### Verbose Mode
Enable verbose output for debugging:
```typescript
const result = await transform(input, {
  schema,
  provider,
  verbose: true
});

console.log(result.trace.timings);
console.log(result.trace.mappingPlan.raw);
```