Providers

FOON is provider-agnostic. Configure it to work with any major AI provider.

Supported Providers

Provider        Class             Default Model       Notes
OpenAI          OpenAIProvider    gpt-5-nano          Recommended - Fast & efficient
Google Gemini   GeminiProvider    gemini-1.5-flash    Good performance/cost
Ollama          OllamaProvider    llama2              Self-hosted, no API key
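
Every provider class exposes the same interface, so swapping providers does not change the transform call. A minimal sketch using the classes from the table above (the credential-based fallback logic here is illustrative, not part of the SDK):

import { transform, OpenAIProvider, GeminiProvider } from 'foon-sdk';

// Pick whichever provider has credentials available; the rest of the
// pipeline is identical regardless of which one is chosen.
const provider = process.env.GEMINI_API_KEY
  ? new GeminiProvider({ apiKey: process.env.GEMINI_API_KEY })
  : new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY });

const result = await transform(input, { schema, provider });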

OpenAI

import { transform, OpenAIProvider } from 'foon-sdk';

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-5-nano',  // optional
  timeout: 30000        // optional
});

const result = await transform(input, { schema, provider });

Model Options

Model         Speed       Accuracy    Cost
gpt-5-nano    Very Fast   Excellent   $
gpt-4o        Fast        Excellent   $$
gpt-4o-mini   Fast        Very Good   $
gpt-4-turbo   Medium      Excellent   $$$

Google Gemini

import { transform, GeminiProvider } from 'foon-sdk';

const provider = new GeminiProvider({
  apiKey: process.env.GEMINI_API_KEY,
  model: 'gemini-1.5-flash',  // optional
  timeout: 30000              // optional, default: 30s
});

const result = await transform(input, { schema, provider });

Model Options

Model              Speed    Accuracy    Cost
gemini-1.5-pro     Medium   Excellent   $$
gemini-1.5-flash   Fast     Very Good   $

Custom Endpoints

Use any OpenAI-compatible API:

const provider = new OpenAIProvider({
  apiKey: process.env.CUSTOM_API_KEY,
  baseUrl: 'https://your-api.example.com/v1',
  model: 'your-model'
});

Works with:

  • Azure OpenAI
  • Anyscale
  • Together.ai
  • Fireworks.ai
  • Local LLM servers
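
For instance, pointing the provider at Together.ai's OpenAI-compatible endpoint might look like the sketch below (verify the base URL against Together.ai's docs; the model name is illustrative):

const provider = new OpenAIProvider({
  apiKey: process.env.TOGETHER_API_KEY,
  baseUrl: 'https://api.together.xyz/v1',   // Together.ai's OpenAI-compatible endpoint
  model: 'meta-llama/Llama-3-8b-chat-hf'    // illustrative model name
});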

Ollama (Self-Hosted)

Run AI locally with no API keys:

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama2

Then configure the provider:

import { transform, OllamaProvider } from 'foon-sdk';

const provider = new OllamaProvider({
  model: 'llama2',
  baseUrl: 'http://localhost:11434',  // default
  timeout: 60000                       // Ollama can be slower
});

const result = await transform(input, { schema, provider });

Recommended Models

Model        Size    Speed   Quality
llama3:70b   39GB    Slow    Excellent
llama3:8b    4.7GB   Fast    Good
mistral      4.1GB   Fast    Good

Note: Ollama runs locally without an API key. If you use a hosted Ollama instance, provide the apiKey and baseUrl, as in the sketch below.
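
A minimal hosted configuration (the host URL is a placeholder; whether an API key is required depends on your deployment):

const provider = new OllamaProvider({
  apiKey: process.env.OLLAMA_API_KEY,               // required by some hosted instances
  baseUrl: 'https://ollama.your-host.example.com',  // placeholder host URL
  model: 'llama3:8b'
});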

Provider Comparison

When to use OpenAI

  • Best overall performance
  • Fast and efficient (gpt-5-nano)
  • Wide model selection
  • Established reliability

When to use Gemini

  • Cost optimization
  • Good general performance
  • Google Cloud integration

When to use Ollama

  • Data privacy requirements
  • No external API calls
  • Development/testing
  • Cost-free operation

Performance Tips

Caching

Enable caching to reduce provider calls:

import { transform, OpenAIProvider, LRUCache } from 'foon-sdk';

const cache = new LRUCache({ max: 100, ttl: 3600000 });

const result = await transform(input, {
  schema,
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-5-nano'
  }),
  cache
});

console.log(result.trace.cache.hit);  // true if cache was used

Confidence Threshold

Adjust the confidence threshold to suit your environment:

// Higher threshold for production
const result = await transform(input, {
  schema,
  provider,
  confidenceThreshold: 0.9  // Reject mappings below 90%
});

// Lower threshold for development
const result = await transform(input, {
  schema,
  provider,
  confidenceThreshold: 0.7
});
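
If the same code runs in multiple environments, you can derive the threshold from the environment rather than maintaining two variants (a sketch using only the options shown above):

// Stricter in production, more permissive during development
const confidenceThreshold =
  process.env.NODE_ENV === 'production' ? 0.9 : 0.7;

const result = await transform(input, { schema, provider, confidenceThreshold });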

Verbose Mode

Enable verbose output for debugging:

const result = await transform(input, {
  schema,
  provider,
  verbose: true
});

console.log(result.trace.timings);
console.log(result.trace.mappingPlan.raw);