
AI Model Selection

Learn how to choose the best AI model for generating your trading strategies.

Overview

Lona uses advanced AI models from multiple providers to generate your trading strategy code. You can select which model to use based on your needs, balancing speed, cost, and capability.

Finding the Model Selector

The model selector is located in the chat input area at the bottom of the screen:

  1. Look for the dropdown button next to the stage controls (Ask/Plan/Code)
  2. Click to see available models
  3. Select your preferred model
  4. Your choice persists throughout the conversation

Available Providers

Lona supports 5 AI providers, each with reasoning/thinking enabled for code generation. The default provider is Anthropic.

| Provider | Code Generation Model | Validation Model | Reasoning |
| --- | --- | --- | --- |
| Anthropic | Claude Opus 4.6 (adaptive) | Claude Opus 4.6 | Adaptive thinking + high effort |
| OpenAI | GPT-5.2 Flagship | GPT-5.2 Flagship | Reasoning effort: high |
| xAI | Grok 4.1 Fast (reasoning) | Grok 4.1 Fast (reasoning) | Via model name |
| Google | Gemini 3 Pro | Gemini 3 Pro | Built-in |
| OpenRouter | Kimi K2.5 (Moonshot AI) | Kimi K2.5 | Pass-through |

Note: AI code validation (structural checks + AI review) always uses Anthropic Opus 4.6 with adaptive thinking, regardless of which provider generates the code. This ensures the highest quality review.
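
The routing described in this note can be pictured in a few lines. The sketch below is purely illustrative: the function, dictionary, and model identifiers are hypothetical and not part of Lona’s actual codebase.

```python
# Illustrative sketch of the routing described above; every identifier is hypothetical.

# Whichever provider you select handles code generation...
GENERATION_MODELS = {
    "anthropic":  "claude-opus-4.6",         # adaptive thinking
    "openai":     "gpt-5.2-flagship",        # reasoning effort: high
    "xai":        "grok-4.1-fast-reasoning",
    "google":     "gemini-3-pro",
    "openrouter": "kimi-k2.5",
}

# ...but AI code validation is always pinned to Anthropic.
VALIDATION_MODEL = ("anthropic", "claude-opus-4.6")


def route(task: str, selected_provider: str) -> tuple[str, str]:
    """Return (provider, model) for a task: generation follows the user's
    selection, validation always goes to Anthropic Opus 4.6."""
    if task == "validation":
        return VALIDATION_MODEL
    return selected_provider, GENERATION_MODELS[selected_provider]


print(route("generation", "openrouter"))  # ('openrouter', 'kimi-k2.5')
print(route("validation", "openrouter"))  # ('anthropic', 'claude-opus-4.6')
```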

Default Models (Code Generation)

These are the models used by default when generating strategy code. Each provider uses its most capable reasoning model:

| Model | Provider | Best For |
| --- | --- | --- |
| Claude Opus 4.6 (default) | Anthropic | Complex logic, edge cases, robust code |
| GPT-5.2 Flagship | OpenAI | Multi-indicator strategies, advanced reasoning |
| Grok 4.1 Fast | xAI | Fast generation with built-in reasoning |
| Gemini 3 Pro | Google | Strong reasoning, large context |
| Kimi K2.5 | OpenRouter | Access to 300+ models via single provider |

Validation Models

AI code validation always uses Anthropic Opus 4.6 with adaptive thinking to ensure the highest-quality review. In addition, each provider designates its own top-tier validation model:

| Model | Provider | Purpose |
| --- | --- | --- |
| Claude Opus 4.6 | Anthropic | Top-tier AI code review |
| GPT-5.2 Flagship | OpenAI | Reliable validation |
| Grok 4.1 Fast | xAI | Lightweight checks |
| Gemini 3 Pro | Google | Fast processing |
| Kimi K2.5 | OpenRouter | Cost-effective validation |

Default Model

Claude Opus 4.6 is selected by default with adaptive thinking enabled. This model offers:

  • Adaptive thinking — Claude dynamically determines when and how much to think
  • Highest code generation quality (avg 8.75/10, 100% working in evaluation)
  • Production-ready code with zero fatal bugs across 4 evaluation rounds
  • Best-in-class edge case handling and Backtrader framework knowledge

When to Use Each Category

Choose Basic Models When:

  • Testing a new strategy idea quickly
  • Your strategy uses 1-2 simple indicators
  • Entry/exit rules are straightforward
  • You want rapid iteration
  • Budget or response time is a concern

Example Strategies:

  • Simple moving average crossover
  • RSI overbought/oversold
  • Single breakout condition

Choose Powerful Models When:

  • Your strategy has complex logic
  • Multiple indicators interact together
  • You need sophisticated risk management
  • This is your final production version
  • Requirements have many conditions

Example Strategies:

  • Multi-timeframe analysis
  • Complex pattern recognition
  • Multiple entry/exit conditions with AND/OR logic
  • Strategies with dynamic position sizing
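
If it helps to make these criteria concrete, the toy heuristic below mirrors them. It is illustrative only: Lona does not choose a tier for you, and the thresholds are assumptions rather than product behavior.

```python
def suggest_model_tier(num_indicators: int, complex_risk_rules: bool, final_version: bool) -> str:
    """Toy heuristic mirroring the guidance above (illustrative only, not a Lona feature)."""
    if final_version or complex_risk_rules or num_indicators > 2:
        return "powerful"  # e.g. Claude Opus 4.6 or GPT-5.2 Flagship
    return "basic"         # fast iteration on simple crossover / RSI ideas


print(suggest_model_tier(1, False, False))  # basic    -- quick exploration
print(suggest_model_tier(4, True, True))    # powerful -- production version
```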

How Model Selection Works

During the Conversation

  1. Select your model from the dropdown
  2. Type your message describing the strategy
  3. The selected model processes your request
  4. Results appear in the chat and canvas
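
Conceptually, the model you pick simply travels with each message you send. The payload below is a hypothetical illustration; Lona’s real endpoint and field names may differ.

```python
import json

# Hypothetical request payload -- the field names are assumptions, shown only to
# illustrate that the selected model accompanies each chat message.
request = {
    "conversation_id": "conv_123",
    "stage": "code",                  # Ask / Plan / Code
    "model": "claude-opus-4.6",       # the model chosen in the dropdown
    "message": "Generate a 20/50 SMA crossover strategy with a 2% stop loss.",
}

print(json.dumps(request, indent=2))
```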

Changing Models Mid-Conversation

You can switch models between messages:

  • Model preference is saved per-message
  • Switching doesn’t lose conversation context
  • Different models may produce different code styles

Tip: Start with a basic model for initial exploration, then switch to a powerful model for the final code generation.
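
Because the preference is recorded per message, one conversation can mix models without losing context. The structure below is a hypothetical illustration of that idea, not Lona’s actual data model.

```python
# Hypothetical conversation history: each message records which model handled it,
# while the shared history supplies the context for every later request.
conversation = [
    {"role": "user",      "model": "grok-4.1-fast",   "text": "Sketch an RSI mean-reversion idea."},
    {"role": "assistant", "model": "grok-4.1-fast",   "text": "...draft strategy..."},
    {"role": "user",      "model": "claude-opus-4.6", "text": "Now make it production-ready."},
]

# The next request would send the full history together with the newly selected model.
print([m["model"] for m in conversation])
```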

Model Comparison

| Aspect | Basic Models | Powerful Models |
| --- | --- | --- |
| Speed | Fast (10-30s) | Slower (30-90s) |
| Simple Strategies | Excellent | Excellent |
| Complex Strategies | Good | Excellent |
| Edge Case Handling | Basic | Comprehensive |
| Code Quality | Good | Production-ready |
| Best Use | Exploration | Final version |

Previous “Fast/Advanced” Mode

If you’ve used Lona before, you may remember the “Generate Fast” and “Generate Advanced” buttons. These have been replaced with explicit model selection, giving you:

  • More control over which model processes your request
  • Transparency about what AI is generating your code
  • Flexibility to switch models anytime
  • Better options with access to multiple providers

Provider Information

Anthropic (Claude Opus 4.6) — Default

  • Adaptive thinking enabled — Claude dynamically determines when and how much to think
  • High effort parameter for code generation and validation tasks
  • Highest quality code generation (avg 8.75/10 in evaluation, 100% working)
  • Explanations are handled by Haiku 4.5 at low effort for fast responses
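
For orientation, an extended-thinking request to the Anthropic API looks roughly like the sketch below. The model ID is a placeholder and the exact adaptive-thinking/effort settings Lona uses may differ; this only shows the general shape of a thinking-enabled call.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The model ID and thinking budget are placeholders -- check Anthropic's docs for
# the identifier of the model you actually want to call.
response = client.messages.create(
    model="claude-opus-4-6",  # hypothetical ID for Claude Opus 4.6
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Write a Backtrader SMA crossover strategy."}],
)

print(response.content[-1].text)  # the final text block follows the thinking blocks
```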

OpenAI (GPT-5.2 Flagship)

  • Reasoning effort set to high for code generation
  • Strong general-purpose capabilities (avg 7.95/10, 100% working)
  • Reliable and consistent outputs
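
As a rough illustration, setting the reasoning effort on an OpenAI request looks like the sketch below. The model name is a placeholder; substitute the flagship reasoning model you actually have access to.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5.2",          # hypothetical ID -- use the actual flagship model name
    reasoning_effort="high",  # the knob referred to above
    messages=[{"role": "user", "content": "Write a Backtrader RSI strategy with position sizing."}],
)

print(response.choices[0].message.content)
```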

xAI (Grok 4.1 Fast)

  • Reasoning controlled via model variant (reasoning vs non-reasoning)
  • Fast generation with competitive quality
  • Good for rapid iteration

Google (Gemini 3 Pro)

  • Built-in reasoning capabilities
  • Large context window for complex strategies
  • Strong multi-step logic handling

OpenRouter (300+ Models)

  • Gateway to models from many providers (Moonshot, Qwen, Z.ai, and more)
  • Default: Kimi K2.5 for all tasks
  • Any OpenRouter model ID can be passed via the model parameter
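
OpenRouter exposes an OpenAI-compatible endpoint, so the standard client works once you point it at OpenRouter’s base URL. The model ID below is an example; look up the exact ID for Kimi K2.5 (or any of the 300+ other models) in the OpenRouter catalog.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2",  # example ID -- check the catalog for the K2.5 identifier
    messages=[{"role": "user", "content": "Draft a breakout strategy in Backtrader."}],
)

print(response.choices[0].message.content)
```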

Best Practices

For Strategy Development

  1. Start simple: Use basic models for initial concepts
  2. Iterate quickly: Fast models for rapid testing
  3. Finalize carefully: Powerful models for production code
  4. Compare results: Try different models for the same requirements

Model Selection Tips

  • Don’t overthink it: Claude Opus 4.6 (default) works well for most cases
  • Try different providers: Each has different strengths — compare with the integration test
  • Use OpenRouter for variety: Access Kimi K2.5, Qwen3, and 300+ other models
  • Stick with one model during a strategy development session for consistency

When to Switch Models

Consider switching if:

  • Generated code doesn’t meet requirements
  • You need better handling of complex logic
  • Speed is more important than sophistication
  • You want to compare different approaches

Troubleshooting

“Model is Slow”

Possible Causes:

  • Powerful model selected for complex request
  • High server load

Solutions:

  1. Wait for completion (can take up to 90 seconds)
  2. Try a basic model for faster response
  3. Simplify your requirements

“Code Quality Issues”

Possible Causes:

  • Basic model struggling with complex logic
  • Unclear requirements

Solutions:

  1. Switch to a powerful model
  2. Clarify your requirements in the chat
  3. Break complex strategies into simpler parts

“Different Results Each Time”

Explanation:

  • AI models have some randomness in outputs
  • This is normal behavior
  • Core logic should be consistent

Solutions:

  1. Review generated code for correctness
  2. Use the same model for consistency
  3. Specify requirements more precisely

Quick Reference

| Task | How To |
| --- | --- |
| Find Model Selector | Look next to stage controls (Ask/Plan/Code) |
| Change Model | Click dropdown, select new model |
| See Current Model | Check the dropdown button label |
| Use Fast Model | Select from Basic category |
| Use Powerful Model | Select from Powerful category |
| Reset to Default | Select Claude Opus 4.6 |

What’s Next?

Now that you understand model selection:

