Conversation

@roomote roomote bot commented Jan 25, 2026

Summary

This PR attempts to address Issue #10949. Feedback and guidance are welcome.

Problem

Users with Ollama models that support native thinking (like gpt-oss-20b and gpt-oss-120b) were not seeing reasoning blocks appear in Roo Code. The reasoning content was not being detected or displayed.

Solution

This PR adds support for Ollama's native thinking feature (available in Ollama 0.5.0 and later):

  1. Native thinking field support: Check for message.thinking field in Ollama responses and yield it as reasoning content
  2. Think option: Pass the think option to Ollama chat requests when reasoning is enabled via settings
  3. Effort mapping: Map reasoning effort settings (high, medium, low, minimal) to Ollama's think option values
  4. Backward compatibility: Keep the existing <think> tag detection as a fallback for models that use that format (like DeepSeek R1)
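The stream handling described in points 1 and 4 can be sketched roughly as follows. This is a hypothetical simplification, not the actual Roo Code source: the chunk and yield types are illustrative stand-ins, and only the `message.thinking` / `message.content` field names come from Ollama's streaming responses.

```typescript
// Illustrative shape of an Ollama chat-stream chunk (simplified).
interface OllamaStreamChunk {
  message?: { content?: string; thinking?: string };
}

// Illustrative stand-in for Roo Code's stream chunk types.
type ApiStreamChunk =
  | { type: "reasoning"; text: string }
  | { type: "text"; text: string };

function* handleChunk(chunk: OllamaStreamChunk): Generator<ApiStreamChunk> {
  // Native thinking field (Ollama 0.5.0+): yield it as reasoning content.
  if (chunk.message?.thinking) {
    yield { type: "reasoning", text: chunk.message.thinking };
  }
  // Regular content; the real implementation would also run the legacy
  // <think> tag parser here as a fallback for models like DeepSeek R1.
  if (chunk.message?.content) {
    yield { type: "text", text: chunk.message.content };
  }
}
```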

Changes

  • src/api/providers/native-ollama.ts: Added getThinkOption() method and native thinking support in createMessage()
  • src/api/providers/__tests__/native-ollama.spec.ts: Added comprehensive tests for native thinking support

Testing

  • All 21 tests pass
  • TypeScript type checks pass
  • ESLint passes

Usage

For models that support Ollama native thinking, users should:

  1. Enable reasoning effort in settings (enableReasoningEffort: true)
  2. Set their preferred reasoning effort level
  3. Ensure the model info has supportsReasoningEffort: true
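Under those assumptions, the relevant configuration might look like the sketch below. Only the `enableReasoningEffort`, reasoning-effort levels, and `supportsReasoningEffort` names come from the steps above; the object shapes are illustrative, not the actual settings schema.

```typescript
// Illustrative settings shape; field names come from the usage steps above.
const settings = {
  enableReasoningEffort: true,
  // One of "high" | "medium" | "low" | "minimal" per the PR description.
  reasoningEffort: "medium" as const,
};

// Illustrative model-info shape; only supportsReasoningEffort is from the PR.
const modelInfo = {
  supportsReasoningEffort: true,
};
```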

The reasoning blocks should now appear for models like gpt-oss-20b and gpt-oss-120b when they return thinking content via the native thinking field.

Fixes #10949


Important

Adds native thinking support for Ollama models, enabling reasoning content detection and display, with tests for new functionality.

  • Behavior:
    • Adds native thinking support for Ollama models in native-ollama.ts by checking message.thinking field and yielding it as reasoning content.
    • Maps reasoning effort settings to Ollama's think option values.
    • Retains <think> tag detection for backward compatibility.
  • Functions:
    • Adds getThinkOption() in NativeOllamaHandler to determine think option based on model and settings.
    • Updates createMessage() in NativeOllamaHandler to include think option in API requests.
  • Testing:
    • Adds tests in native-ollama.spec.ts for native thinking support, including reasoning field handling and think option mapping.
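The `createMessage()` change described above, passing the think option through to the chat request, could look roughly like this. `buildChatRequest` and its field set are hypothetical illustrations; only the `think` option itself is from the PR.

```typescript
// Simplified chat-request shape; think matches Ollama's option values
// as described in this PR.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  think?: "high" | "medium" | "low";
}

// Hypothetical helper illustrating the createMessage() change: the think
// option is only attached when reasoning is enabled, so requests to
// models without native thinking are unchanged.
function buildChatRequest(
  model: string,
  messages: { role: string; content: string }[],
  thinkOption?: "high" | "medium" | "low",
): ChatRequest {
  const request: ChatRequest = { model, messages };
  if (thinkOption) {
    request.think = thinkOption;
  }
  return request;
}
```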

This description was created by Ellipsis for 5762ad7.

- Add support for Ollama native thinking field (message.thinking)
- Pass think option to Ollama chat request when reasoning is enabled
- Map reasoning effort settings to Ollama think option values
- Keep existing <think> tag detection as fallback for compatibility

This enables reasoning blocks to display for Ollama models that support
native thinking (Ollama 0.5.0+), such as gpt-oss models.

Fixes #10949
roomote bot commented Jan 25, 2026

Review completed. No issues found.

The implementation correctly:

  • Adds support for Ollama's native thinking field in stream responses
  • Passes the think option to Ollama when reasoning is enabled
  • Maps reasoning effort settings appropriately (high/xhigh to high, medium to medium, low/minimal to low)
  • Maintains backward compatibility with <think> tag detection
  • Has comprehensive test coverage
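The effort mapping the review describes can be sketched as below. `getThinkOption` is named in the PR, but this body is a reconstruction from the mapping stated above (high/xhigh to high, medium to medium, low/minimal to low), not the actual source.

```typescript
// Effort levels mentioned in the PR and review.
type ReasoningEffort = "xhigh" | "high" | "medium" | "low" | "minimal";

// Reconstruction of the mapping described in the review; the real
// getThinkOption also consults model info and settings.
function getThinkOption(effort: ReasoningEffort): "high" | "medium" | "low" {
  switch (effort) {
    case "xhigh":
    case "high":
      return "high";
    case "medium":
      return "medium";
    case "low":
    case "minimal":
      return "low";
  }
}
```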

@jamestmartin

Well, that's awfully convenient. My next feature request was going to be support for reasoning effort settings. Two birds, one stone.

Development

Successfully merging this pull request may close these issues.

[ENHANCEMENT] Stream model thoughts
