
[Feature Request]: Provide access to the content of the reasoning when using reasoning LLMs, Ollama and structured output #18302

Open
MHaurel opened this issue Mar 28, 2025 · 1 comment
Labels
enhancement New feature or request triage Issue needs to be triaged/prioritized

Comments


MHaurel commented Mar 28, 2025

Feature Description

When requesting structured output from a reasoning LLM using the code below, only the JSON is returned by the LLM. We would like to have access to the reasoning of the LLM (generally encapsulated between `<think>` tags).

from llama_index.llms.ollama import Ollama

llm = Ollama(model="deepseek-r1:1.5b")

structured_llm = llm.as_structured_llm(Invoice)  # Invoice is a Pydantic model
response = structured_llm.complete(input_text)
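As a possible workaround until the reasoning is exposed, the `<think>` block could be parsed out of the raw completion text manually. A minimal sketch, assuming the model wraps its reasoning in `<think>...</think>` tags (as deepseek-r1 does) and that the unstructured raw text is available, e.g. from a plain `llm.complete` call; `split_reasoning` is a hypothetical helper, not a LlamaIndex API:

```python
import re


def split_reasoning(raw_text: str) -> tuple[str, str]:
    """Split a DeepSeek-R1-style response into (reasoning, answer).

    Assumes the reasoning is wrapped in <think>...</think> tags;
    returns empty reasoning if no such block is found.
    """
    match = re.search(r"<think>(.*?)</think>", raw_text, re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = raw_text[match.end():].strip()
    else:
        reasoning, answer = "", raw_text.strip()
    return reasoning, answer


raw = '<think>The invoice total is 40 + 2.</think>{"total": 42}'
reasoning, answer = split_reasoning(raw)
# reasoning == "The invoice total is 40 + 2."
# answer == '{"total": 42}'
```

The remaining `answer` string could then be validated against the Pydantic model by hand, at the cost of losing the convenience of `as_structured_llm`.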

Thanks for your work!

Reason

No response

Value of Feature

No response

@MHaurel MHaurel added enhancement New feature or request triage Issue needs to be triaged/prioritized labels Mar 28, 2025
logan-markewich (Collaborator) commented

Generally speaking there is no reasoning -- Ollama, OpenAI, Gemini, etc. handle structured outputs as tool calls within their APIs, so no reasoning block is emitted.

LLM classes that do not support function calling fall back to JSON prompting, but tbh exposing the reasoning would complicate the API design or even break usage for other users 🤔 I'll think a bit more about this
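To illustrate the JSON-prompting fallback described above: the model is prompted to answer with JSON matching a schema, and the reply is parsed afterwards. A minimal sketch with a hypothetical prompt wording and a hard-coded reply standing in for the LLM call (not LlamaIndex's actual template); for a reasoning model, any `<think>` block would need to be stripped before parsing:

```python
import json

# Hypothetical schema and prompt; a real call would send `prompt` to the LLM.
schema = {"type": "object", "properties": {"total": {"type": "number"}}}
prompt = (
    "Extract the invoice data. Respond ONLY with JSON matching this schema:\n"
    + json.dumps(schema)
)

# Stand-in for the raw model reply; deepseek-r1 would prepend its reasoning.
raw = '<think>Sum the line items.</think>{"total": 42}'

# Strip any reasoning block, then parse the JSON payload that remains.
payload = raw.split("</think>")[-1].strip()
data = json.loads(payload)
```

This is where exposing the reasoning gets awkward: the structured-output API returns a parsed object, so the text before the JSON payload is discarded by design.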
