Releases: langchain-ai/langchain
langchain==0.3.22
langchain-mistralai==0.2.10
Changes since langchain-mistralai==0.2.9
mistralai: release 0.2.10 (#30526)
Fix: Enable max_retries Parameter in ChatMistralAI Class (#30448)
mistral[patch]: check types in adding model_name to response_metadata (#30499)
standard-tests[patch]: require model_name in response_metadata if returns_usage_metadata (#30497)
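The max_retries fix (#30448) means ChatMistralAI now honors a client-side cap on retry attempts. A minimal pure-Python sketch of the bounded-retry pattern that such a parameter typically configures — not the ChatMistralAI implementation, and the function name here is illustrative:

```python
import time

def call_with_retries(fn, max_retries: int, base_delay: float = 0.0):
    """Invoke fn, retrying up to max_retries times on exception.

    Illustrative sketch only; real clients retry only transient errors
    (timeouts, 429s) and use jittered backoff.
    """
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < max_retries:
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_exc

# Usage: a flaky callable that succeeds on its third invocation.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(call_with_retries(flaky, max_retries=3))  # "ok" after two retries
```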
langchain-fireworks==0.2.9
langchain-tests==0.3.17
langchain-openai==0.3.11
langchain-core==0.3.49
Changes since langchain-core==0.3.48
core[patch]: release 0.3.49 (#30500)
core[patch]: store model names on usage callback handler (#30487)
core[patch]: mark usage callback handler as beta (#30486)
core[patch]: Remove old accidental commit (#30483)
core[patch]: add token counting callback handler (#30481)
core[patch]: Fix handling of title when tool schema is specified manually via JSONSchema (#30479)
docs[patch]: update trim_messages doc (#30462)
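The new (beta) usage callback handler (#30481, #30487) counts tokens and keys the totals by model name. A minimal pure-Python sketch of that aggregation idea — the class and method names here are illustrative, not the langchain-core API:

```python
from collections import defaultdict

class UsageAggregator:
    """Accumulate token usage per model name.

    Sketch of what a usage-metadata callback does; the real handler in
    langchain-core is beta and has a different interface.
    """
    def __init__(self):
        self.usage = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})

    def record(self, model_name: str, input_tokens: int, output_tokens: int):
        entry = self.usage[model_name]
        entry["input_tokens"] += input_tokens
        entry["output_tokens"] += output_tokens

# Usage: totals accumulate per model across calls.
agg = UsageAggregator()
agg.record("gpt-4o-mini", input_tokens=12, output_tokens=5)
agg.record("gpt-4o-mini", input_tokens=8, output_tokens=3)
agg.record("mistral-small", input_tokens=20, output_tokens=7)
print(dict(agg.usage))
```

Keying by model name (rather than a single global counter) is what #30487 adds: mixed-model chains report per-model totals instead of one merged figure.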
langchain-tests==0.3.16
langchain-openai==0.3.10
langchain-core==0.3.48
Changes since langchain-core==0.3.47
core[patch]: release 0.3.48 (#30458)
core: add tool_call exclusion in filter_message (#30289)
docs[patch]: add warning to token counter docstring (#30426)
core(mermaid): allow greater customization (#29939)
core[patch]: optimize trim_messages (#30327)
core[patch]: more tests for trim_messages (#30421)
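trim_messages (#30327, #30421) keeps a conversation within a token budget. A pure-Python sketch of its "last" strategy — keep the most recent messages that fit — using plain strings and a toy word counter; the real langchain-core API takes message objects and has more options (include_system, start_on, ...):

```python
def trim_last(messages, max_tokens, token_counter):
    """Keep the most recent messages whose total cost stays within max_tokens.

    Sketch of trim_messages' "last" strategy, not the library code.
    """
    kept = []
    total = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = token_counter(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order

msgs = ["sys: be brief", "user: hi", "ai: hello", "user: what is 2+2?", "ai: 4"]
count = lambda m: len(m.split())  # toy counter: one token per word
print(trim_last(msgs, max_tokens=8, token_counter=count))
# ['ai: hello', 'user: what is 2+2?', 'ai: 4']
```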
langchain-ollama==0.3.0
Changes since langchain-ollama==0.2.3
langchain-ollama 0.3.0 updates the default method for with_structured_output to Ollama's dedicated structured output feature, which corresponds to method="json_schema". Previously, with_structured_output used Ollama's tool-calling features for this method. To restore the old behavior, explicitly specify method="function_calling" when calling with_structured_output:
llm = ChatOllama(model="...").with_structured_output(
schema, method="function_calling"
)
Other features
Added support for parsing reasoning content in DeepSeek models:
llm = ChatOllama(model="deepseek-r1:1.5b", extract_reasoning=True)
result = llm.invoke("What is 3^3?")
result.content # "3^3 is..."
result.additional_kwargs["reasoning_content"] # "<think> To calculate 3^3, I start by... </think>"
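DeepSeek-style models wrap their chain of thought in <think>...</think> tags before the final answer. A pure-Python sketch of the kind of parsing that extract_reasoning enables — the function name is illustrative, not the langchain-ollama implementation:

```python
import re

def split_reasoning(text: str):
    """Split a raw model response into (reasoning, answer).

    Sketch only: pulls the first <think>...</think> span out as reasoning
    and treats the remainder as the visible answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

raw = "<think> 3^3 means 3*3*3. </think> 3^3 is 27."
reasoning, answer = split_reasoning(raw)
print(answer)  # "3^3 is 27."
```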
Detailed changelog
ollama: release 0.3.0 (#30420)
ollama: add reasoning model support (e.g. deepseek) (#29689)
(Ollama) Fix String Value parsing in _parse_arguments_from_tool_call (#30154)
ollama[minor]: update default method for structured output (#30273)
langchain_ollama: Support keep_alive in embeddings (#30251)
core[patch]: update structured output tracing (#30123)
core: basemessage.text() (#29078)
multiple: fix uv path deps (#29790)
infra: add UV_FROZEN to makefiles (#29642)
infra: migrate to uv (#29566)