Releases: jackmpcollins/magentic
v0.39.0
What's Changed
- Use TypeVar default to remove overloads by @jackmpcollins in #411
- Add missing Field import in docs by @jackmpcollins in #428
- feat: support for passing extra_headers to LitellmChatModel by @ashwin153 in #426
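A minimal sketch of the new option, assuming `extra_headers` is accepted as a keyword argument on `LitellmChatModel` and forwarded with each request (the header name and value here are hypothetical):

```python
from magentic.chat_model.litellm_chat_model import LitellmChatModel

# Hypothetical headers to forward to the underlying litellm call
model = LitellmChatModel(
    "anthropic/claude-3-5-sonnet-20241022",
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
```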
New Contributors
- @ashwin153 made their first contribution in #426
Full Changelog: v0.38.1...v0.39.0
v0.38.1
What's Changed
Full Changelog: v0.38.0...v0.38.1
v0.38.0
What's Changed
- Async streamed response to api message conversion by @ananis25 in #405
- Support AsyncParallelFunctionCall in message_to_X_message by @jackmpcollins in #406
Full Changelog: v0.37.1...v0.38.0
v0.37.1
What's Changed
Anthropic model message serialization now supports `StreamedResponse` in `AssistantMessage`. Thanks to @ananis25 🎉
Full Changelog: v0.37.0...v0.37.1
v0.37.0
What's Changed
The `@prompt_chain` decorator can now accept a sequence of `Message` as input, like `@chatprompt`.
```python
from magentic import prompt_chain, UserMessage


def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    return {"temperature": "72", "forecast": ["sunny", "windy"]}


@prompt_chain(
    template=[UserMessage("What's the weather like in {city}?")],
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str: ...


describe_weather("Boston")
# 'The weather in Boston is currently 72°F with sunny and windy conditions.'
```
PRs
- Allow Messages as input to prompt_chain by @jackmpcollins in #403
Full Changelog: v0.36.0...v0.37.0
v0.36.0
What's Changed
Document the `Chat` class and make it importable from the top level. Docs: https://magentic.dev/chat/
```python
from magentic import Chat, OpenaiChatModel, UserMessage

# Create a new Chat instance
chat = Chat(
    messages=[UserMessage("Say hello")],
    model=OpenaiChatModel("gpt-4o"),
)

# Append a new user message
chat = chat.add_user_message("Actually, say goodbye!")
print(chat.messages)
# [UserMessage('Say hello'), UserMessage('Actually, say goodbye!')]

# Submit the chat to the LLM to get a response
chat = chat.submit()
print(chat.last_message.content)
# 'Hello! Just kidding—goodbye!'
```
PRs
- Use public import for ChatCompletionStreamState by @jackmpcollins in #398
- Make Chat class public and add docs by @jackmpcollins in #401
- Remove unused content None from openai messages by @jackmpcollins in #402
Full Changelog: v0.35.0...v0.36.0
v0.35.0
What's Changed
`UserMessage` now accepts image URLs, image bytes, and document bytes directly using the `ImageUrl`, `ImageBytes`, and `DocumentBytes` types. Example of the new `UserMessage` syntax and `DocumentBytes`:
```python
from pathlib import Path

from magentic import chatprompt, DocumentBytes, Placeholder, UserMessage
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@chatprompt(
    UserMessage(
        [
            "Repeat the contents of this document.",
            Placeholder(DocumentBytes, "document_bytes"),
        ]
    ),
    model=AnthropicChatModel("claude-3-5-sonnet-20241022"),
)
def read_document(document_bytes: bytes) -> str: ...


document_bytes = Path("...").read_bytes()
read_document(document_bytes)
# 'This is a test PDF.'
```
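Images can likewise be passed directly in `UserMessage`. A minimal sketch, assuming `ImageUrl` is importable from the top level like `DocumentBytes` above (the image URL is hypothetical):

```python
from magentic import chatprompt, ImageUrl, UserMessage


# Hypothetical URL; ImageUrl marks the string as an image input
@chatprompt(
    UserMessage(
        [
            "Describe this image in one sentence.",
            ImageUrl("https://example.com/photo.jpg"),
        ]
    ),
)
def describe_image() -> str: ...
```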
PRs
- Accept Sequence[Message] instead of list for Chat by @alexchandel in #390
- Bump astral-sh/setup-uv from 4 to 5 by @dependabot in #393
- Support images directly in UserMessage by @jackmpcollins in #387
- Add DocumentBytes for submitting PDF documents by @jackmpcollins in #395
New Contributors
- @alexchandel made their first contribution in #390
Full Changelog: v0.34.1...v0.35.0
v0.34.1
What's Changed
- Consume LLM output stream via returned objects to allow caching by @jackmpcollins in #384
- Improve ruff format/lint rules by @jackmpcollins in #385
- Update overview and configuration docs by @jackmpcollins in #386
Full Changelog: v0.34.0...v0.34.1
v0.34.0
What's Changed
Add `StreamedResponse` and `AsyncStreamedResponse` to enable parsing responses that contain both text and tool calls. See PR #383 or the new docs at https://magentic.dev/streaming/#StreamedResponse (copied below) for more details.

⚡ StreamedResponse

Some LLMs have the ability to generate text output and make tool calls in the same response. This allows them to perform chain-of-thought reasoning or provide additional context to the user. In magentic, the `StreamedResponse` (or `AsyncStreamedResponse`) class can be used to request this type of output. This object is an iterable of `StreamedStr` (or `AsyncStreamedStr`) and `FunctionCall` instances.
!!! warning "Consuming StreamedStr"

    The `StreamedStr` object must be iterated over before the next item in the `StreamedResponse` is processed, otherwise the string output will be lost. This is because the `StreamedResponse` and `StreamedStr` share the same underlying generator, so advancing the `StreamedResponse` iterator skips over the `StreamedStr` items. The `StreamedStr` object has internal caching so after iterating over it once the chunks will remain available.
In the example below, we request that the LLM generate a greeting and then call a function to get the weather for two cities. The `StreamedResponse` object is then iterated over to print the output, and the `StreamedStr` and `FunctionCall` items are processed separately.
```python
from magentic import prompt, FunctionCall, StreamedResponse, StreamedStr


def get_weather(city: str) -> str:
    return f"The weather in {city} is 20°C."


@prompt(
    "Say hello, then get the weather for: {cities}",
    functions=[get_weather],
)
def describe_weather(cities: list[str]) -> StreamedResponse: ...


response = describe_weather(["Cape Town", "San Francisco"])

for item in response:
    if isinstance(item, StreamedStr):
        for chunk in item:
            # print the chunks as they are received
            print(chunk, sep="", end="")
        print()
    if isinstance(item, FunctionCall):
        # print the function call, then call it and print the result
        print(item)
        print(item())

# Hello! I'll get the weather for Cape Town and San Francisco for you.
# FunctionCall(<function get_weather at 0x1109825c0>, 'Cape Town')
# The weather in Cape Town is 20°C.
# FunctionCall(<function get_weather at 0x1109825c0>, 'San Francisco')
# The weather in San Francisco is 20°C.
```
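The async variants follow the same pattern. A sketch, assuming `AsyncStreamedResponse` and `AsyncStreamedStr` are importable from the top level like their sync counterparts:

```python
import asyncio

from magentic import prompt, AsyncStreamedResponse, AsyncStreamedStr, FunctionCall


def get_weather(city: str) -> str:
    return f"The weather in {city} is 20°C."


@prompt(
    "Say hello, then get the weather for: {cities}",
    functions=[get_weather],
)
async def describe_weather(cities: list[str]) -> AsyncStreamedResponse: ...


async def main() -> None:
    response = await describe_weather(["Cape Town", "San Francisco"])
    async for item in response:
        if isinstance(item, AsyncStreamedStr):
            async for chunk in item:
                print(chunk, end="")  # print chunks as they arrive
            print()
        if isinstance(item, FunctionCall):
            print(item())  # call the function and print its result


asyncio.run(main())
```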
PRs
- Test Ollama via `OpenaiChatModel` by @jackmpcollins in #281
- Rename test to test_openai_chat_model_acomplete_ollama by @jackmpcollins in #381
- Add `(Async)StreamedResponse` for multi-part responses by @jackmpcollins in #383
Full Changelog: v0.33.0...v0.34.0
v0.33.0
What's Changed
Warning

Breaking change: The prompt-function return type and the `output_types` argument to `ChatModel` must now contain `FunctionCall` or `(Async)ParallelFunctionCall` if these return types are desired. Previously, instances of these types could be returned even if they were not indicated in the output types.
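A sketch of what the change means for a prompt-function signature; the prompt text, function, and model behavior here are illustrative:

```python
from magentic import prompt, FunctionCall


def activate_oven(temperature: int) -> str:
    """Turn the oven on to the provided temperature."""
    return f"Preheating to {temperature} F"


# FunctionCall[str] must now appear in the return annotation for the
# LLM to be allowed to return a function call instead of plain text
@prompt(
    "Prepare the oven so I can bake at {temperature} F",
    functions=[activate_oven],
)
def configure_oven(temperature: int) -> str | FunctionCall[str]: ...
```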
- Dependency updates
- Improve development workflows
- Big internal refactor to prepare for future features. See PR #380 for details.
PRs
- Bump logfire-api from 0.49.0 to 0.52.0 by @dependabot in #327
- Bump litellm from 1.41.21 to 1.44.27 by @dependabot in #330
- Bump jupyterlab from 4.2.3 to 4.2.5 by @dependabot in #322
- Bump anthropic from 0.31.0 to 0.34.2 by @dependabot in #328
- Bump pydantic-settings from 2.3.4 to 2.5.2 by @dependabot in #332
- Bump notebook from 7.2.1 to 7.2.2 by @dependabot in #333
- Bump ruff from 0.5.2 to 0.6.5 by @dependabot in #331
- Bump jupyter from 1.0.0 to 1.1.1 by @dependabot in #335
- Bump logfire-api from 0.52.0 to 0.53.0 by @dependabot in #336
- Bump mkdocs-jupyter from 0.24.8 to 0.25.0 by @dependabot in #338
- Bump pytest-asyncio from 0.23.7 to 0.24.0 by @dependabot in #337
- Update precommit hooks by @jackmpcollins in #339
- Switch to uv from poetry by @jackmpcollins in #373
- Bump astral-sh/setup-uv from 2 to 3 by @dependabot in #374
- Use VCR for tests by @jackmpcollins in #375
- Add CONTRIBUTING.md by @jackmpcollins in #376
- Make VCR match on request body in tests by @jackmpcollins in #377
- Add make help command by @jackmpcollins in #378
- Bump astral-sh/setup-uv from 3 to 4 by @dependabot in #379
- Refactor to reuse stream parsing across ChatModels by @jackmpcollins in #380
Full Changelog: v0.32.0...v0.33.0