
Releases: jackmpcollins/magentic

v0.39.0

24 Feb 10:13

Full Changelog: v0.38.1...v0.39.0

v0.38.1

29 Jan 08:06

What's Changed

  • Fix function call with no args when using the Anthropic API by @ananis25 in #408

Full Changelog: v0.38.0...v0.38.1

v0.38.0

27 Jan 08:10

What's Changed

  • Async streamed response to api message conversion by @ananis25 in #405
  • Support AsyncParallelFunctionCall in message_to_X_message by @jackmpcollins in #406

Full Changelog: v0.37.1...v0.38.0

v0.37.1

24 Jan 05:51

What's Changed

Anthropic model message serialization now supports StreamedResponse in AssistantMessage. Thanks to @ananis25 🎉

PRs

  • add msg-to-anthropic-msg converter for streamedresponse by @ananis25 in #404

Full Changelog: v0.37.0...v0.37.1

v0.37.0

15 Jan 07:00

What's Changed

The @prompt_chain decorator can now accept a sequence of Message as input, like @chatprompt.

from magentic import prompt_chain, UserMessage

def get_current_weather(location: str, unit: str = "fahrenheit") -> dict:
    """Get the current weather in a given location"""
    return {"temperature": "72", "forecast": ["sunny", "windy"]}

@prompt_chain(
    template=[UserMessage("What's the weather like in {city}?")],
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str: ...

describe_weather("Boston")
# 'The weather in Boston is currently 72°F with sunny and windy conditions.'

Full Changelog: v0.36.0...v0.37.0

v0.36.0

12 Jan 06:35

What's Changed

The Chat class is now documented and importable from the top level.
docs: https://magentic.dev/chat/

from magentic import Chat, OpenaiChatModel, UserMessage

# Create a new Chat instance
chat = Chat(
    messages=[UserMessage("Say hello")],
    model=OpenaiChatModel("gpt-4o"),
)

# Append a new user message
chat = chat.add_user_message("Actually, say goodbye!")
print(chat.messages)
# [UserMessage('Say hello'), UserMessage('Actually, say goodbye!')]

# Submit the chat to the LLM to get a response
chat = chat.submit()
print(chat.last_message.content)
# 'Hello! Just kidding—goodbye!'

Full Changelog: v0.35.0...v0.36.0

v0.35.0

06 Jan 03:33

What's Changed

UserMessage now accepts image URLs, image bytes, and document bytes directly using the ImageUrl, ImageBytes, and DocumentBytes types.

Example of new UserMessage syntax and DocumentBytes

from pathlib import Path

from magentic import chatprompt, DocumentBytes, Placeholder, UserMessage
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel


@chatprompt(
    UserMessage(
        [
            "Repeat the contents of this document.",
            Placeholder(DocumentBytes, "document_bytes"),
        ]
    ),
    model=AnthropicChatModel("claude-3-5-sonnet-20241022"),
)
def read_document(document_bytes: bytes) -> str: ...


document_bytes = Path("...").read_bytes()
read_document(document_bytes)
# 'This is a test PDF.'

Full Changelog: v0.34.1...v0.35.0

v0.34.1

01 Dec 08:10

Full Changelog: v0.34.0...v0.34.1

v0.34.0

30 Nov 09:34

What's Changed

Add StreamedResponse and AsyncStreamedResponse to enable parsing responses that contain both text and tool calls. See PR #383 or the new docs at https://magentic.dev/streaming/#StreamedResponse (copied below) for more details.

⚡ StreamedResponse

Some LLMs have the ability to generate text output and make tool calls in the same response. This allows them to perform chain-of-thought reasoning or provide additional context to the user. In magentic, the StreamedResponse (or AsyncStreamedResponse) class can be used to request this type of output. This object is an iterable of StreamedStr (or AsyncStreamedStr) and FunctionCall instances.

!!! warning "Consuming StreamedStr"

The `StreamedStr` object must be iterated over before the next item in the `StreamedResponse` is processed, otherwise the string output will be lost. This is because the `StreamedResponse` and `StreamedStr` share the same underlying generator, so advancing the `StreamedResponse` iterator skips over the `StreamedStr` items. The `StreamedStr` object caches its chunks internally, so after it has been iterated once its content remains available.

In the example below, we prompt the LLM to generate a greeting and then call a function to get the weather for two cities. The StreamedResponse object is iterated over to print the output, with the StreamedStr and FunctionCall items it yields processed separately.

from magentic import prompt, FunctionCall, StreamedResponse, StreamedStr


def get_weather(city: str) -> str:
    return f"The weather in {city} is 20°C."


@prompt(
    "Say hello, then get the weather for: {cities}",
    functions=[get_weather],
)
def describe_weather(cities: list[str]) -> StreamedResponse: ...


response = describe_weather(["Cape Town", "San Francisco"])
for item in response:
    if isinstance(item, StreamedStr):
        for chunk in item:
            # print the chunks as they are received
            print(chunk, end="")
        print()
    if isinstance(item, FunctionCall):
        # print the function call, then call it and print the result
        print(item)
        print(item())

# Hello! I'll get the weather for Cape Town and San Francisco for you.
# FunctionCall(<function get_weather at 0x1109825c0>, 'Cape Town')
# The weather in Cape Town is 20°C.
# FunctionCall(<function get_weather at 0x1109825c0>, 'San Francisco')
# The weather in San Francisco is 20°C.

Full Changelog: v0.33.0...v0.34.0

v0.33.0

29 Nov 08:41

What's Changed

Warning

Breaking change: the prompt-function return type annotation and the output_types argument to ChatModel must now explicitly include FunctionCall or (Async)ParallelFunctionCall if these return types are desired. Previously, instances of these types could be returned even when they were not listed in the output types.

  • Dependency updates
  • Improve development workflows
  • Big internal refactor to prepare for future features. See PR #380 for details.

Full Changelog: v0.32.0...v0.33.0