

feat: support for passing extra_headers to LitellmChatModel #426

Merged: 6 commits into jackmpcollins:main on Feb 24, 2025

Conversation

@ashwin153 (Contributor) commented Feb 11, 2025

My ollama instance runs behind a reverse proxy (Cloudflare Access) which uses custom headers (cf-access-client-id / cf-access-client-secret or cf-access-token) to authenticate the user. I'm able to access my instance using litellm in the following way, but I'd like to switch to using magentic.

import litellm

response = litellm.completion(
    model="ollama/deepseek-coder-v2",
    messages=[{"content": "generate a simple sql query", "role": "user"}],
    api_base="...",  # URL of the reverse proxy in front of ollama
    extra_headers={"cf-access-token": "..."},  # Cloudflare Access auth header
)

print(response)

closes #260
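
For reference, a rough sketch of what the equivalent magentic call could look like once this lands. The assumption that LitellmChatModel forwards extra_headers (alongside api_base) straight through to litellm.completion is based on this PR's title, not on released documentation:

from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

# Sketch only: assumes LitellmChatModel passes extra_headers through to
# litellm.completion, which is what this PR adds.
model = LitellmChatModel(
    "ollama/deepseek-coder-v2",
    api_base="...",  # URL of the reverse proxy in front of ollama
    extra_headers={"cf-access-token": "..."},  # Cloudflare Access auth header
)

@prompt("Generate a simple SQL query", model=model)
def generate_query() -> str: ...

print(generate_query())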

@ashwin153 (Contributor, Author)

The test fails when I run it locally, but I'm not sure why. Did I set something up incorrectly?

>               _is_litellm_router_call = "model_group" in kwargs.get(
                    "metadata", {}
                )  # check if call from litellm.router/proxy
E               TypeError: argument of type 'NoneType' is not iterable
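
For context, dict.get only falls back to the default when the key is missing; if "metadata" is present but set to None, the expression returns None and the in check raises exactly this error. A minimal reproduction (assuming that is what litellm receives here):

kwargs = {"metadata": None}  # key exists, value is None

metadata = kwargs.get("metadata", {})  # returns None, not the {} default
"model_group" in metadata  # TypeError: argument of type 'NoneType' is not iterable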

@jackmpcollins (Owner) commented Feb 11, 2025

@ashwin153 Thanks for this! For this test you'll need to add a VCR recording so that running in CI does not query the live APIs. That should just require running make test-vcr-once and committing the generated file. You might also want to review that file to ensure there are no API keys or other info in the headers that should remain private.

See here for an overview of vcr: https://vcrpy.readthedocs.io/en/latest/


It's also possible this is a bug and not related to vcr!
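
For illustration, a cassette-backed test might look roughly like the sketch below, using pytest-recording's vcr marker. The model name, header, and exact ChatModel.complete call here are assumptions for the sketch, not the test actually added in this PR:

import pytest

from magentic import UserMessage
from magentic.chat_model.litellm_chat_model import LitellmChatModel

@pytest.mark.vcr  # replays a recorded cassette instead of calling the live API
def test_litellm_chat_model_extra_headers():
    chat_model = LitellmChatModel(
        "gpt-4o-mini",
        extra_headers={"x-example-header": "value"},  # assumed passthrough kwarg
    )
    message = chat_model.complete(messages=[UserMessage("Say hello")])
    assert isinstance(message.content, str)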

@ashwin153 (Contributor, Author)

Ok sweet, tests pass now. Thanks for the tip!

@ashwin153 (Contributor, Author)

@jackmpcollins Everything ok here?

@jackmpcollins (Owner)

> @jackmpcollins Everything ok here?

@ashwin153 Thanks for the ping! Looks great. I just changed the test to use the OpenAI API for the VCR recording and to check the litellm callback.

jackmpcollins merged commit 744bfb3 into jackmpcollins:main on Feb 24, 2025 (1 check passed)