Perplexity's Sonar Reasoning Model with their specific parameters enabled #8728
Unanswered · ori-spector asked this question in Q&A
Hey LiteLLM team! I'm building a feature to test out Perplexity's Sonar Reasoning model. I want to enable certain parameters (e.g. `search_domain_filter`, `return_images`, `return_related_questions`), since I'm on Perplexity tier 3. When I call their API directly with the same input and system prompt, I receive the requested values. However, when I make the same call through LiteLLM, those values are not returned.
Specifically, I'm calling the model through an async stream. The call itself works, but the extra fields I've enabled are not coming back in the response. I've attached the code snippet below; please let me know if I'm missing something.
```python
from litellm import acompletion

async def complete_web_search_stream(self, system: str, history: list, model="perplexity/sonar-reasoning"):
    response = await acompletion(
        model=model,
        messages=[system_prompt(system)] + history,  # system_prompt builds the system message dict
        temperature=self.temperature,
        return_related_questions=True,
        return_images=True,
        timeout=self.timeout,
        num_retries=self.num_retry,
        stream=True,
    )
```
Thanks!