feat: implementing openai wrapper #114
Conversation
- Added docs for the wrapper
- Modified the model params to fetch all the parameters of the OpenAI chat and completions APIs
- Added the config params for the Langfuse tracer: metadata, trace_name, session_id, release and version

Added the following tests:
- Chat completion with streaming
- Chat completion without streaming
- Completion with streaming
- Completion without streaming
- Few-shot learning with streaming
- Few-shot learning without streaming
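The tracer config described above might be wired up roughly like this. This is a hedged sketch: `wrapOpenAI` is a placeholder name standing in for this PR's wrapper, and the camelCase field names are assumptions derived from the description, not the PR's exact code.

```typescript
// Placeholder sketch: `wrapOpenAI` stands in for this PR's wrapper. The config
// field names mirror the PR description (metadata, trace_name, session_id,
// release, version) but are otherwise assumptions.
interface TracerConfig {
  traceName?: string;
  sessionId?: string;
  release?: string;
  version?: string;
  metadata?: Record<string, unknown>;
}

// Minimal stand-in that just records the config it was given, so the shape
// can be exercised without the real SDK or an OpenAI client.
const wrapOpenAI = <T extends object>(client: T, config: TracerConfig) => ({
  client,
  config,
});

const wrapped = wrapOpenAI({}, {
  traceName: "qa-chat",
  sessionId: "session-42",
  release: "1.0.0",
  version: "a",
  metadata: { env: "test" },
});
```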
…the input structure for all the calls and output structure to send the entire message in chat calls
…se/langfuse-js into noble-varghese/openai-integration
expect(generation.totalTokens).toBeDefined()
expect(generation.promptTokens).toBeDefined()
expect(generation.completionTokens).toBeDefined()
expect(generation.input).toBeDefined()
add message array to test here?
You mean statusMessage ?
I have added statusMessage. Did not find anything called message. Input and output are anyway being validated.
I meant checking whether the input is correct.
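The stricter assertion suggested here could compare the captured input against the exact messages array sent to the chat API, rather than only checking that it is defined. A minimal sketch, where the `generation` object is a simulated stand-in for what the wrapper records:

```typescript
// Sketch: assert the captured generation input equals the messages sent,
// not merely that it exists. The `generation` shape here is an assumption.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Say hello." },
];

// Simulated captured generation, standing in for the wrapper's trace output.
const generation = { input: messages };

function inputMatchesMessages(
  input: unknown,
  expected: Array<{ role: string; content: string }>
): boolean {
  // Deep-compare via JSON; adequate for plain message objects.
  return JSON.stringify(input) === JSON.stringify(expected);
}
```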
expect(generation.output).toMatch(content)
}, 10000);

it("Few Short learning with streaming", async () => {
what does this test add?
This tests the scenario where multiple messages are sent in the chat API, like a conversational exchange. It checks that the integration captures all of them in the trace.
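The few-shot scenario described here can be sketched as follows: several messages go out in one request, and a correct trace must contain the whole array rather than only the first element. The message contents are illustrative only.

```typescript
// Sketch of a few-shot chat request: multiple in-context examples in one
// messages array. Contents are illustrative, not from the PR's tests.
const fewShotMessages = [
  { role: "system", content: "Translate English to French." },
  { role: "user", content: "cheese" },
  { role: "assistant", content: "fromage" },
  { role: "user", content: "bread" },
];

// What a correct trace input should contain: every message, in order,
// not just fewShotMessages[0].
const capturedInput = [...fewShotMessages];
```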
  modelParams: Record<string, any>
}

const parseInputArgs = (args: Record<string, any>): InputArgsData => {
could map tools / tool_choice etc. to metadata
functions, function_call, tools and tool_choice are being sent as part of the input right now. Should that stay the same, with a copy additionally sent in the metadata, or should they be sent only in metadata?
The UI doesn't display tools and tool_choice in the regular display, but they are visible in JSON mode.
The input is correct; we need to fix the front-end rendering when tools are at the top level, as they are currently only rendered well when they are within a single message.
Makes sense not to move them to metadata, agreed!
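The split the thread settles on could look roughly like this: messages, tools, and tool_choice stay in the generation input, while sampling parameters go to modelParams. This is a hedged sketch; the key lists and handling are assumptions, not the PR's exact `parseInputArgs` implementation.

```typescript
// Sketch of the input/modelParams split discussed above. Which keys count as
// "input" is an assumption based on this thread, not the PR's exact code.
interface InputArgsData {
  input: Record<string, unknown>;
  modelParams: Record<string, unknown>;
}

const INPUT_KEYS = new Set([
  "messages",
  "prompt",
  "functions",
  "function_call",
  "tools",
  "tool_choice",
]);

const parseInputArgs = (args: Record<string, unknown>): InputArgsData => {
  const input: Record<string, unknown> = {};
  const modelParams: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(args)) {
    if (key === "model") continue; // model name is tracked separately
    (INPUT_KEYS.has(key) ? input : modelParams)[key] = value;
  }
  return { input, modelParams };
};
```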
langfuse/src/openai.ts
Outdated
createGeneration(
  output?: string,
  usage?: Record<string, any>,
Does it make sense to type usage more strictly, e.g. promptTokens, completionTokens?
It does make sense to have typing there. Realised that langfuse doesn't take the openai usage details directly. Have updated in code and added tests for the same :)
What kind of translation do you need to do here? Generally we tried to take the OpenAI object as-is.
This is more like a UI test case, which checks if the UI is rendering the chats properly. Backend test doesn't make a lot of sense actually.
Additionally, we can always confirm that the wrapper is not just taking the first element in the array.
Removing this test from the list.
…dditionally, added tests for usage, input, output and total costs in all tests
…ed own type since open.completion usage may tend to be different across versions
…ghese/openai-integration
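The usage translation the thread and the commit above allude to could be sketched like this: OpenAI reports snake_case token counts, while the Langfuse generation uses camelCase fields. The target field names are assumptions based on this thread, not the PR's exact type.

```typescript
// Sketch: map OpenAI's snake_case usage object onto camelCase fields, using
// our own type since OpenAI's usage shape may differ across versions.
// Field names on the Langfuse side are assumptions from this thread.
interface OpenAIUsage {
  prompt_tokens?: number;
  completion_tokens?: number;
  total_tokens?: number;
}

interface LangfuseUsage {
  promptTokens?: number;
  completionTokens?: number;
  totalTokens?: number;
}

const parseUsage = (usage?: OpenAIUsage): LangfuseUsage | undefined => {
  if (!usage) return undefined;
  return {
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    totalTokens: usage.total_tokens,
  };
};
```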
}
}

interface WrapperConfig {
I think going for camelCase here would be a better fit with the rest of the JS SDK (even though the OpenAI SDK uses snake_case). What do you think?
I agree, this makes sense. We use camel casing everywhere else, and since this is our wrapper, it makes sense to keep it camelCased.
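Since the OpenAI SDK itself uses snake_case, a small key converter could normalize user-supplied option names to the SDK's camelCase convention. This helper is purely an illustration of the convention agreed on here, not part of the PR:

```typescript
// Illustrative helper: convert a snake_case option name (OpenAI style) to the
// camelCase convention used across the JS SDK, e.g. trace_name -> traceName.
const snakeToCamel = (key: string): string =>
  key.replace(/_([a-z])/g, (_match, letter: string) => letter.toUpperCase());

// Apply the conversion to every key of a config object.
const camelizeKeys = (obj: Record<string, unknown>): Record<string, unknown> =>
  Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [snakeToCamel(k), v])
  );
```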
…yle to camel case in the wrapper
…se/langfuse-js into noble-varghese/openai-integration
…apped function to manually flush the data in code
…se/langfuse-js into noble-varghese/openai-integration
b88776b into hassieb/lfe-784-feat-add-openai-integration
Problem
Implementing the OpenAI wrapper for langfuse-js
Changes
Release info
Sub-libraries affected
Bump level
Libraries affected
Changelog notes