
fix: deepseek provider and gptme-eval docker improvements #270

Merged 1 commit into master on Nov 23, 2024

Conversation

ErikBjare
Owner

@ErikBjare ErikBjare commented Nov 22, 2024

  • Skip file handling for deepseek provider (does not support files)
  • Make eval results directory configurable via EVAL_RESULTS_DIR env var
  • Configure eval results directory in gptme-eval docker container

Important

Skip file handling for deepseek provider and make eval results directory configurable in gptme-eval Docker setup.

  • Behavior:
    • Skip file handling for deepseek provider in _process_file() in llm_openai.py.
    • Make eval results directory configurable via EVAL_RESULTS_DIR environment variable in main.py.
  • Docker:
    • Set EVAL_RESULTS_DIR in Dockerfile.eval to /app/eval_results.

This description was created by Ellipsis for cdf56fa. It will automatically update as commits are pushed.
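The Docker side of the change amounts to a one-line addition to Dockerfile.eval, a sketch using the path stated in the description above:

```dockerfile
# Dockerfile.eval (excerpt): write eval results to a known location in the container
ENV EVAL_RESULTS_DIR=/app/eval_results
```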

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


❌ Changes requested. Reviewed everything up to cdf56fa in 59 seconds

More details
  • Looked at 63 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 drafted comment based on config settings.
1. gptme/llm/llm_openai.py:174
  • Draft comment:
    Consider updating the comment to clarify that the function returns early if the provider is deepseek. This improves readability.
  • Reason this comment was not posted:
    Confidence changes required: 20%
    The check for model.provider == "deepseek" is correct, but the comment should be updated to reflect that the function returns early if the provider is deepseek. This is a minor readability improvement.

Workflow ID: wflow_dcfZxhKA5WMO4Qcs



@codecov-commenter

codecov-commenter commented Nov 22, 2024

Codecov Report

Attention: Patch coverage is 85.71429% with 1 line in your changes missing coverage. Please review.

Project coverage is 72.42%. Comparing base (7a797cf) to head (cdf56fa).

✅ All tests successful. No failed tests found.

Files with missing lines    Patch %   Lines
gptme/llm/llm_openai.py     75.00%    1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master     #270   +/-   ##
=======================================
  Coverage   72.42%   72.42%           
=======================================
  Files          64       64           
  Lines        4319     4323    +4     
=======================================
+ Hits         3128     3131    +3     
- Misses       1191     1192    +1     
Flag                                 Coverage Δ
anthropic/claude-3-haiku-20240307    71.10% <42.85%> (-0.12%) ⬇️
openai/gpt-4o-mini                   71.06% <85.71%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@ErikBjare ErikBjare merged commit 591b5be into master Nov 23, 2024
7 checks passed
@ErikBjare ErikBjare deleted the dev/provider-eval-fixes branch November 23, 2024 12:27