Expected Behavior
In release b1412 I can successfully run the convert-mpt-hf-to-gguf.py script.
Current Behavior
With the changes introduced in #3746, I get an error with every MPT model:
```
python convert-mpt-hf-to-gguf.py e:\hf\mpt-7b-storywriter\
gguf: loading model mpt-7b-storywriter
gguf: found 2 model parts
This gguf file is for Little Endian only
gguf: get model metadata
gguf: get tokenizer metadata
gguf: get gpt2 tokenizer vocab
Traceback (most recent call last):
  File "e:\hf\llama.cpp\convert-mpt-hf-to-gguf.py", line 140, in <module>
    if tokenizer.added_tokens_decoder[i].special:
AttributeError: 'GPTNeoXTokenizerFast' object has no attribute 'added_tokens_decoder'
```
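For anyone hitting the same traceback: `added_tokens_decoder` is only exposed by newer transformers releases, so a loop like the one at line 140 can be guarded with a fallback instead of raising AttributeError. This is a minimal sketch, not the convert script's actual code; the model path and the use of `AutoTokenizer` are illustrative assumptions:

```python
# Sketch of a defensive lookup (assumption: the installed transformers
# version may or may not expose added_tokens_decoder).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("e:/hf/mpt-7b-storywriter")  # hypothetical path

# Newer transformers return {token_id: AddedToken}; fall back to an
# empty dict on older versions instead of raising AttributeError.
added_tokens = getattr(tokenizer, "added_tokens_decoder", {})

for token_id, token in added_tokens.items():
    if token.special:  # .special marks control/special tokens
        print(f"special token {token_id}: {token.content}")
```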
Environment and Context
Windows 10, running the convert scripts in a conda environment with Python 3.10.13.
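Since the dependency on `added_tokens_decoder` comes from the installed transformers package rather than from Python itself, it may be worth confirming the version inside the conda environment. A quick check; the 4.34.0 threshold is my assumption and is not stated in this issue:

```python
# Print the installed transformers version; added_tokens_decoder is only
# available in sufficiently recent releases (assumed >= 4.34.0).
import transformers
print(transformers.__version__)
```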
Ah thanks, that solved the problem.
My (currently almost) complete set of MPT GGUF models is now available at https://huggingface.co/maddes8cht (alongside the Falcon models).