
Added RISC-V Vector Support for K-Quants and improved the existing intrinsics #3453

Merged (2 commits) on Oct 3, 2023

Conversation

@Tameem-10xE (Contributor) commented Oct 3, 2023

Hi,

In #2929 we added RISC-V intrinsics for the dot product functions in GGML. This PR improves those existing dot product functions in ggml.c and also adds new RISC-V vector intrinsics for the k-quants and the row-quantize (Q8_0 and Q8_1) functions. llama.cpp can now run fully on RISC-V vector processors with GGUF.

Going forward, this will enable GGML and llama.cpp to run efficiently on RISC-V hardware with vector support, and it also opens the way to comparing performance against other SIMD/vector extensions such as Intel AVX and Arm NEON.

Update: I got access to a RISC-V vector board with 8 cores and 4 GB of RAM; the vector version is 6-7 times faster than the scalar version on the same board.

Running a llama.cpp model on RVV 1.0 vs. RISC-V scalar


RISC-V Vector intrinsics support is added for the following k-quants functions, for both QK_K = 256 and QK_K = 64 block sizes:

   ggml_vec_dot_q2_K_q8_K
   ggml_vec_dot_q3_K_q8_K
   ggml_vec_dot_q4_K_q8_K
   ggml_vec_dot_q5_K_q8_K
   ggml_vec_dot_q6_K_q8_K


The RVV intrinsics are also added for the following Q8 quantize-row functions:

    quantize_row_q8_0
    quantize_row_q8_1


The following dot product functions have also been optimized by using fractional LMUL (1/2) instead of LMUL = 1. I am a little skeptical of this change: it works correctly, but I have noticed some decrease in inference accuracy, which could be a problem with my system or weights. I still prefer to keep it, since it uses far fewer vector registers for the widened products (a minimal sketch of the difference follows the list below).

    ggml_vec_dot_q4_0_q8_0
    ggml_vec_dot_q4_1_q8_1
    ggml_vec_dot_q5_0_q8_0
    ggml_vec_dot_q5_1_q8_1
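
To illustrate what the fractional LMUL change means, here is a minimal sketch under my own assumptions (not the actual dot-product code from this PR): with LMUL = 1 the 16-bit widened products occupy a two-register group, while with LMUL = 1/2 they fit in a single register, leaving more registers free after the product.

#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// LMUL = 1: 8-bit operands each take a full register, so the widened
// 16-bit products need a two-register group (i16m2)
vint16m2_t widen_mul_m1(const int8_t * a, const int8_t * b, size_t n) {
    size_t vl = __riscv_vsetvl_e8m1(n);
    vint8m1_t va = __riscv_vle8_v_i8m1(a, vl);
    vint8m1_t vb = __riscv_vle8_v_i8m1(b, vl);
    return __riscv_vwmul_vv_i16m2(va, vb, vl);
}

// LMUL = 1/2: 8-bit operands take half a register each, so the widened
// 16-bit products fit in a single register (i16m1), using fewer registers
vint16m1_t widen_mul_mf2(const int8_t * a, const int8_t * b, size_t n) {
    size_t vl = __riscv_vsetvl_e8mf2(n);
    vint8mf2_t va = __riscv_vle8_v_i8mf2(a, vl);
    vint8mf2_t vb = __riscv_vle8_v_i8mf2(b, vl);
    return __riscv_vwmul_vv_i16m1(va, vb, vl);
}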


Finally, the vector initialization in Q5 that used a temporary index array is replaced with the vid.v intrinsic (illustrated below).
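
As a rough illustration (my own sketch, not the code from this PR), vid.v generates the index sequence 0, 1, 2, ... directly in a vector register instead of loading it from a temporary array:

#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// before (sketch): indices loaded from a small temporary array
// static const uint8_t tmp_idx[32] = {0, 1, 2, 3, /* ... */ 31};
// vuint8m1_t vidx = __riscv_vle8_v_u8m1(tmp_idx, vl);

// after (sketch): indices generated in-register, element i gets the value i
vuint8m1_t make_index_vector(size_t vl) {
    return __riscv_vid_v_u8m1(vl);
}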


[Compilation]
Ubuntu: 22.10
riscv-toolchain: 2023.07.05 riscv64 linux glibc

To compile it for RISC-V, run:

$   make   llama-cli                   # For RISC-V CPU
$   make clean
$   make   RISCV_CROSS_COMPILE=1       # For Cross Compilation only


[Directly on RISC-V CPU]

$   ./llama-cli -m ./path/to/model.gguf -p "Anything" -n 50

[QEMU]

$   qemu-riscv64 -L /path/to/sysroot/  -cpu rv64,v=true,vlen=256,elen=64,vext_spec=v1.0 ./llama-cli -m ./path/to/model.gguf -p "Anything" -n 50

Note: Running under the QEMU emulator can be very slow and may take 2-5 minutes per token.

Any feedback is welcome; if you have any suggestions or improvements, especially for the fractional LMUL change, please share.

Thanks!

…e existing dot product function for risc-v.

The RVV intrinsics are added for the following quantize row functions
   quantize_row_q8_0
   quantize_row_q8_1

The following dot product functions have also been optimized by using LMUL = 1/2 instead of LMUL = 1
   ggml_vec_dot_q4_0_q8_0
   ggml_vec_dot_q4_1_q8_1
   ggml_vec_dot_q5_0_q8_0
   ggml_vec_dot_q5_1_q8_1

And the vector initialization in Q5 using a temporary array is also replaced by the vid intrinsics

Signed-off-by: Ahmad Tameem <[email protected]>
This adds RISC-V Vector intrinsics support for the following K_quants functions for both QKK = 256 and QKK = 64
   ggml_vec_dot_q2_K_q8_K
   ggml_vec_dot_q3_K_q8_K
   ggml_vec_dot_q4_K_q8_K
   ggml_vec_dot_q5_K_q8_K
   ggml_vec_dot_q6_K_q8_K

Signed-off-by: Ahmad Tameem <[email protected]>
@ggerganov merged commit 79f34ab into ggml-org:master Oct 3, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 5, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp: (24 commits)
  convert : fix Baichuan2 models by using vocab size in config.json (ggml-org#3299)
  readme : add project status link
  ggml : fix build after ggml-org#3329
  llm : add Refact model (ggml-org#3329)
  sync : ggml (conv 1d + 2d updates, UB fixes) (ggml-org#3468)
  finetune : readme fix typo (ggml-org#3465)
  ggml : add RISC-V Vector Support for K-Quants and improved the existing intrinsics (ggml-org#3453)
  main : consistent prefix/suffix coloring (ggml-org#3425)
  llama : fix session saving/loading (ggml-org#3400)
  llama : expose model's rope_freq_scale in the API (ggml-org#3418)
  metal : alibi for arbitrary number of heads (ggml-org#3426)
  cmake : make LLAMA_NATIVE flag actually use the instructions supported by the processor (ggml-org#3273)
  Work on the BPE tokenizer (ggml-org#3252)
  convert : fix vocab size when not defined in hparams (ggml-org#3421)
  cmake : increase minimum version for add_link_options (ggml-org#3444)
  CLBlast: Add broadcast support for matrix multiplication (ggml-org#3402)
  gguf : add BERT, MPT, and GPT-J arch info (ggml-org#3408)
  gguf : general usability improvements (ggml-org#3409)
  cmake : make CUDA flags more similar to the Makefile (ggml-org#3420)
  finetune : fix ggml-org#3404 (ggml-org#3437)
  ...
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023
…ng intrinsics (ggml-org#3453)

* Added RVV intrinsics support for Q8 quantize row and also improved the existing dot product function for risc-v.

The RVV intrinsics are added for the following quantize row functions
   quantize_row_q8_0
   quantize_row_q8_1

The following dot product functions have also been optimized by using LMUL = 1/2 instead of LMUL = 1
   ggml_vec_dot_q4_0_q8_0
   ggml_vec_dot_q4_1_q8_1
   ggml_vec_dot_q5_0_q8_0
   ggml_vec_dot_q5_1_q8_1

And the vector initialization in Q5 using a temporary array is also replaced by the vid intrinsics

Signed-off-by: Ahmad Tameem <[email protected]>

* Added RVV intrinsics support for k_quants

This adds RISC-V Vector intrinsics support for the following K_quants functions for both QKK = 256 and QKK = 64
   ggml_vec_dot_q2_K_q8_K
   ggml_vec_dot_q3_K_q8_K
   ggml_vec_dot_q4_K_q8_K
   ggml_vec_dot_q5_K_q8_K
   ggml_vec_dot_q6_K_q8_K

Signed-off-by: Ahmad Tameem <[email protected]>

---------

Signed-off-by: Ahmad Tameem <[email protected]>
@Tameem-10xE deleted the llama-rvv branch October 10, 2023 09:03
@grigohas

Hello, I am doing what you suggested and I have results. I have two questions: when I want to run it without the vector processor in QEMU, what command do I have to run? Also, how can I check that those two runs are different and that the one with the vector processor is working as I wanted? Sorry, I am new to this.

@Tameem-10xE (Contributor Author)

Hi, to run on the CPU (scalar), provide the path to the RISC-V toolchain and then use QEMU:

make llama-cli CC="riscv64-unknown-linux-gnu-gcc -march=rv64gc -mabi=lp64d" CXX="riscv64-unknown-linux-gnu-g++ -march=rv64gc -mabi=lp64d"
qemu-riscv64 -L /path/to/sysroot/  -cpu rv64 ./llama-cli -m ./path/to/model.gguf -p "Anything" -n 100

You can set the seed to get the same results, e.g. llama-cli -s <some_seed_number> ...

More details: RVV article
Also, this PR is old and many things have changed since, e.g. main -> llama-cli.

Thank you

@grigohas

Hi, to run on the CPU (scalar), provide the path to the RISC-V toolchain and then use QEMU:

make llama-cli CC="riscv64-unknown-linux-gnu-gcc -march=rv64gc -mabi=lp64d" CXX="riscv64-unknown-linux-gnu-g++ -march=rv64gc -mabi=lp64d"
qemu-riscv64 -L /path/to/sysroot/  -cpu rv64 ./llama-cli -m ./path/to/model.gguf -p "Anything" -n 100

You can set the seed to get the same results, e.g. llama-cli -s <some_seed_number> ...

More details: RVV article. Also, this PR is old and many things have changed since, e.g. main -> llama-cli.

Thank you

Yeah, I read that article, but when I run the make command you provided I get a "march=native" error, and from what I found in the Makefile I have to pass RISCV_CROSS_COMPILE=1 RISCV=1.

@Tameem-10xE (Contributor Author) commented Jul 10, 2024

Sorry, yes. I just noticed the Makefile has been reorganized and RISCV=1 is required in the current version.

@Tameem-10xE (Contributor Author) commented Jul 10, 2024

After line 432 in the Makefile, replace the vector-version flags with the scalar ones, i.e.

MK_CFLAGS += -march=rv64gc -mabi=lp64d
MK_CXXFLAGS += -march=rv64gc -mabi=lp64d

and then build for QEMU with:

make llama-cli RISCV=1 CC="riscv64-unknown-linux-gnu-gcc" CXX="riscv64-unknown-linux-gnu-g++"

@grigohas

Okay, one last question: I use the same seed and I have results both with and without vector, but the only difference in the log is the timing; with vector it is 2-2.5x longer than without. Is that correct?

@Tameem-10xE (Contributor Author)

Yes, under QEMU the vector emulation is much slower (the exact reason is not known to me; it could be that QEMU has to emulate the vector unit on top of the scalar one, or parallel-processing issues, and the log also uses wall-clock time for comparison), but this should not be the case on an actual RISC-V vector board.

@grigohas

Hello again, I am running llama with the vector extension on gem5, but since there is nothing in the log to check whether the vector extension is enabled, how do I know?

@Tameem-10xE (Contributor Author)

Hi, I have submitted a PR (#9442) which will print RISCV_VECT=1 on the terminal if the vector processor is found. I also slightly changed the Makefile so it no longer requires a flag for RISC-V vector boards; only RISCV_CROSS_COMPILATION=1 is needed for the emulator (i.e. QEMU).

The following is the output from the RISC-V BPI-F3 board with vector support:
[screenshots: llama-cli terminal output from the BPI-F3 board]

@grigohas

Hello, I have a question: why is the load time lower when RVV is enabled? The load time is the time to load the model, right? How does RVV affect it?

@Tameem-10xE (Contributor Author) commented Nov 27, 2024

Hi, actually that was due to quantization happening while the weights are loaded into memory (not all weights are quantized ahead of time; some are quantized while being loaded into memory; specifically, see the function quantize_row_q8_0_reference in GGML's quants.c, or quantize_row_q4_K). Enabling auto-vectorization can also affect the load time.
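
To make it concrete, here is a simplified scalar sketch of Q8_0 row quantization (block layout and field names are simplified; the real ggml code stores the scale in fp16 and uses its own block struct):

#include <math.h>
#include <stdint.h>

#define QK8_0 32                          // elements per block

typedef struct {
    float  d;                             // per-block scale
    int8_t qs[QK8_0];                     // quantized values
} block_q8_0_sketch;

// quantize k floats (k must be a multiple of QK8_0) into int8 blocks
static void quantize_row_q8_0_sketch(const float * x, block_q8_0_sketch * y, int k) {
    const int nb = k / QK8_0;
    for (int i = 0; i < nb; i++) {
        float amax = 0.0f;                // absolute maximum in this block
        for (int j = 0; j < QK8_0; j++) {
            const float v = fabsf(x[i*QK8_0 + j]);
            if (v > amax) amax = v;
        }
        const float d  = amax / 127.0f;
        const float id = d != 0.0f ? 1.0f/d : 0.0f;
        y[i].d = d;
        for (int j = 0; j < QK8_0; j++) {
            y[i].qs[j] = (int8_t) roundf(x[i*QK8_0 + j] * id);
        }
    }
}

Since this per-row loop runs at load time for some tensors, vectorizing it (by hand or via compiler auto-vectorization) directly shortens the reported load time.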

@grigohas

Okay, but why does the quantization affect the loading time? Can you explain what changes when RVV is enabled?

@Tameem-10xE (Contributor Author)

During the model's initialization phase there could be several reasons, such as auto-vectorization of memory operations by the compiler, recalculation, decompression, and data alignment. However, during benchmarking I noticed the biggest change came from the quantize_row function, since it performs some computation before the weights are fully loaded into memory.

@grigohas

Okay, so when RVV is enabled, what happens with the quantize_row function so that the load time is lower? Also, if the model is already quantized, is there any change?

@grigohas commented Nov 29, 2024

When you enable RVV, does the quantize_row function you mentioned, the one performing computations, not execute?
@Tameem-10xE

@Tameem-10xE (Contributor Author) commented Nov 30, 2024

Sorry, I last worked on this project a year ago and did not go deeper into how it works beyond what I could quantify. I might be mistaken or unclear about which functions affect the load time (also, many things have changed since). I think you should ask in the GitHub discussions, or on Discord if they have one. I also noticed a reduction in load time on x86 with vector instructions (~5x), and I think the most probable cause is compiler auto-vectorization. If not, they may be able to give you the exact reason.

@grigohas commented Dec 2, 2024

Sorry, I last worked on this project a year ago and did not go deeper into how it works beyond what I could quantify. I might be mistaken or unclear about which functions affect the load time (also, many things have changed since). I think you should ask in the GitHub discussions, or on Discord if they have one. I also noticed a reduction in load time on x86 with vector instructions (~5x), and I think the most probable cause is compiler auto-vectorization. If not, they may be able to give you the exact reason.

Okay, thank you very much. Are the weights quantized differently in scalar mode than with RVV on? I mean, does it make sense to compare the results of scalar with RVV on, or do they follow a different loading procedure?

@Tameem-10xE (Contributor Author)

You're welcome!
No, the weights are quantized the same for scalar and RVV (the llama-quantize tool is independent of this). I think it does not matter much since loading is a one-time process; the more important metric for comparison is inference time, i.e. how many tokens are generated per second with RVV on versus scalar.

@grigohas commented Jan 2, 2025

Hello again, I am simulating a RISC-V environment through gem5 and running llama-cli with a Llama 4B model. When I change the VLEN and ELEN of the RVV to more than 256 bits and 64 bits, the generated phrase I get does not make sense. Does the llama implementation only work on RVV with VLEN=256 and ELEN=64?

@Tameem-10xE (Contributor Author) commented Jan 2, 2025

Hi, what output did you get?
Some functions use a dynamic vl, i.e. size_t vl = __riscv_vsetvl_e8m1(qk/2);.
But to avoid duplicating instructions when loading qh and other smaller arrays, I limited the vector length to 256 bits; a smaller VLEN should work without any issues.
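
For context, this is roughly what the dynamic-vl (strip-mining) pattern looks like; a minimal sketch with a placeholder operation, not the actual ggml code:

#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// sum n signed bytes, letting vl adapt to whatever VLEN the hardware has
int32_t sum_bytes_rvv(const int8_t * x, size_t n) {
    int32_t sum = 0;
    vint32m1_t acc0 = __riscv_vmv_v_x_i32m1(0, __riscv_vsetvlmax_e32m1());
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e8m1(n);                 // vl <= n, vl <= VLEN/8
        vint8m1_t  v = __riscv_vle8_v_i8m1(x, vl);          // load vl bytes
        vint16m2_t w = __riscv_vwadd_vx_i16m2(v, 0, vl);    // widen to 16 bits
        vint32m1_t r = __riscv_vwredsum_vs_i16m2_i32m1(w, acc0, vl);
        sum += __riscv_vmv_x_s_i32m1_i32(r);                // accumulate scalar
        x += vl;
        n -= vl;
    }
    return sum;
}

A hard-coded vl (e.g. 8 or 16) avoids extra setup for short arrays like qh, but it also means the code does not automatically scale to a wider VLEN.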

@grigohas commented Jan 2, 2025

For RVV with VLEN=512 and ELEN=64 I got this: [screenshot of garbled model output]

@Tameem-10xE (Contributor Author)

..., this could be a bug; either I missed something, or the second half of the register could be interfering with the output, causing undefined or junk text. Sorry, due to other tasks I will not be able to look into this for now, but in the meantime you can file an issue or ask further about it in the RISC-V intrinsics repo.

@grigohas commented Jan 5, 2025

Now that we build llama.cpp with CMake instead of the make command, what file do I have to change to build it for RISC-V but without the vector extension? I changed the -march in the Makefile but it was built with the vector extension again.

@Tameem-10xE (Contributor Author)

You can use the -DGGML_RVV=OFF flag, i.e.:

mkdir build
cd build

cmake .. -DGGML_RVV=OFF [... other flags ...]
cmake --build .

@grigohas

..., this could be a bug; either I missed something, or the second half of the register could be interfering with the output, causing undefined or junk text. Sorry, due to other tasks I will not be able to look into this for now, but in the meantime you can file an issue or ask further about it in the RISC-V intrinsics repo.

Hello again. What do you think I can change to fix this bug so the executable can run with a higher VLEN and produce correct results?

@Tameem-10xE (Contributor Author)

Usually this could be due to the vl variable, the outer or inner for loop (for (int i = 0; i < nb; i++) not completing or missing edge elements), or the vsetvli instruction. Also, I am unsure about the gem5 simulation, which could behave differently from the QEMU or RV board I tested on (with 256-bit VLEN). Sorry, I have this in mind and will try to fix it next week.
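
To illustrate the kind of failure mode described above, here is a hypothetical sketch (not the actual ggml code) where a hard-coded vl silently skips the tail elements when the length is not a multiple of vl:

#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// buggy sketch: processes elements in fixed groups of vl and never touches
// the last n % vl elements, which then show up as garbage in the result
void add_one_fixed_vl(int8_t * x, size_t n) {
    const size_t vl = 8;                              // fixed, independent of VLEN
    for (size_t i = 0; i + vl <= n; i += vl) {
        vint8m1_t v = __riscv_vle8_v_i8m1(x + i, vl); // load 8 elements
        v = __riscv_vadd_vx_i8m1(v, 1, vl);           // x[i..i+7] += 1
        __riscv_vse8_v_i8m1(x + i, v, vl);            // store back
    }
    // missing: a tail pass (or a dynamic vl via __riscv_vsetvl_e8m1) for n % vl
}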

@grigohas commented Feb 3, 2025

Usually this could be due to the vl variable, the outer or inner for loop (for (int i = 0; i < nb; i++) not completing or missing edge elements), or the vsetvli instruction. Also, I am unsure about the gem5 simulation, which could behave differently from the QEMU or RV board I tested on (with 256-bit VLEN). Sorry, I have this in mind and will try to fix it next week.

Any news?

@Tameem-10xE (Contributor Author)

Really sorry for the late response; I was inactive last week.
Actually, I only have access to 256-bit RV hardware, so my only option was to check it on an emulator (QEMU), and it works correctly at 512-bit VLEN with Q4_K. I also went through the code but was unable to find any issue. It may be an issue with gem5, but I can't say for sure.

@grigohas commented Feb 3, 2025

Really sorry for the late response; I was inactive last week. Actually, I only have access to 256-bit RV hardware, so my only option was to check it on an emulator (QEMU), and it works correctly at 512-bit VLEN with Q4_K. I also went through the code but was unable to find any issue. It may be an issue with gem5, but I can't say for sure.

Can you explain how you run it on QEMU? I just ran llama-cli with vlen=512 and got this generated phrase with a q4_k model:
"" Anything�stagemblụ shelterirasichenForKeyronessageHinttexttåk Weltkrieg ""

@Tameem-10xE (Contributor Author)

Sorry, I plugged in the wrong weights above (I need to change my naming convention...).

I identified the issue in the function void ggml_vec_dot_q4_K_q8_K in the ggml-cpu-quant.c file. I am unable to test this right now, but try changing the value of the variable size_t vl = 8 in this function to either 16 or 32 (only for 4-bit q4_K).

@grigohas commented Feb 3, 2025

Sorry, I plugged in the wrong weights above (I need to change my naming convention...).

I identified the issue in the function void ggml_vec_dot_q4_K_q8_K in the ggml-cpu-quant.c file. I am unable to test this right now, but try changing the value of the variable size_t vl = 8 in this function to either 16 or 32 (only for 4-bit q4_K).

I changed the vl to 16 and 32 and it still does not work correctly. The same happens for other models too; I tried a q2_k model at VLEN=512 and the generated phrase is not correct.
