
MaziyarPanahi / Meta-Llama-3-8B-Instruct-GGUF

Submitted by Style Pass
2024-04-19 09:30:04

main.exe --model models/new3/Meta-Llama-3-8B-Instruct.Q8_0.gguf --color --threads 30 --keep -1 --batch-size 512 --n-predict -1 --repeat-penalty 1.1 --ctx-size 0 --interactive -ins -ngl 99 --simple-io -r '<|eot_id|>' --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n"
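The `--in-prefix`, `--in-suffix`, and `-p` strings in that command together reproduce the Llama 3 Instruct chat template. As a minimal sketch of how those pieces fit together, here is a small Python helper that assembles the same prompt; `build_llama3_prompt` is a hypothetical illustration, not part of llama.cpp:

```python
# Sketch of the Llama 3 Instruct prompt layout implied by the flags above.
# build_llama3_prompt is an illustrative helper, not a llama.cpp API.

def build_llama3_prompt(system_msg: str, user_msg: str) -> str:
    """Assemble a Llama 3 Instruct prompt for one system + one user turn."""
    return (
        # Corresponds to the -p (initial prompt) argument:
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_msg}<|eot_id|>"
        # Corresponds to --in-prefix (wraps each user turn):
        "\n<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}"
        # Corresponds to --in-suffix (hands the turn to the assistant):
        "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Hi!",
)
```

The `-r '<|eot_id|>'` flag then tells llama.cpp to stop generating once the model emits that end-of-turn token, returning control to the user.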

That Llama 3 8B is INSANE. Better than anything I've seen, even better than WizardLM-2 8x22B! (the question with the apple sentence, etc.) That is a huge leap into another dimension for small-model performance... I just can't believe such a small model (8B!) can be so smart and have such high reasoning capability. I'm afraid to test the 70B version ;)
