Update README.md
README.md (CHANGED)
@@ -61,15 +61,31 @@ wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/llava

# int4 llm
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/llava-llama-3-8b-v1_1-int4.gguf

# (optional) ollama fp16 modelfile
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/OLLAMA_MODELFILE_F16

# (optional) ollama int4 modelfile
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/OLLAMA_MODELFILE_INT4
```
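The downloaded `OLLAMA_MODELFILE_*` files typically tell ollama where the GGUF weights and the vision projector live and which chat template and stop tokens to use. The sketch below is illustrative only; the file names and parameters are assumptions, not the contents of the real modelfiles, so use the downloaded files for actual runs.

```bash
# Illustrative only: what a minimal Modelfile of this kind can look like
# (assumed file names; the real OLLAMA_MODELFILE_F16 downloaded above already
# carries the correct template and parameters for llava-llama-3-8b-v1_1).
cat > Modelfile.example <<'EOF'
FROM ./llava-llama-3-8b-v1_1-f16.gguf
FROM ./llava-llama-3-8b-v1_1-mmproj-f16.gguf
PARAMETER stop "<|eot_id|>"
EOF
```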

### Chat with `ollama`

```bash
# fp16
ollama create llava-llama3-f16 -f ./OLLAMA_MODELFILE_F16
ollama run llava-llama3-f16 "xx.png Describe this image"

# int4
ollama create llava-llama3-int4 -f ./OLLAMA_MODELFILE_INT4
ollama run llava-llama3-int4 "xx.png Describe this image"
```
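The `ollama create`/`ollama run` commands above are the intended workflow. If you prefer to drive the model from a script, ollama also exposes a local HTTP API (default port 11434). A minimal sketch, assuming the `llava-llama3-int4` model created above and a GNU `base64` that accepts `-w0`:

```bash
# Query the locally created model via ollama's REST API (ollama must be running).
# `base64 -w0` is GNU coreutils; on macOS use `base64 -i xx.png` instead.
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llava-llama3-int4\",
  \"prompt\": \"Describe this image\",
  \"images\": [\"$(base64 -w0 xx.png)\"],
  \"stream\": false
}"
```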

### Chat with `llama.cpp`

1. Build [llama.cpp](https://github.com/ggerganov/llama.cpp) ([docs](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage)).
2. Build `./llava-cli` ([docs](https://github.com/ggerganov/llama.cpp/tree/master/examples/llava#usage)); a rough build sketch follows below.
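The linked docs are the authoritative build instructions. As a quick orientation only, one possible CMake-based build (target and flag names follow upstream at the time of writing and may lag behind later changes):

```bash
# Clone llama.cpp and build the llava-cli example with CMake.
# Check the docs linked above if the build layout has changed upstream.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release --target llava-cli
```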

Note: llava-llama-3-8b-v1_1 uses the Llama-3-instruct chat template.

```bash