---
license: apache-2.0
pipeline_tag: text-generation
tags:
- cortex.cpp
---

## Overview

The [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) project aims to pretrain a 1.1B-parameter Llama model on 3 trillion tokens. This is the chat model, fine-tuned on a diverse range of synthetic dialogues generated by ChatGPT.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [TinyLlama-1b](https://huggingface.co/cortexso/tinyllama/tree/1b) | `cortex run tinyllama:1b` |

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart)
2. Use it in the Jan model Hub:
```bash
cortexhub/tinyllama
```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the command:
```bash
cortex run tinyllama
```
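
Once the model is running, Cortex exposes an OpenAI-compatible HTTP API on localhost. As a minimal sketch, the snippet below builds a standard OpenAI-style chat-completion request body; the port (`39281`) and endpoint path are assumptions based on recent cortex.cpp defaults, so check your installation's docs before using them.

```python
import json

# Assumed default endpoint for a local cortex.cpp server; verify against
# your own installation (the port is configurable).
BASE_URL = "http://127.0.0.1:39281/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "tinyllama:1b") -> str:
    """Build an OpenAI-style chat-completion request body as a JSON string."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(payload)

if __name__ == "__main__":
    body = build_chat_request("What is TinyLlama?")
    print(body)
    # Send it with, e.g.:
    #   curl http://127.0.0.1:39281/v1/chat/completions \
    #        -H "Content-Type: application/json" -d "$BODY"
```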

## Credits

- **Author:** The TinyLlama project
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)
- **Paper:** [TinyLlama: An Open-Source Small Language Model](https://arxiv.org/abs/2401.02385)