---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
tags:
- safetensors
- pruna-ai
extra_gated_prompt: By clicking "Agree", you agree to the [FluxDev Non-Commercial
License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)
and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/POLICY.md).
---
# Model Card for LenSch/caching_only
This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage
First things first, you need to install the pruna library:
```bash
pip install pruna
```
You can [use the diffusers library to load the model](https://huggingface.co/LenSch/caching_only?library=diffusers), but this might not include all optimizations by default.
To ensure that all optimizations are applied, load the model with the pruna library:
```python
from pruna import PrunaModel
# load the smashed model; this applies the optimizations recorded in smash_config.json
loaded_model = PrunaModel.from_pretrained(
    "LenSch/caching_only"
)
# we can then run inference using the methods supported by the base model
```
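As a minimal usage sketch: the `load_fns` in `smash_config.json` point to diffusers and the license is FLUX.1-dev's, so the loaded model presumably behaves like a diffusers text-to-image pipeline. The prompt, step count, and output path below are illustrative, not part of this repository:
```python
# Minimal inference sketch, assuming the base model behaves like a
# diffusers text-to-image pipeline (prompt and step count are illustrative).
image = loaded_model(
    "a photo of an astronaut riding a horse",
    num_inference_steps=28,
).images[0]
image.save("output.png")
```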
For more information, visit [the Pruna documentation](https://docs.pruna.ai/en/stable/).
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
```json
{
"batcher": null,
"cacher": "fora",
"compiler": "stable_fast",
"factorizer": null,
"kernel": null,
"pruner": null,
"quantizer": null,
"fora_interval": 4,
"fora_start_step": 1,
"batch_size": 1,
"device": "cuda:0",
"device_map": null,
"save_fns": [
"save_before_apply"
],
"load_fns": [
"diffusers"
],
"reapply_after_load": {
"factorizer": null,
"pruner": null,
"quantizer": null,
"kernel": null,
"cacher": "fora",
"compiler": "stable_fast",
"batcher": null
}
}
```
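For context, FORA caching roughly reuses cached transformer features across denoising steps, recomputing them every `fora_interval` (here 4) steps starting at `fora_start_step` (here 1), while `stable_fast` compiles the pipeline for faster execution. A configuration like this can in principle be reproduced with pruna's `SmashConfig`/`smash` API; the sketch below is an assumption-laden illustration, with the FLUX.1-dev base pipeline supplied by the reader:
```python
# Sketch of how a configuration like the one above could be produced with
# pruna. The base pipeline is an assumption (FLUX.1-dev via diffusers), and
# the hyperparameter names mirror the smash_config.json entries shown above.
import torch
from diffusers import FluxPipeline
from pruna import SmashConfig, smash

base_pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

smash_config = SmashConfig()
smash_config["cacher"] = "fora"            # FORA feature caching
smash_config["fora_interval"] = 4          # recompute cached features every 4 steps
smash_config["fora_start_step"] = 1        # begin caching after the first step
smash_config["compiler"] = "stable_fast"   # compile with stable-fast

smashed_model = smash(model=base_pipe, smash_config=smash_config)
smashed_model.save_pretrained("caching_only")  # illustrative output path
```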
## 🌍 Join the Pruna AI community!
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/JFQmtFKCjd)
[Reddit](https://www.reddit.com/r/PrunaAI/)