---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
tags:
- safetensors
- pruna-ai
extra_gated_prompt: By clicking "Agree", you agree to the [FluxDev Non-Commercial License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/POLICY.md).
---
|
|
|
|
|
# Model Card for LenSch/caching_only

This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
|
|
|
|
|
## Usage

First things first, you need to install the pruna library:

```bash
pip install pruna
```
|
|
|
|
|
You can [use the diffusers library to load the model](https://huggingface.co/LenSch/caching_only?library=diffusers), but this might not include all optimizations by default.

To ensure that all optimizations are applied, load the model with the pruna library instead:
|
|
|
|
|
```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "LenSch/caching_only"
)
# we can then run inference using the methods supported by the base model
```
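
As a sketch of what that inference step might look like, assuming the base model is a diffusers text-to-image pipeline (which the `diffusers` load function in the Smash Configuration below suggests), the loaded model can be called like the original pipeline:

```python
# illustrative only: the exact call signature and arguments depend on the base pipeline
image = loaded_model("a photo of an astronaut riding a horse on the moon").images[0]
image.save("output.png")
```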
|
|
|
|
|
Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
|
|
|
|
|
## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
|
|
|
|
|
```json
{
  "batcher": null,
  "cacher": "fora",
  "compiler": "stable_fast",
  "factorizer": null,
  "kernel": null,
  "pruner": null,
  "quantizer": null,
  "fora_interval": 4,
  "fora_start_step": 1,
  "batch_size": 1,
  "device": "cuda:0",
  "device_map": null,
  "save_fns": [
    "save_before_apply"
  ],
  "load_fns": [
    "diffusers"
  ],
  "reapply_after_load": {
    "factorizer": null,
    "pruner": null,
    "quantizer": null,
    "kernel": null,
    "cacher": "fora",
    "compiler": "stable_fast",
    "batcher": null
  }
}
```
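
For reference, the active entries above (FORA caching plus stable-fast compilation) could be reproduced on a fresh base model with pruna's `smash` API. The snippet below is a minimal sketch, assuming the base model is FLUX.1-dev (per the license) and that the `fora_*` keys from `smash_config.json` map directly to `SmashConfig` entries:

```python
import torch
from diffusers import FluxPipeline
from pruna import SmashConfig, smash

# load the base pipeline (assumed here to be FLUX.1-dev, per the model license)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# mirror the non-null entries of smash_config.json
smash_config = SmashConfig()
smash_config["cacher"] = "fora"
smash_config["fora_interval"] = 4     # reuse cached features, recompute every 4th step
smash_config["fora_start_step"] = 1   # begin caching after the first denoising step
smash_config["compiler"] = "stable_fast"

smashed_pipe = smash(model=pipe, smash_config=smash_config)
```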
|
|
|
|
|
## Join the Pruna AI community!
|
|
|
|
|
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/JFQmtFKCjd)
[Reddit](https://www.reddit.com/r/PrunaAI/)