# FP8 Model with Low-Rank LoRA
- Source: https://huggingface.co/hum-ma/SDXL-models-GGUF/clip
- File: clip_g.safetensors
- FP8 Format: E5M2
- LoRA Rank: 64
- LoRA File: clip_g-lora-r64.safetensors
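The LoRA file compensates for FP8 rounding error. A minimal sketch of how such a pair of files could be produced, assuming the LoRA factors come from a truncated SVD of the quantization residual (the helper name `quantize_with_lora` is hypothetical, not from this repository):

```python
import torch

def quantize_with_lora(weight: torch.Tensor, rank: int = 64):
    """Cast a 2-D weight to FP8 E5M2 and capture the rounding error
    in rank-`rank` low-rank factors via truncated SVD."""
    w32 = weight.to(torch.float32)
    fp8 = w32.to(torch.float8_e5m2)          # lossy cast
    residual = w32 - fp8.to(torch.float32)   # what the FP8 cast lost
    U, S, Vh = torch.linalg.svd(residual, full_matrices=False)
    B = U[:, :rank] * S[:rank]               # (out, rank)
    A = Vh[:rank, :]                         # (rank, in)
    return fp8, A, B                         # B @ A approximates residual
```

Saving would then mirror the layout the inference code below expects: `fp8` under `key`, `A` under `lora_A.{key}`, and `B` under `lora_B.{key}`.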
## Usage (Inference)
```python
from safetensors.torch import load_file
import torch

# Load the FP8 base weights and the LoRA correction
fp8_state = load_file("clip_g-fp8-e5m2.safetensors")
lora_state = load_file("clip_g-lora-r64.safetensors")

# Reconstruct approximate original weights
reconstructed = {}
for key in fp8_state:
    if f"lora_A.{key}" in lora_state and f"lora_B.{key}" in lora_state:
        A = lora_state[f"lora_A.{key}"].to(torch.float32)
        B = lora_state[f"lora_B.{key}"].to(torch.float32)
        lora_weight = B @ A  # (out, rank) @ (rank, in) -> (out, in)
        fp8_weight = fp8_state[key].to(torch.float32)
        reconstructed[key] = fp8_weight + lora_weight
    else:
        reconstructed[key] = fp8_state[key].to(torch.float32)
```
Requires PyTorch ≥ 2.1 for FP8 support.
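The reconstructed float32 state dict can then be loaded into whatever text-encoder module matches the checkpoint's key layout. A hedged sketch (the helper `apply_reconstructed` and the use of `strict=False` are illustrative; the exact target class depends on your pipeline):

```python
import torch.nn as nn

def apply_reconstructed(model: nn.Module, state: dict) -> None:
    # strict=False tolerates key mismatches and surfaces them for
    # inspection, since the exact module layout is an assumption here.
    result = model.load_state_dict(state, strict=False)
    if result.missing_keys or result.unexpected_keys:
        print("missing:", result.missing_keys)
        print("unexpected:", result.unexpected_keys)
```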