---
language:
  - en
tags:
  - mlx
  - apple-silicon
  - liquidai
  - lfm2
  - moe
  - transformer
  - long-context
  - instruct
  - quantized
  - 8bit
  - Mixture of Experts
  - coding
  - mlx-my-repo
pipeline_tag: text-generation
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
base_model: mlx-community/LFM2-8B-A1B-8bit-MLX
model-index:
  - name: LFM2-8B-A1B MLX (Apple Silicon), 8-bit, with guidance on MoE and RAM planning
    results: []
---

# introvoyz041/LFM2-8B-A1B-8bit-MLX-mlx-8Bit

The model **introvoyz041/LFM2-8B-A1B-8bit-MLX-mlx-8Bit** was converted to MLX format from [mlx-community/LFM2-8B-A1B-8bit-MLX](https://huggingface.co/mlx-community/LFM2-8B-A1B-8bit-MLX) using mlx-lm version **0.28.3**.
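
For reference, a conversion like this can be reproduced with the mlx-lm Python API. A minimal sketch, assuming the `convert` helper exposed by mlx-lm; the local output path is illustrative:

```python
from mlx_lm import convert

# Convert the source Hub repo to MLX format with 8-bit quantized weights.
# mlx_path is an illustrative local output directory.
convert(
    "mlx-community/LFM2-8B-A1B-8bit-MLX",
    mlx_path="LFM2-8B-A1B-8bit-MLX-mlx-8Bit",
    quantize=True,  # enable quantization
    q_bits=8,       # 8-bit weights
)
```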

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hub.
model, tokenizer = load("introvoyz041/LFM2-8B-A1B-8bit-MLX-mlx-8Bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the
# instruct model sees the conversation format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
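
For longer outputs, mlx-lm can also stream tokens as they are produced rather than returning the completion all at once. A minimal sketch using `stream_generate` from the mlx-lm Python API; the `max_tokens` value is illustrative:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("introvoyz041/LFM2-8B-A1B-8bit-MLX-mlx-8Bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Print each chunk of generated text as it arrives instead of waiting
# for the full completion.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```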