AI & ML interests

None defined yet.

Recent Activity

andito authored a paper about 2 months ago
FineVision: Open Data Is All You Need
ariG23498 authored a paper about 2 months ago
FineVision: Open Data Is All You Need
sergiopaniego updated a Space 6 months ago
visionLMsftw/comparevlms

sergiopaniego posted an update about 23 hours ago
ICYMI, you can fine-tune open LLMs using Claude Code

just tell it:
“Fine-tune Qwen3-0.6B on open-r1/codeforces-cots”

and Claude submits a real training job on HF GPUs using TRL.

it handles everything:
> dataset validation
> GPU selection
> training + Trackio monitoring
> job submission + cost estimation
when it's done, your model is on the Hub, ready to use

read more about the process: https://huggingface.co/blog/hf-skills-training
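For reference, the kind of training run described above can be sketched directly with TRL's SFTTrainer. This is a minimal sketch, not the exact job configuration Claude submits; the hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of the training run described above, using TRL's SFTTrainer.
# Hyperparameters are illustrative assumptions, not the exact job config.
model_name = "Qwen/Qwen3-0.6B"
dataset_name = "open-r1/codeforces-cots"

training_kwargs = {
    "output_dir": "qwen3-0.6b-codeforces-cots",
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "num_train_epochs": 1,
    "push_to_hub": True,  # when done, the model lands on the Hub
}

def main():
    # Imports kept inside main() so the sketch reads without trl installed;
    # a real training script would import at the top.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset(dataset_name, split="train")
    trainer = SFTTrainer(
        model=model_name,            # TRL accepts a Hub model id here
        train_dataset=dataset,
        args=SFTConfig(**training_kwargs),
    )
    trainer.train()

# main()  # uncomment to launch (requires trl, a GPU, and network access)
```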
sergiopaniego posted an update 1 day ago
We just released TRL v0.26.0!

It comes packed with updates:
> Agent training with tools in GRPO
> New CISPO & SAPO losses + reasoning rewards
> vLLM quantization in colocate mode
> Dataset shuffling in SFT
> Lots of NEW examples
> Tons of fixes and documentation improvements
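To give a feel for the GRPO side of the release, here is a minimal sketch of a GRPOTrainer setup with a custom reward function. The reward function is a toy example (it prefers short completions), and the dataset choice is an illustrative assumption.

```python
# Sketch of a GRPO setup in the spirit of the release notes above.
# The reward function is a toy example; the dataset is an illustrative choice.

def reward_short(completions, **kwargs):
    """Toy reward: 1.0 for completions under 200 characters, else 0.0."""
    return [1.0 if len(c) < 200 else 0.0 for c in completions]

def main():
    # Imports kept inside main() so the sketch reads without trl installed.
    from datasets import load_dataset
    from trl import GRPOConfig, GRPOTrainer

    trainer = GRPOTrainer(
        model="Qwen/Qwen3-0.6B",
        reward_funcs=reward_short,  # one or more callables scoring completions
        train_dataset=load_dataset("trl-lib/tldr", split="train"),
        args=GRPOConfig(output_dir="qwen3-grpo"),
    )
    trainer.train()

# main()  # uncomment to launch (requires trl, a GPU, and network access)
```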

sergiopaniego posted an update 2 days ago
sergiopaniego posted an update 6 days ago
Want to get started with fine-tuning but don't know where to begin? 🤓☝️

We're expanding our collection of beginner-friendly free Colab notebooks so you can learn and fine-tune models using TRL at no cost.

🔬 Check out the full list of free notebooks: https://huggingface.co/docs/trl/main/en/example_overview#notebooks

🔬 If you want more advanced content, we also have a lot to cover in the community tutorials: https://huggingface.co/docs/trl/community_tutorials

And now the obvious question: what would you like us to add next?
sergiopaniego posted an update 8 days ago
NEW: @mistralai released a fantastic family of multimodal models, Ministral 3.

You can fine-tune them for free on Colab using TRL ⚡️, supporting both SFT and GRPO

Link to the notebooks:
- SFT: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_ministral3_vl.ipynb
- GRPO: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_ministral3_vl.ipynb
- TRL and more examples: https://huggingface.co/docs/trl/index
sergiopaniego posted an update 9 days ago
sergiopaniego posted an update 10 days ago
Want to use open models easily through an API?

Inference Providers might be exactly what you're looking for, so here's a complete beginner-friendly walkthrough 🧐

https://www.youtube.com/watch?v=oxwsizy1Spw
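As a taste of what the walkthrough covers, here is a minimal sketch of calling an open model through Inference Providers with huggingface_hub's InferenceClient. The model id below is an illustrative choice, not one named in the post; any model served by a provider works the same way.

```python
# Minimal sketch of a chat request via Inference Providers.
# The model id is an illustrative assumption.
messages = [
    {"role": "user", "content": "Explain Inference Providers in one sentence."},
]

def main():
    # Import kept inside main() so the sketch reads without the package installed.
    from huggingface_hub import InferenceClient

    client = InferenceClient()  # picks up your HF token from the environment
    response = client.chat_completion(
        messages=messages,
        model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model choice
        max_tokens=100,
    )
    print(response.choices[0].message.content)

# main()  # uncomment to send a real request (requires an HF token)
```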
sergiopaniego posted an update 14 days ago
nanochat is now in transformers!

The LLM by @karpathy is officially in the library, and we wrote a blog covering how we ported the model, how it differs from the original, and how to run or train it.

Go read it 🤓

nanochat-students/transformers
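Once a model is in transformers, running it usually comes down to a few lines with the text-generation pipeline. A minimal sketch follows; note that the checkpoint id below is a placeholder, since the post does not name one — see the linked blog for the real ids.

```python
# Sketch of running a ported model with transformers' text-generation pipeline.
# NOTE: the checkpoint id below is a placeholder, not a real repo; the post
# does not give the exact id, so check the linked blog for real checkpoints.
prompt = "Hello, nanochat!"

def main():
    # Import kept inside main() so the sketch reads without transformers installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model="nanochat-students/...")  # placeholder id
    print(generator(prompt, max_new_tokens=50)[0]["generated_text"])

# main()  # uncomment once you fill in a real checkpoint id from the blog
```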
sergiopaniego posted an update 16 days ago
sergiopaniego posted an update 17 days ago
sergiopaniego posted an update 21 days ago
sergiopaniego posted an update 22 days ago
We've just added several example scripts to TRL showing how to train models with GRPO using some of the new OpenEnv environments.

Train a model to interact with a browser (🎮 BrowserGym Env), play Wordle (🎮 Wordle Env), and more!

TRL (GRPO + vLLM) + OpenEnv! ⚡️

Go play with them: https://github.com/huggingface/trl/tree/main/examples/scripts/openenv

Examples list: https://huggingface.co/docs/trl/main/en/example_overview#scripts
sergiopaniego posted an update 24 days ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 2 months ago
Meet OpenEnv 👋, an open ecosystem of environments for intelligent agents. Build, share, and test agents safely and consistently.

Ideal for training with TRL (we include examples 🤓), deployment, and community collaboration via the HF Hub.

Blog: https://huggingface.co/blog/openenv
Hub for Environments: openenv
OpenEnv repo: https://github.com/meta-pytorch/OpenEnv
Try it out using TRL: https://huggingface.co/docs/trl/main/en/openenv
andito posted an update about 2 months ago
Finally, our new paper is out: "FineVision: Open Data Is All You Need"! 🥳
FineVision: Open Data Is All You Need (2510.17269)

If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box that makes replicating SOTA work impossible.
We wanted to change that.

FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.

In the paper, we share how we built it:
๐Ÿ” finding and cleaning data at scale
๐Ÿงน removing excessive duplicates across sources
๐Ÿค— decontaminating against 66 public benchmarks

My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets.
NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!

🎉 To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset! 👉 HuggingFaceM4/FineVision_full_shuffled

It's ready to stream, so you can start training your own models right away:

from datasets import load_dataset

# Stream the dataset instead of downloading all of it up front
d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)
print(next(iter(d)))  # inspect the first sample

A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!