Cédric (cedricbonhomme)
24 followers · 54 following
https://www.cedricbonhomme.org
AI & ML interests
AI/NLP for Vulnerability Management. Multi-Agent Systems.
Recent Activity
reacted to IlyasMoutawwakil's post with 🚀 about 15 hours ago:
After 2 months of refinement, I'm happy to announce that a lot of Transformers' modeling code is now significantly more torch.compile- and export-friendly 🔥

Why it had to be done 👇 PyTorch's Dynamo compiler is increasingly becoming the default interoperability layer for ML systems. Anything that relies on torch.export or torch.compile, from model optimization to cross-framework integrations, benefits directly when models can be captured as a single Dynamo-traced graph!

Transformers models are now easier to:
⚙️ Compile end-to-end with torch.compile backends
📦 Export reliably via torch.export and torch.onnx.export
🚀 Deploy to ONNX / ONNX Runtime, Intel Corporation's OpenVINO, NVIDIA AutoDeploy (TRT-LLM), AMD's Quark, Meta's Executorch, and more hardware-specific runtimes.

This work aims to unblock entire TorchDynamo-based toolchains that rely on exporting Transformers across runtimes and accelerators. We are doubling down on Transformers' commitment to being a first-class citizen of the PyTorch ecosystem: more exportable, more optimizable, and easier to deploy everywhere.

There are definitely some edge cases we still haven't addressed, so don't hesitate to try compiling / exporting your favorite transformers and to open issues / PRs. PR in the comments! More updates coming soon!
reacted to IlyasMoutawwakil's post with 🔥 about 15 hours ago (same post as above)
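The post above centers on two entry points: end-to-end compilation with torch.compile and graph capture with torch.export. Below is a minimal sketch of both paths, assuming an arbitrary small Transformers checkpoint (distilbert-base-uncased-finetuned-sst-2-english); the model choice and inputs are illustrative and not taken from the post.

```python
# Minimal sketch (not from the original post): compile a Transformers model with
# torch.compile and capture it as a single Dynamo-traced graph with torch.export.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: any small Hugging Face checkpoint works here; this one is just an example.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

inputs = tokenizer("torch.compile makes this faster", return_tensors="pt")

# 1) End-to-end compilation with a torch.compile backend (default: Inductor).
compiled_model = torch.compile(model)
with torch.no_grad():
    logits = compiled_model(**inputs).logits
print(logits.shape)

# 2) Graph capture via torch.export; per the post, some edge cases may still fail.
exported = torch.export.export(
    model,
    args=(),
    kwargs={
        "input_ids": inputs["input_ids"],
        "attention_mask": inputs["attention_mask"],
    },
)
print(exported.graph_module)  # inspect the captured FX graph
```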
updated a model about 15 hours ago: CIRCL/vulnerability-severity-classification-roberta-base
cedricbonhomme's datasets (4), sorted by recently updated
cedricbonhomme/vulnerability-cwe-patch-2 • Viewer • Updated Oct 11, 2025 • 4.49k • 6
cedricbonhomme/vulnerability-scores • Viewer • Updated May 16, 2025 • 200 • 3
cedricbonhomme/tinyTiny • Viewer • Updated Feb 25, 2025 • 200 • 1
cedricbonhomme/tiny • Viewer • Updated Feb 25, 2025 • 2k • 6