# GECToR Large 2024 (ONNX)
ONNX quantized version of Grammarly's GECToR 2024 model for browser-based grammatical error correction with Transformers.js.
## Original Model
- Source: Grammarly Pillars of GEC
- Paper: Pillars of Grammatical Error Correction (2024)
- Architecture: RoBERTa-Large + token classification head
- Parameters: ~355M
## Conversion Details
- Format: ONNX
- Quantization: INT8 (dynamic quantization)
- Size: ~350MB
- Converted by: Manual export from PyTorch
## How It Works
GECToR uses a token-classification approach: instead of generating corrected text, it predicts an edit operation for each token:
- `$KEEP` - keep the token unchanged
- `$DELETE` - remove the token
- `$REPLACE_word` - replace the token with a specific word
- `$APPEND_word` - append a word after the token
- `$TRANSFORM_*` - apply a transformation (case, verb form, etc.)
The model runs iteratively (typically 2-3 passes) until no more edits are predicted.
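To make the tagging scheme concrete, here is a minimal sketch of a single edit-application pass. It is illustrative only: the real model also resolves `$TRANSFORM_*` tags and operates on subword tokens, and `applyEdits` is a hypothetical helper, not part of the model or of Transformers.js.

```js
// Illustrative sketch of one edit-application pass (not the original
// GECToR implementation). Tags follow the scheme listed above.
function applyEdits(tokens, tags) {
  const out = [];
  for (let i = 0; i < tokens.length; i++) {
    const tag = tags[i];
    if (tag === '$DELETE') {
      // drop the token
    } else if (tag.startsWith('$REPLACE_')) {
      out.push(tag.slice('$REPLACE_'.length));
    } else if (tag.startsWith('$APPEND_')) {
      out.push(tokens[i], tag.slice('$APPEND_'.length));
    } else {
      // $KEEP; $TRANSFORM_* handling omitted for brevity
      out.push(tokens[i]);
    }
  }
  return out;
}

// Example: "He go to school yesterday." -> "He went to school yesterday."
applyEdits(
  ['He', 'go', 'to', 'school', 'yesterday', '.'],
  ['$KEEP', '$REPLACE_went', '$KEEP', '$KEEP', '$KEEP', '$KEEP']
);
```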
## Usage with Transformers.js
```js
import { pipeline } from '@huggingface/transformers';

const classifier = await pipeline(
  'token-classification',
  'YOUR_USERNAME/gector-large-2024',
  { dtype: 'q8' }
);

const result = await classifier('He go to school yesterday.');
// Returns per-token predictions with edit tags
```
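Because GECToR converges over multiple passes, a full correction loop re-tags the text and re-applies edits until only `$KEEP` is predicted. A hedged sketch, assuming the `applyEdits` helper from the previous section and that each prediction exposes the tag and token as `entity` and `word` (the usual token-classification output shape); real code would detokenize properly instead of joining with spaces:

```js
// Sketch of the iterative correction loop described above.
// `classifier` is the pipeline created in the previous snippet.
async function correct(text, maxPasses = 3) {
  for (let pass = 0; pass < maxPasses; pass++) {
    const predictions = await classifier(text);
    const tags = predictions.map((p) => p.entity);
    if (tags.every((t) => t === '$KEEP')) break; // converged, stop early
    const tokens = predictions.map((p) => p.word);
    text = applyEdits(tokens, tags).join(' '); // naive detokenization
  }
  return text;
}

console.log(await correct('He go to school yesterday.'));
```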
## Performance
The large variant offers the best accuracy among the GECToR variants, at the cost of the largest download (~350MB after quantization). Recommended for quality-critical applications.
## License
Apache 2.0 (following the original model's license)