L2T 500M Random

An English causal language model (~500M parameters) based on the Qwen2 architecture, released by the l2t-project organization in Safetensors format.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model weights and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("l2t-project/l2t-500m-random")
tokenizer = AutoTokenizer.from_pretrained("l2t-project/l2t-500m-random")
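
Once loaded, the model can be used for standard text generation via the Transformers `generate` API. A minimal sketch follows; the prompt, decoding settings, and the helper function name `generate_text` are illustrative, not part of the official release:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "l2t-project/l2t-500m-random"

def generate_text(prompt: str, max_new_tokens: int = 50) -> str:
    """Generate a greedy-decoded continuation of `prompt`.

    Loads the model and tokenizer on each call for simplicity;
    cache them in real use.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    # Tokenize the prompt into PyTorch tensors
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding; adjust max_new_tokens or enable sampling as needed
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_text("Language models learn"))
```

Since the checkpoint is stored in BF16, loading it as-is keeps memory use modest (roughly 1 GB of weights for 0.4B parameters).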

Citation

@article{yamaguchi2026enhancinglinguisticcompetencelanguage,
  title={Enhancing Linguistic Competence of Language Models through Pre-training with Language Learning Tasks},
  author={Atsuki Yamaguchi and Maggie Mi and Nikolaos Aletras},
  year={2026},
  eprint={2601.03448},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.03448}
}
Model size: 0.4B params
Tensor type: BF16
