# 🧩 Argo – Fine-Tuned Qwen2.5-Coder-1.5B for JavaScript, React, and Node.js
**Model name:** jamesmeike/argo
**Base model:** Qwen/Qwen2.5-Coder-1.5B
**Language(s):** JavaScript (React, Node.js, Express, JSX, TSX)
**Frameworks:** 🤗 Transformers, PEFT (LoRA), BitsAndBytes (8-bit quantization)
## 🧠 Model Overview
Argo is a fine-tuned version of Qwen2.5-Coder-1.5B, optimized for modern JavaScript development, including React, Node.js, Express, and Next.js codebases. It aims to generate high-quality, framework-aware code completions, refactors, and full function templates while maintaining code style and syntax accuracy.
## 🎯 Objective
The model was trained to:
- Autocomplete and generate React components.
- Build Express.js/Node.js APIs.
- Assist in TypeScript or JavaScript module creation.
- Write utility functions and middleware templates.
- Understand JSX and TSX patterns.
## ⚙️ Technical Details
| Category | Details |
|---|---|
| Base Model | Qwen/Qwen2.5-Coder-1.5B |
| Fine-Tuning Framework | PEFT + LoRA |
| Precision | 8-bit (bitsandbytes) |
| Training Duration | 5 epochs |
| Batch Size | 1 (gradient_accumulation=16) |
| Dataset | Filtered subset of Nan-Do/code-search-net-javascript |
| Total Samples | ~3,000 |
| Tokenizer | AutoTokenizer (from base model) |
## 🧾 Training Process
The dataset was filtered for JavaScript samples containing:
`react`, `jsx`, `tsx`, `express`, `node`, `nextjs`
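A minimal sketch of this filtering step is shown below. The helper name and the dataset field it reads (`func_code_string`) are assumptions for illustration; the card does not specify which column was matched.

```python
# Hypothetical sketch of the keyword filter described above.
# The column name "func_code_string" is an assumption.
KEYWORDS = ("react", "jsx", "tsx", "express", "node", "nextjs")

def is_js_framework_sample(example, field="func_code_string"):
    """Keep samples whose code mentions any target framework keyword."""
    code = example.get(field, "").lower()
    return any(kw in code for kw in KEYWORDS)

# Usage with Hugging Face Datasets (commented out to avoid a download):
# from datasets import load_dataset
# ds = load_dataset("Nan-Do/code-search-net-javascript", split="train")
# ds = ds.filter(is_js_framework_sample)
```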
Each entry was tokenized with a maximum length of 512 tokens. Training was performed with the Hugging Face Transformers `Trainer`, using LoRA fine-tuning and 8-bit quantization for GPU efficiency.
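A rough configuration sketch of this setup follows. Only the epochs, batch size, gradient accumulation, and 8-bit loading come from this card; the LoRA rank, alpha, and target modules are illustrative assumptions, not the actual training values.

```python
# Illustrative configuration sketch; LoRA rank/alpha and target modules
# are assumptions. Epochs, batch size, accumulation, and 8-bit loading
# match the values stated in this card.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B",
    load_in_8bit=True,  # bitsandbytes 8-bit quantization
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="argo-lora",
    num_train_epochs=5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
)
```

This configuration fragment would then be passed to `Trainer` along with the tokenized dataset.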
After training, the LoRA weights were merged into the base Qwen2.5-Coder-1.5B model using:

```python
model = model.merge_and_unload()
```
## 🧪 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jamesmeike/argo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "// Create a simple Express.js API that returns 'Hello Argo!'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## 🧩 Example Output
```javascript
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello Argo!");
});

app.listen(3000, () => console.log("Server running on port 3000"));
```
## ⚠️ Limitations
- May produce incomplete or partially correct code in long generations.
- Not tested for security vulnerabilities or dependency management.
- May occasionally hallucinate imports or libraries.
- Intended for educational and research purposes.
## 🚀 Intended Use
Argo is designed for:
- JavaScript/React/Node.js code generation.
- Developer assistance in code completion.
- An educational reference for fine-tuning small Qwen code models.

Not intended for production-critical or private code generation.
## 🏷️ Citation
If you use this model, please cite:
```bibtex
@misc{argo2025,
  title  = {Argo: A Fine-Tuned Qwen2.5-Coder-1.5B Model for JavaScript and React},
  author = {Jamesmeike},
  year   = {2025},
  url    = {https://huggingface.co/jamesmeike/argo}
}
```
## 🙏 Acknowledgements
- Qwen for Qwen2.5-Coder-1.5B
- Hugging Face for Transformers, PEFT, and Datasets
- Nan-Do/code-search-net-javascript for the open dataset