🧩 Argo: Fine-Tuned Qwen2.5-Coder-1.5B for JavaScript, React, and Node.js

Model name: jamesmeike/argo
Base model: Qwen/Qwen2.5-Coder-1.5B
Language(s): JavaScript (React, Node.js, Express, JSX, TSX)
Frameworks: 🤗 Transformers, PEFT (LoRA), BitsAndBytes (16-bit precision)


🧠 Model Overview

Argo is a fine-tuned version of Qwen2.5-Coder-1.5B, optimized for modern JavaScript development, including React, Node.js, Express, and Next.js codebases. It aims to generate high-quality, framework-aware code completions, refactors, and full function templates while maintaining code style and syntax accuracy.


🎯 Objective

The model was trained to:

Autocomplete and generate React components.

Build Express.js/Node.js APIs.

Assist in TypeScript or JavaScript module creation.

Write utility functions and middleware templates.

Understand JSX and TSX patterns.


βš™οΈ Technical Details

Base Model: Qwen/Qwen2.5-Coder-1.5B
Fine-Tuning Framework: PEFT + LoRA
Precision: 16-bit (bitsandbytes)
Training Duration: 5 epochs
Batch Size: 1 (gradient_accumulation=16)
Dataset: Filtered subset of Nan-Do/code-search-net-javascript
Total Samples: ~3,000
Tokenizer: AutoTokenizer (from base model)
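The hyperparameters above can be summarized in a small sketch. The author's exact training configuration is not published, so the values below are taken from the table or assumed; the effective-batch calculation follows directly from the batch size and accumulation settings.

```python
# Hedged sketch of the training configuration from the table above; every
# value is taken from the card or is an assumption, not the author's code.
config = {
    "num_train_epochs": 5,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 16,
    "max_seq_length": 512,   # token limit used during tokenization
    "fp16": True,            # 16-bit precision per the table
}

# With batch size 1 and 16 accumulation steps, the optimizer updates once
# every 16 samples, giving an effective batch size of 16.
effective_batch = (
    config["per_device_train_batch_size"] * config["gradient_accumulation_steps"]
)
print(effective_batch)  # 16
```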


🧾 Training Process

The dataset was filtered for JavaScript samples containing:

react, jsx, tsx, express, node, nextjs

Each entry was tokenized with a maximum length of 512 tokens. Training was performed using Hugging Face Transformers Trainer with LoRA fine-tuning and 8-bit quantization for GPU efficiency.
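The keyword filter described above can be sketched as follows. The actual filtering code is not published; this reproduces the listed keywords with a simple case-insensitive substring match, which is one plausible implementation.

```python
# Illustrative sketch of the dataset filter described above; the keyword
# list comes from the card, the matching logic is an assumption.
KEYWORDS = ("react", "jsx", "tsx", "express", "node", "nextjs")

def keep_sample(code: str) -> bool:
    """Return True if a JavaScript sample mentions any target keyword."""
    lowered = code.lower()
    return any(kw in lowered for kw in KEYWORDS)

print(keep_sample('const app = require("express")();'))      # True
print(keep_sample("function add(a, b) { return a + b; }"))   # False
```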

After training, the LoRA weights were merged into the base Qwen2.5-Coder-1.5B model using:

model = model.merge_and_unload()


🧪 Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jamesmeike/argo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "// Create a simple Express.js API that returns 'Hello Argo!'"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))


🧩 Example Output

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello Argo!");
});

app.listen(3000, () => console.log("Server running on port 3000"));


⚠️ Limitations

May produce incomplete or partially correct code in long generations.

Not tested for security vulnerabilities or dependency management.

May occasionally hallucinate imports or libraries.

Intended for educational and research purposes.


📚 Intended Use

Argo is designed for:

JavaScript/React/Node.js code generation.

Developer assistance in code completion.

Educational fine-tuning reference for Qwen2.5-Coder models.

Not intended for production-critical or private code generation.


🏷️ Citation

If you use this model, please cite:

@misc{jamesmeike/argo,
  title  = {Argo: A Fine-Tuned Qwen2.5-Coder-1.5B Model for JavaScript and React},
  author = {Jamesmeike},
  year   = {2025},
  url    = {https://huggingface.co/jamesmeike/argo}
}


🙌 Acknowledgements

Qwen for Qwen2.5-Coder-1.5B

Hugging Face for Transformers, PEFT, and Datasets

Nan-Do/code-search-net-javascript for the open dataset
