---
library_name: transformers
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-1.5B
pipeline_tag: text-generation
tags:
- code
---
# 🧩 Argo — Fine-Tuned Qwen2.5-Coder-1.5B for JavaScript, React, and Node.js

**Model name:** jamesmeike/argo
**Base model:** Qwen/Qwen2.5-Coder-1.5B
**Language(s):** JavaScript (React, Node.js, Express, JSX, TSX)
**Frameworks:** 🤗 Transformers, PEFT (LoRA), BitsAndBytes (8-bit quantization)
---
## 🧠 Model Overview

Argo is a fine-tuned version of Qwen2.5-Coder-1.5B, optimized for modern JavaScript development, including React, Node.js, Express, and Next.js codebases.
It aims to generate high-quality, framework-aware code completions, refactors, and full function templates while maintaining code style and syntax accuracy.
---
## 🎯 Objective

The model was trained to:

- Autocomplete and generate React components.
- Build Express.js/Node.js APIs.
- Assist in TypeScript and JavaScript module creation.
- Write utility functions and middleware templates.
- Understand JSX and TSX patterns.
---
## ⚙️ Technical Details

| Category | Details |
| --- | --- |
| Base Model | Qwen/Qwen2.5-Coder-1.5B |
| Fine-Tuning Framework | PEFT + LoRA |
| Precision | 8-bit quantization (bitsandbytes) |
| Training Duration | 5 epochs |
| Batch Size | 1 (gradient_accumulation_steps = 16) |
| Dataset | Filtered subset of Nan-Do/code-search-net-javascript |
| Total Samples | ~3,000 |
| Tokenizer | AutoTokenizer (from base model) |
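For reference, the hyperparameters above can be gathered into a plain config sketch. This is an illustrative snippet, not the original training script; it also shows how the effective batch size follows from batch size × gradient accumulation.

```python
# Illustrative config mirroring the table above; not the original training script.
training_config = {
    "num_train_epochs": 5,             # Training Duration
    "per_device_train_batch_size": 1,  # Batch Size
    "gradient_accumulation_steps": 16,
    "max_seq_length": 512,             # token limit used during tokenization
}

# Effective batch size seen by the optimizer per update step
effective_batch = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(effective_batch)  # 16
```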
---
## 🧾 Training Process

The dataset was filtered to JavaScript samples containing any of these keywords:

> react, jsx, tsx, express, node, nextjs

Each entry was tokenized with a maximum length of 512 tokens. Training was performed with the Hugging Face Transformers `Trainer`, using LoRA fine-tuning and 8-bit quantization for GPU efficiency.
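The filtering step above can be sketched as a simple keyword check. In the real pipeline this would run over the Hugging Face dataset rows; here it operates on raw code strings for illustration, and the sample strings are invented:

```python
# Minimal sketch of the keyword filter described above (illustrative only).
KEYWORDS = ("react", "jsx", "tsx", "express", "node", "nextjs")

def keep_sample(code: str) -> bool:
    """Keep a sample if it mentions any of the target framework keywords."""
    lowered = code.lower()
    return any(kw in lowered for kw in KEYWORDS)

samples = [
    "const app = require('express')();",     # kept: mentions "express"
    "function add(a, b) { return a + b; }",  # dropped: no keyword match
]
filtered = [s for s in samples if keep_sample(s)]
print(len(filtered))  # 1
```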
After training, the LoRA weights were merged into the base Qwen2.5-Coder-1.5B model:

```python
# Fold the LoRA adapter weights into the base model for standalone inference
model = model.merge_and_unload()
```
---
## 🧪 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jamesmeike/argo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "// Create a simple Express.js API that returns 'Hello Argo!'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## 🧩 Example Output

```javascript
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello Argo!");
});

app.listen(3000, () => console.log("Server running on port 3000"));
```
---
## ⚠️ Limitations

- May produce incomplete or partially correct code in long generations.
- Not tested for security vulnerabilities or dependency management.
- May occasionally hallucinate imports or libraries.
- Intended for educational and research purposes.
---
## 📚 Intended Use

Argo is designed for:

- JavaScript/React/Node.js code generation.
- Developer assistance in code completion.
- An educational fine-tuning reference for Qwen2.5-Coder models.

It is not intended for production-critical or private code generation.
---
## 🏷️ Citation

If you use this model, please cite:

```bibtex
@misc{jamesmeike_argo_2025,
  title  = {Argo: A Fine-Tuned Qwen2.5-Coder-1.5B Model for JavaScript and React},
  author = {Jamesmeike},
  year   = {2025},
  url    = {https://huggingface.co/jamesmeike/argo}
}
```
---
## 🙌 Acknowledgements

- Qwen for Qwen2.5-Coder-1.5B
- Hugging Face for Transformers, PEFT, and Datasets
- Nan-Do/code-search-net-javascript for the open dataset