---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: peft
model_name: typescript-slm-7b
tags:
- typescript
- code-generation
- react
- nextjs
- angular
- nodejs
- lora
- sft
- 7b
- transformers
- trl
license: mit
pipeline_tag: text-generation
language:
- en
---

# TypeScript SLM 7B

Standard 7B model for TypeScript code generation, optimized for React, Next.js, Angular, and Node.js.

## Model Details

- **Base Model**: [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
- **Model Size**: 7B parameters
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Context Length**: 2048 tokens
- **LoRA Rank**: 64
- **Training Dataset**: 5,000 high-quality TypeScript samples

## Training Configuration

- Batch Size: 2
- Gradient Accumulation: 16
- Effective Batch Size: 32 (2 × 16)
- Learning Rate: 0.0001
- Epochs: 3
- Hardware: Google Colab A100 (40 GB)

A hedged code sketch of this setup appears under "Reproducing the Training Setup" below.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "Qwen/Qwen2.5-Coder-7B-Instruct"

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, "sylvester-francis/typescript-slm-7b")

# Generate code
prompt = "Write a React component with TypeScript:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the base model is instruction-tuned, chat-style prompting may produce cleaner output; see "Chat-Style Prompting" below.

## Repository

https://github.com/sylvester-francis/slm-typescript-model
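
## Reproducing the Training Setup

The actual training code lives in the repository above. As a minimal sketch of how the hyperparameters listed under Training Configuration could map onto `peft` and TRL's `SFTTrainer`: the LoRA alpha and dropout values, the dataset file name, and its JSONL format below are assumptions, not details from this card.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# LoRA rank 64 per the card; alpha and dropout are assumptions
peft_config = LoraConfig(
    r=64,
    lora_alpha=128,     # assumption: not stated on the card
    lora_dropout=0.05,  # assumption: not stated on the card
    task_type="CAUSAL_LM",
)

# Hyperparameters taken from the Training Configuration section
training_args = SFTConfig(
    output_dir="typescript-slm-7b",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,  # effective batch size: 2 * 16 = 32
    learning_rate=1e-4,
    num_train_epochs=3,
    max_length=2048,  # context length; named max_seq_length in older TRL releases
)

# Hypothetical stand-in for the 5,000-sample TypeScript dataset
dataset = load_dataset("json", data_files="typescript_samples.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=training_args,
)
trainer.train()
```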
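
## Chat-Style Prompting

The base model ships with a chat template, so wrapping the request in it may yield cleaner completions than the raw prompt shown under Usage. A minimal sketch, continuing from that snippet (`model` and `tokenizer` are already loaded; the prompt text is just an example):

```python
# Format the request with the tokenizer's built-in chat template
messages = [
    {"role": "user", "content": "Write a typed React hook that debounces a value."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```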