Phi-3.5-mini-instruct fine-tuned on F*

This model is a LoRA fine-tuned version of Microsoft's Phi-3.5-mini-instruct, specialized for generating F* code, including function definitions, lemmas, and formal proofs. In their paper "Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming", Chakraborty et al. (2024) use Phi-2, among other LLMs, to evaluate their dataset.
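Because this repository contains a LoRA adapter rather than full model weights, it is loaded on top of the base Phi-3.5-mini-instruct checkpoint. Below is a minimal inference sketch using transformers and peft; the prompt wording and generation settings are illustrative assumptions, not the exact setup used during evaluation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "dassarthak18/phi3.5-fstar-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # may be required depending on the transformers version
)
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt; Phi-3.5 is an instruct model, so we use its chat template.
messages = [
    {"role": "user", "content": "Write an F* lemma stating that list append is associative."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```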

Model Details

The model was fine-tuned on the Singularity HPC Cluster at IIT Gandhinagar using a single 48GB NVIDIA L40S GPU. The fine-tuning scripts and console logs are available in the scripts directory of the repository.
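The exact hyperparameters are recorded in the training scripts and logs; the following is only a minimal sketch of how a LoRA fine-tune of this kind is typically set up with peft. The rank, alpha, dropout, and target modules shown here are assumed illustrative values, not the settings actually used.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# Illustrative LoRA hyperparameters; the actual values are in the scripts directory.
lora_config = LoraConfig(
    r=16,                     # low-rank dimension of the adapter matrices
    lora_alpha=32,            # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```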

Evaluation

To examine the TensorBoard logs for further training and evaluation details, use the following commands:

```bash
pip install tensorboard
tensorboard --logdir ./logs
```

Testing Data & Metrics

  • Test Loss: 0.9387
  • Test Perplexity: 2.56
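
As a sanity check, the reported perplexity is simply the exponential of the mean cross-entropy test loss, as the snippet below verifies:

```python
import math

# Perplexity is the exponential of the mean cross-entropy test loss.
print(math.exp(0.9387))  # about 2.556, consistent with the reported 2.56
```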