LoRA
Overview
From scratch implementation of LoRA
Technical Details
- Framework: PyTorch
- Dataset: TinyShakespeare
- Category: Fine-tuning
Implementation Details
I implemented LoRA from scratch in PyTorch and fine-tuned a model on the TinyShakespeare dataset.
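The core idea of LoRA is to freeze the pretrained weight W and learn a low-rank update, so the effective weight becomes W + (alpha/r) · B·A, where A and B are small trainable matrices of rank r. The sketch below illustrates this in plain NumPy (the repository itself uses PyTorch); the class name, shapes, and hyperparameters are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

# Minimal illustrative sketch of a LoRA-adapted linear layer (NumPy).
# Names and defaults (r=4, alpha=8) are hypothetical, not from the repo.
class LoRALinear:
    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for a loaded checkpoint).
        self.W = rng.standard_normal((out_features, in_features))
        # Trainable low-rank factors: A gets a small random init,
        # B is zero-initialized so training starts from the frozen model.
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # x: (batch, in_features) -> (batch, out_features)
        # Output of the frozen layer plus the scaled low-rank update.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(16, 8)
x = np.ones((2, 16))
y = layer.forward(x)
# Because B starts at zero, the adapted layer initially matches the frozen one.
assert np.allclose(y, x @ layer.W.T)
```

Zero-initializing B is what makes the adapter a no-op at the start of fine-tuning, so only the small A and B matrices need gradients while W stays fixed.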
Paper: LoRA: Low-Rank Adaptation of Large Language Models
Datasets
TinyShakespeare: in the /data folder
Frameworks:
PyTorch
Results (on a single A100 GPU)
- Training steps: 1000
- Validation: every 100 training steps
- Train loss: 3.51
- Val loss: 3.50
Source Code
GitHub Repository: LoRA
View the complete implementation, training scripts, and documentation on GitHub.