This repo contains a low-rank adapter (LoRA) for LLaMA-7B trained on a translated version of the Stanford Alpaca dataset. The model was fine-tuned for the Polish language. To run it, see its GitHub repo. The translated Stanford Alpaca dataset is available here.
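Alpaca-style adapters are usually prompted with the Stanford Alpaca instruction template. As a minimal sketch, the helper below builds that template; the English wording is the original Alpaca one, and it is an assumption that this Polish fine-tune expects it rather than a translated variant (the `build_prompt` name is also illustrative, not from this repo).

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Build a Stanford-Alpaca-style prompt.

    Assumption: the adapter was trained with the original English
    template; the Polish fine-tune may instead use translated wording.
    """
    if input_text:
        # Variant with an additional input field for context.
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Variant without an input field.
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's completion is then generated after the trailing `### Response:` marker.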


Dataset used to train Lbuk/alpaca-koza-7b