Fine-tuning a multilayer perceptron using LoRA and 🤗 PEFT


PEFT supports fine-tuning any type of model as long as the layers being used are supported. For instance, the model does not need to be a Transformers model at all. To demonstrate this, the accompanying notebook multilayer_perceptron_lora.ipynb shows how to apply LoRA to a simple multilayer perceptron and use it to train the model on a classification task.