Commit `4022a5a` (verified) by Stinger2311 · parent: `f0365cf`

Add LoRA model card

Files changed: `README.md` (+51)
---
language:
- en
license: apache-2.0
tags:
- peft
- lora
- qlora
- unsloth
- gemma4
- chemistry
- education
- conversational
pipeline_tag: text-generation
base_model:
- google/gemma-4-e2b-it
library_name: peft
---

# WhyBook Gemma 4 E2B LoRA

This repository contains the LoRA adapters for the WhyBook chemistry tutoring model.

## Base Model

- `google/gemma-4-e2b-it`

## Fine-Tuning Style

The adapter was trained to answer in a structured tutoring format:

- What it is
- Why it is in your textbook
- Where you will see it in real life
## Files

- `adapter_model.safetensors`
- `adapter_config.json`
- tokenizer and chat-template files

## Intended Use

Use this repo if you want to:

- load the adapter on top of the base Gemma 4 E2B model
- continue fine-tuning
- inspect the lightweight fine-tuned weights separately from the GGUF export
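Loading the adapter on top of the base model can be sketched with `transformers` and `peft`. The adapter repo id below is an assumption (this card does not state its own id); substitute the actual repository name:

```python
# Minimal sketch: attach the LoRA adapter to the base model with PEFT.
BASE_ID = "google/gemma-4-e2b-it"
ADAPTER_ID = "Stinger2311/whybook-gemma4-e2b-lora"  # hypothetical id, replace with this repo's id

def load_whybook_model(base_id=BASE_ID, adapter_id=ADAPTER_ID):
    """Load the base model, then apply the LoRA adapter weights on top."""
    # Heavy dependencies are imported lazily so the sketch stays importable.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(adapter_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```

For inference-only use, `model.merge_and_unload()` can fold the adapter into the base weights, which is also the usual first step before exporting to GGUF.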
## Related Repositories

- GGUF model: https://huggingface.co/Stinger2311/whybook-gemma4-e2b-gguf
- Dataset: https://huggingface.co/datasets/Stinger2311/whybook-chemistry-dataset