---
language:
- en
tags:
- agriculture
- farming
- qa
- lora
- peft
- qwen
license: mit
datasets:
- shchoi83/agriQA
base_model: Qwen/Qwen1.5-1.8B-Chat
---
# 🌾 AgriQA Assistant
An agricultural expert assistant fine-tuned from Qwen1.5-1.8B-Chat on the agriQA dataset using PEFT with LoRA.
## 🚀 Features
- **Clear, practical steps** you can apply directly in the field
- **Specific measurements and quantities** for accurate application
- **Safety precautions** when needed
- **Expert tips** for better results
- **Structured responses** with numbered steps
## 🔧 Technical Details
- **Base Model**: Qwen/Qwen1.5-1.8B-Chat
- **Fine-tuning Method**: PEFT with LoRA (Parameter-Efficient Fine-Tuning via Low-Rank Adaptation)
- **Dataset**: agriQA (agricultural Q&A pairs)
- **Training Data**: 50,000 samples with structured prompts
- **LoRA Rank**: 2
- **LoRA Alpha**: 4
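
At rank 2, the adapter is tiny relative to the base model. As a back-of-the-envelope sketch (the 2048 hidden size is assumed for Qwen1.5-1.8B, and which modules carry adapters is not stated on this card), the per-matrix parameter cost and scaling factor of a LoRA adapter work out as:

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """LoRA factorizes the weight update as B @ A, where
    A has shape (r, d_in) and B has shape (d_out, r)."""
    return r * (d_in + d_out)

def lora_scaling(alpha: int, r: int) -> float:
    """The low-rank update is scaled by alpha / r before being added."""
    return alpha / r

# Example: one 2048x2048 projection matrix at rank 2
print(lora_param_count(2048, 2048, 2))  # 8192 trainable parameters
print(lora_scaling(4, 2))               # 2.0
```

So each adapted 2048x2048 matrix contributes only 8,192 trainable parameters, versus ~4.2M in the frozen weight it modifies.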
## 🌱 Usage
### Direct Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B-Chat", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B-Chat", trust_remote_code=True)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "nada013/agriqa-assistant")
model.eval()
```
### Chat Format
```python
messages = [
{"role": "system", "content": "You are AgriQA, an agricultural expert assistant..."},
{"role": "user", "content": "How to control aphid infestation in mustard crops?"}
]
# Build the prompt and generate a response
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.3)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```
## 🎯 Response Format
The model provides structured responses:
1. **Direct answer** to the question
2. **Numbered step-by-step solution**
3. **Specific details** (measurements, quantities, product names)
4. **Safety precautions** if needed
5. **Extra tip or follow-up advice**
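
If you post-process model output, the numbered steps can be pulled out with a small parser. This is a minimal sketch, assuming the response uses plain `1.`-style numbering; the exact formatting may vary between generations:

```python
import re

def extract_steps(response: str) -> list[str]:
    """Extract numbered steps like '1. Do X' from a response."""
    pattern = re.compile(r"^\s*(\d+)[.)]\s+(.*)$", re.MULTILINE)
    return [text.strip() for _, text in pattern.findall(response)]

sample = """Aphids can be controlled with integrated measures.
1. Spray neem oil at 3 ml per litre of water.
2. Release ladybird beetles as biological control.
3. Avoid excess nitrogen fertilizer."""
print(extract_steps(sample))  # three step strings, numbering stripped
```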
## 💡 Example Questions
- "How to control aphid infestation in mustard crops?"
- "What fertilizer should I use for coconut plants?"
- "How to increase milk production in cows?"
- "What is the treatment for white diarrhoea in poultry?"
- "How to preserve potato tubers for 7-8 months?"
## ⚠️ Safety Note
Always follow safety guidelines when applying agricultural practices. The assistant provides general advice; consult local agricultural experts for region-specific recommendations.
## 📊 Training Details
- **Epochs**: 1
- **Learning Rate**: 5e-4
- **Batch Size**: 1 (with gradient accumulation)
- **Max Length**: 256 tokens
- **Optimizer**: AdamW with fused implementation
- **Hardware**: 8GB GPU with 4-bit quantization
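
The hyperparameters above roughly correspond to a configuration like the following sketch. The target modules, gradient-accumulation steps, and 4-bit quantization settings are assumptions, not the confirmed training code:

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization so the 1.8B base model fits in 8 GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA settings matching this card: rank 2, alpha 4
lora_config = LoraConfig(
    r=2,
    lora_alpha=4,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # assumed; actual modules not stated
)

training_args = TrainingArguments(
    output_dir="agriqa-assistant",
    num_train_epochs=1,
    learning_rate=5e-4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # assumed value
    optim="adamw_torch_fused",
)
```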
## 🤝 Contributing
This model is trained on the agriQA dataset. For improvements or questions, please refer to the original dataset source.
## 📄 License
This project uses the Qwen1.5-1.8B-Chat model and agriQA dataset. Please refer to their respective licenses for usage terms.
---
**Built with ❤️ for the agricultural community**