---
tags:
- gguf
- llama.cpp
- unsloth
license: mit
datasets:
- Yusiko/lia_dataset
language:
- az
- en
base_model:
- meta-llama/Llama-3.1-8B
---

# 🤓LIA (Llama 3.1 8B — Finetuned with Unsloth)

A finetuned Llama 3.1 8B model specialized for Local Intelligent Agent (LIA) intent parsing and local file/system actions. The model converts user requests into compact JSON that LIA executes safely.
To download LIA and for more detailed information, see the link below.
- https://github.com/Yusiko99/LIA
## Overview🧐
- Base: Llama 3.1 8B Instruct
- Method: Unsloth SFT (LoRA), merged for deployment
- Dataset: Custom, user-created (intent pairs)
- Output: Raw JSON only (no markdown), with keys: command_type, parameters, reasoning
- Primary goal: Deterministic intent parsing for desktop automation

## 😎Purpose and Tasks
- Parses file/folder operations: open, list, create, write, read, delete, copy, move, rename
- Interprets glob patterns (e.g., `*.pdf`) and paths
- Falls back safely to the `chat` intent when the request is not a file operation
- Produces stable JSON without code fences or extra prose

Example output:
```json
{
  "command_type": "list_files",
  "parameters": {"path": "Downloads", "pattern": "*.pdf"},
  "reasoning": "User wants to list PDFs in Downloads"
}
```
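When integrating the model, it helps to validate that a response really is raw JSON with the three expected keys before acting on it. A minimal sketch (the helper name and the fence-stripping fallback are illustrative, not part of LIA itself):

```python
import json

EXPECTED_KEYS = {"command_type", "parameters", "reasoning"}

def parse_intent(raw: str) -> dict:
    """Parse a model response into an intent dict, validating the schema."""
    text = raw.strip()
    # Defensive fallback: strip markdown fences if the model ever emits them.
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    intent = json.loads(text)
    missing = EXPECTED_KEYS - intent.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return intent
```

Rejecting responses with missing keys early keeps the executor from acting on malformed intents.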

## 😲Differences vs Original Llama 3.1 8B
- More consistent JSON-only answers for intent parsing
- Lower hallucination rate on file/command names
- Better handling of short/telegraphic commands
- Tuned for low temperature decoding (0.1–0.3)
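The low-temperature setting can be passed through the `options` field of Ollama's `/api/generate` endpoint. A sketch, assuming a local Ollama server on the default port (the function names are illustrative):

```python
import json
import urllib.request

def build_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build a payload for Ollama's /api/generate endpoint."""
    return {
        "model": "hf.co/Yusiko/LIA:Q4_K_M",
        "prompt": prompt,
        "stream": False,
        # Keep temperature in the tuned 0.1-0.3 range for stable JSON.
        "options": {"temperature": temperature},
    }

def generate(prompt: str) -> str:
    """POST the payload to a local Ollama server (assumed at :11434)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```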

## Training (Unsloth)
- LoRA-based SFT on user dataset (input → JSON output pairs)
- Chat template aligned with Llama 3.1
- System prompt stresses: “Return raw JSON only”
- Adapters merged to a full checkpoint for serving

## Quick start (Ollama)
```bash
ollama run hf.co/Yusiko/LIA:Q4_K_M
```
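Once a response is parsed into an intent, it can be dispatched to local handlers, with `chat` as the safe default for unknown commands. A minimal sketch of this dispatch pattern (handler names are illustrative; LIA's own implementation lives in the repository):

```python
from pathlib import Path

def list_files(path: str, pattern: str = "*") -> list:
    # Hypothetical handler: glob within the given directory.
    return sorted(p.name for p in Path(path).glob(pattern))

def chat(**_kwargs) -> str:
    # Safe fallback for anything that is not a file operation.
    return "(handled as a chat turn)"

HANDLERS = {"list_files": list_files, "chat": chat}

def execute(intent: dict):
    """Dispatch a parsed intent, falling back to chat for unknown commands."""
    handler = HANDLERS.get(intent["command_type"], chat)
    return handler(**intent["parameters"])
```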

## 📃License and Credits
- Base: Meta Llama 3.1 8B Instruct (use is subject to the base model's license)
- Finetuning: Unsloth
- Packaging: Ollama
- LIA is released under the MIT license

For questions or integration help, open an issue on the repository.