GoodDavid committed
Commit 8629504 · verified · 1 Parent(s): 2ee5a24

Update README.md


Offline AI 2.0 – EuroLLM-9B-Q8_0 (GGUF)

Privacy-first AI. Fully local. No cloud.

Offline AI 2.0 distributes a Q8_0-quantized GGUF version of EuroLLM-9B for efficient offline inference using llama.cpp.

This release is part of the OfflineAI.online project and represents Version 2.0 — evolving from a simple offline model runner into a lightweight private AI workspace.

What This Is:
- Quantized distribution of EuroLLM-9B
- GGUF format (llama.cpp compatible)
- Optimized for local execution
- No fine-tuning applied
- No cloud dependency

Everything runs locally on your machine.

Requirements:
- ~16 GB RAM recommended
- llama.cpp or a compatible GGUF runtime
- macOS or Windows

License:
- Base model: EuroLLM-9B — Apache License 2.0
- Runtime: llama.cpp — MIT License
- Offline AI wrapper/interface: © David Káninský

Project:
- Website: https://OfflineAI.online

Digital independence.
Local-first computing.
AI without surveillance economics.

Files changed (1): README.md (+129, −3)
---
language:
- cs
- sk
- en
- de
license: apache-2.0
base_model: EuroLLM-9B
quantization: Q8_0
tags:
- gguf
- llama.cpp
- offline
- local-ai
- multilingual
pipeline_tag: text-generation
library_name: llama.cpp
---

# Offline AI 2.0 – EuroLLM-9B-Q8_0 (GGUF)

Offline AI 2.0 is the next evolution of the OfflineAI.online project.

Version 1 proved a simple idea: AI can run completely offline.
No cloud. No tracking. No data collection.

Version 2.0 expands this concept into a lightweight private AI workspace designed for independent work, experimentation, and digital sovereignty.

Everything runs locally.
No internet connection required.
No data leaves your device.

This repository distributes a **quantized GGUF Q8_0 variant of the EuroLLM-9B model** for efficient offline inference via llama.cpp.
The original model weights are unmodified and not fine-tuned as part of this project.
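
Once the `.gguf` file is downloaded, a quick header check can confirm it is a GGUF container rather than a truncated or corrupted download. A minimal sketch in Python, relying only on the documented GGUF magic bytes; the filename in the comment is illustrative, not a guaranteed path in this repo:

```python
import struct

def looks_like_gguf(path: str) -> bool:
    """Cheap sanity check: a GGUF file begins with the 4-byte magic
    b'GGUF' followed by a little-endian uint32 format version."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False
    (version,) = struct.unpack("<I", header[4:8])
    return version >= 1

# Hypothetical filename -- substitute the .gguf file shipped in this repo:
# looks_like_gguf("EuroLLM-9B-Q8_0.gguf")
```

This only inspects the header; llama.cpp itself validates the full tensor layout when it loads the file.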

---

## 🖥️ PLATFORM

macOS / Windows

- No installation required
- No external dependencies
- No account needed
- Launch and use

(Platform-specific instructions remain the same as in Version 1.)

---

## 🔧 TECHNICAL INFORMATION

Base model: EuroLLM-9B (quantized to Q8_0 for offline execution)
Format: GGUF (llama.cpp compatible)
Runtime: llama.cpp
Offline AI Version: 2.0
Recommended RAM: 16 GB
Platforms: macOS, Windows
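
The 16 GB figure can be sanity-checked with back-of-envelope arithmetic. GGUF Q8_0 stores weights in blocks of 32 (one 2-byte fp16 scale plus 32 int8 values, i.e. 34 bytes per 32 weights), so a roughly 9-billion-parameter model needs about 9.6 GB for the weights alone; the parameter count below is an approximation:

```python
# Rough memory footprint of a ~9B-parameter model quantized to GGUF Q8_0.
params = 9e9                  # approximate parameter count
bytes_per_weight = 34 / 32    # Q8_0 block: 2-byte scale + 32 int8 per 32 weights
weights_gb = params * bytes_per_weight / 1e9
print(f"weights alone: ~{weights_gb:.1f} GB")
# KV cache, compute buffers, and OS overhead come on top, which is why
# 16 GB of RAM is the comfortable recommendation rather than ~10 GB.
```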

The EuroLLM model provides strong multilingual performance (Czech, Slovak, English, German), optimized for European language contexts.
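
For direct use with the llama.cpp CLI (outside the Offline AI wrapper), a usage sketch — it assumes a built `llama-cli` binary, and the model filename and prompt are illustrative:

```shell
# Sketch: run the quantized model directly with llama.cpp's CLI.
# Point -m at the .gguf file from this repo.
./llama-cli \
  -m EuroLLM-9B-Q8_0.gguf \
  --ctx-size 4096 \
  -n 256 \
  -p "Shrň jednou větou, proč je lokální AI soukromá."
# (Prompt: "Summarize in one sentence why local AI is private.")
```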

---

## 🧠 WHAT CHANGED IN 2.0

- Refined wrapper architecture
- Improved response handling
- More stable execution
- Cleaner interaction flow
- Stronger project identity and structure
- Designed as a private AI workspace rather than a simple launcher

Offline AI 2.0 is not just “run model → chat → exit”.
It is built as a foundation for future expansion while remaining minimal and fully local.

---

## 🔐 PROJECT PHILOSOPHY

Offline AI exists to demonstrate that:

- Modern AI does not require cloud infrastructure
- Open-source models can operate independently
- Digital tools can respect user privacy
- AI can be used without surveillance economics

This project promotes:

- Digital independence
- Transparency
- Local-first computing
- Educational experimentation

---

## 📄 MODEL ORIGIN & LICENSE

Model: EuroLLM-9B
Original authors: EuroLLM Project consortium
Funded by: European Union research initiatives
Base model license: Apache License 2.0

Quantized distribution: GGUF Q8_0 (for offline inference)
Runtime: llama.cpp (MIT License)
Offline AI interface and wrapper: © David Káninský

All components are used in compliance with their respective licenses.

---

## ⚠️ DISCLAIMER

This project is an educational and experimental implementation.

It is not a commercial AI service and does not replace professional advice.
Outputs are not intended for legal, medical, financial, or critical decision-making use.

Use beyond personal, research, or educational purposes is at your own risk.

---

## 🌍 PROJECT

Website: https://OfflineAI.online
Localizations: .cz / .sk / .de
Author: David Káninský