GoodDavid committed on
Commit f38c051 · verified · 1 Parent(s): 5c26e9b

Update README.md

Files changed (1):
  1. README.md +72 -31
README.md CHANGED
@@ -14,20 +14,21 @@ tags:
  - local-ai
  - multilingual
  - cli-runtime
  pipeline_tag: text-generation
  library_name: llama.cpp
  ---

- # Offline AI 2.1 – EuroLLM-9B-Q8_0 (GGUF)

- Offline AI 2.1 is a fully local AI runtime built around digital sovereignty, privacy, and system autonomy.

  No cloud.
  No telemetry.
  No tracking.
  No external dependencies.

- Everything runs locally via llama.cpp.

  ---

@@ -37,13 +38,46 @@ Below is the Offline AI runtime interface:

  ![Offline AI CLI Help Menu](cli_help_menu.png)

- Offline AI is not just a model launcher — it is a structured local AI workspace with:

- - Profile handling
- - Runtime status inspection
- - Controlled execution flow
- - Modular architecture foundation
- - Admin mode (locked access for advanced system control)

  ---

@@ -52,41 +86,46 @@ Offline AI is not just a model launcher — it is a structured local AI workspac
  Base model: EuroLLM-9B
  Quantization: Q8_0 (GGUF)
  Format: llama.cpp compatible
- Runtime: llama.cpp
- Offline AI Version: 2.1
  Recommended RAM: 16 GB
- Platforms: macOS, Windows

- This repository distributes a quantized GGUF Q8_0 variant of the EuroLLM-9B model for efficient offline inference.
- The original model weights are unmodified and not fine-tuned as part of this project.

  ---

- ## 🧠 WHAT'S NEW IN 2.1

- - Refined CLI architecture
- - Structured command system
- - Improved response handling
- - More stable execution
- - Admin access layer (locked system control mode)
- - Cleaner internal logic separation

- Offline AI 2.1 transitions from a simple launcher to a structured local runtime environment.

  ---

  ## 🔐 PROJECT PHILOSOPHY

- Offline AI demonstrates that:

- - Modern AI can operate without cloud infrastructure
- - Open models can run independently
  - AI tools can respect user privacy
- - Local-first computing is viable

- The project promotes:

- - Digital independence
  - Transparent system design
  - Offline experimentation
  - User-controlled AI environments
@@ -101,8 +140,8 @@ Funded by: European Union research initiatives
  Base model license: Apache License 2.0

  Quantized distribution: GGUF Q8_0
- Runtime: llama.cpp (MIT License)
- Offline AI interface and wrapper: © David Káninský

  All components are used in compliance with their respective licenses.

@@ -123,4 +162,6 @@ Use beyond personal, research, or educational purposes is at your own responsibi

  Website: https://OfflineAI.online
  Domains: .cz / .sk / .de
- Author: David Káninský

  - local-ai
  - multilingual
  - cli-runtime
+ - ai-runtime
  pipeline_tag: text-generation
  library_name: llama.cpp
  ---

+ # Offline AI 2.2 – EuroLLM-9B-Q8_0 (GGUF)

+ Offline AI 2.2 is a fully local AI runtime environment built around digital sovereignty, privacy, and system autonomy.

  No cloud.
  No telemetry.
  No tracking.
  No external dependencies.

+ Everything runs locally via **llama.cpp**.

  ---

  ![Offline AI CLI Help Menu](cli_help_menu.png)

+ Offline AI is no longer just a model launcher.
+
+ It is a **local AI runtime environment** designed to manage and operate language models fully offline with a structured command interface.
+
+ Core capabilities include:
+
+ - CLI runtime environment
+ - Model lifecycle management
+ - Profile-based workspace system
+ - Snapshot conversation archiving
+ - Runtime diagnostics and monitoring
+ - Administrative control layer
+
+ The architecture is designed as a foundation for **multi-model local AI systems**.
+
+ ---
+
+ ## 🧠 RUNTIME ARCHITECTURE
+
+ Offline AI uses a layered architecture:
+
+ User (CLI)
+
+ Python Runtime
+
+ C++ Inference Engine (llama.cpp)
+
+ GGUF Language Model
+
+ The Python runtime acts as the **control layer**, responsible for:
+
+ - command handling
+ - model orchestration
+ - workspace profiles
+ - snapshots and notes
+ - system diagnostics
+ - administrative operations
+
+ The inference backend is a lightweight C++ wrapper around **llama.cpp** with real-time token streaming.

  ---
 
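The layered flow added above (CLI → Python control layer → inference engine) can be sketched in a few lines. This is a minimal illustration only, not the actual Offline AI code: the command names and the stubbed token stream are assumptions standing in for the real llama.cpp backend.

```python
from typing import Callable, Dict, Iterator

def stream_tokens(prompt: str) -> Iterator[str]:
    """Stand-in for the C++ inference engine: the real runtime streams
    tokens from llama.cpp; this stub just echoes words back."""
    for word in prompt.split():
        yield word

class Runtime:
    """Minimal control layer: routes CLI commands, streams model output."""

    def __init__(self) -> None:
        # Hypothetical command table; the actual command set may differ.
        self.commands: Dict[str, Callable[[], str]] = {
            "/help": lambda: "commands: " + ", ".join(sorted(self.commands)),
            "/status": lambda: "model loaded, runtime ok",
        }

    def handle(self, line: str) -> str:
        if line.startswith("/"):
            cmd = self.commands.get(line.split()[0])
            return cmd() if cmd else f"unknown command: {line}"
        # Anything else is forwarded to the inference layer and streamed back.
        return " ".join(stream_tokens(line))

rt = Runtime()
print(rt.handle("/status"))          # prints "model loaded, runtime ok"
print(rt.handle("hello offline ai"))
```

The point of the layering is that the Python side never touches model weights directly; it only dispatches commands and consumes a token stream, so the inference backend can be swapped without changing the interface.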
  Base model: EuroLLM-9B
  Quantization: Q8_0 (GGUF)
  Format: llama.cpp compatible
+ Inference engine: llama.cpp
+ Offline AI Version: 2.2
  Recommended RAM: 16 GB
+ Platforms: macOS, Windows, Linux

+ This repository distributes a **quantized GGUF Q8_0 variant** of the EuroLLM-9B model optimized for efficient local inference.
+
+ The original model weights are **not modified and not fine-tuned** as part of this project.

  ---

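The Q8_0 figures above imply a weight footprint that is easy to sanity-check. In llama.cpp's Q8_0 layout, each block of 32 weights stores 32 int8 quants plus one fp16 scale (34 bytes per block, about 8.5 bits per weight); the ~9.15B parameter count used here for EuroLLM-9B is an approximation:

```python
PARAMS = 9.15e9          # approximate EuroLLM-9B parameter count (assumption)
BLOCK_WEIGHTS = 32       # weights per Q8_0 block in llama.cpp
BLOCK_BYTES = 32 + 2     # 32 int8 quants + one fp16 scale per block

model_bytes = PARAMS / BLOCK_WEIGHTS * BLOCK_BYTES
print(f"~{model_bytes / 1e9:.1f} GB of weights")
```

That puts the weights just under 10 GB, which is why 16 GB of RAM is the recommendation once the KV cache and runtime overhead are added on top.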
+ ## 🚀 WHAT'S NEW IN 2.2

+ - Structured CLI runtime environment
+ - Model lifecycle management system
+ - Model alias system
+ - Workspace profiles and isolation
+ - Snapshot conversation archiving
+ - Runtime diagnostics and monitoring
+ - Administrative control mode
+ - Improved modular runtime architecture

+ Offline AI 2.2 evolves the project from a simple model launcher into a **local AI runtime platform** prepared for managing multiple specialized AI models.

  ---

  ## 🔐 PROJECT PHILOSOPHY

+ Offline AI demonstrates that modern AI systems can operate fully offline.
+
+ The project explores the idea that:

+ - AI does not require cloud infrastructure
+ - Open models can run independently on personal hardware
  - AI tools can respect user privacy
+ - Local-first computing is a viable architecture

+ Offline AI promotes:

+ - Digital sovereignty
  - Transparent system design
  - Offline experimentation
  - User-controlled AI environments

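Snapshot conversation archiving and workspace profiles, both listed in the 2.2 changes above, amount to persisting conversations into per-profile storage. The sketch below assumes a hypothetical JSON-on-disk layout for illustration; the actual Offline AI snapshot format is not documented here.

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def save_snapshot(root: Path, profile: str, messages: list) -> Path:
    """Write a conversation snapshot under <root>/<profile>/snapshots/.
    Layout and naming are illustrative assumptions."""
    snap_dir = root / profile / "snapshots"
    snap_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    path = snap_dir / f"{stamp}.json"
    path.write_text(json.dumps({"profile": profile, "messages": messages}, indent=2))
    return path

root = Path(tempfile.mkdtemp())
snap = save_snapshot(root, "default", [{"role": "user", "content": "hello"}])
print(snap.exists())  # True
```

Keeping snapshots as plain files under a per-profile directory fits the project's local-first philosophy: nothing leaves the machine, and archives remain inspectable without the runtime.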
  Base model license: Apache License 2.0

  Quantized distribution: GGUF Q8_0
+ Runtime engine: llama.cpp (MIT License)
+ Offline AI runtime interface: © David Káninský

  All components are used in compliance with their respective licenses.

 

  Website: https://OfflineAI.online
  Domains: .cz / .sk / .de
+
+ Offline AI Runtime
+ Author: David Káninský