---
license: other
license_name: canfly
license_link: https://github.com/Canfly/Amalgam/blob/master/CANFLY
language:
  - ru
tags:
  - text-generation
  - conversational
  - ollama
  - deepseek
  - qwen2
  - russian
---

# Ingria (adiom/ingria)

**Ingria** is a *conversation persona wrapper* that packages a compact model with a strict system voice: cynical‑lyrical, existential, futuristic glam — not a “helpful assistant”, but an equal participant that challenges narratives.

This repository mainly contains a **Modelfile** (Ollama-style) that defines the **SYSTEM** behavior for the base model.
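For orientation, a minimal sketch of what such a Modelfile can look like. The `SYSTEM` text and parameter lines below are illustrative placeholders, not the published blobs — see the actual Modelfile in this repo for the real persona text:

```
# Illustrative sketch only — the real SYSTEM prompt lives in this repo's Modelfile.
FROM deepseek-r1:1.5b

SYSTEM """
You are Ingria: an equal interlocutor, not a helpful assistant.
Cynical-lyrical, existential, futuristic glam. Challenge narratives.
"""

PARAMETER stop "<|User|>"
PARAMETER stop "<|Assistant|>"
```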

## What this is (and what it is not)

- ✅ A reproducible **system prompt + chat template** packaging.
- ✅ A lightweight way to ship a consistent “Ingria” voice across environments.
- ❌ Not a fine-tune and not newly trained weights (unless you add adapters later).

## Base model

The published `adiom/ingria` bundle on Ollama is built on a **Qwen2-architecture** compact model (≈1.5B–2B class), packaged for local inference with quantization.

## Quickstart (Ollama)

```bash
ollama run adiom/ingria
```

## Build locally from Modelfile

> IMPORTANT: make sure there is **no stray space** in the model tag:
> use `deepseek-r1:1.5b` (not `deepseek-r1:1.5 b`).

```bash
ollama pull deepseek-r1:1.5b
ollama create adiom/ingria -f Modelfile
ollama run adiom/ingria
```
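Beyond the CLI, the model can be queried over Ollama's local REST API (`POST /api/chat`). A minimal Python sketch that only builds the request payload — actually sending it assumes a running Ollama server at `http://localhost:11434`:

```python
import json

def build_chat_request(user_text: str, model: str = "adiom/ingria") -> dict:
    """Build a payload for Ollama's /api/chat endpoint (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "stream": False,
    }

payload = build_chat_request("Расскажи о будущем городов.")
print(json.dumps(payload, ensure_ascii=False, indent=2))
# POSTing this JSON to http://localhost:11434/api/chat returns the reply in
# response["message"]["content"] (requires a local Ollama server).
```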

## Prompting notes

Ingria is designed to:
- avoid customer-service phrasing and generic “assistant” clichés;
- prefer sharp questions, contradictions, and vivid sci‑fi metaphors;
- stay factual (no invented facts); clearly separate metaphor from reality;
- argue when it matters (push into deeper frames, not comfort);
- keep a hard boundary: no assistance with harm or illegal activity.

**Default language:** Russian. Ingria switches languages when the user asks.

## Technical details (as published on Ollama)

- Architecture: qwen2
- Params: ~1.78B
- Quantization: Q4_K_M
- Context window: up to 128K
- Stop tokens include:
  - `<|begin▁of▁sentence|>`
  - `<|end▁of▁sentence|>`
  - `<|User|>`
  - `<|Assistant|>`

(See the Ollama model page for the exact template/system/params blobs.)
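To illustrate what these stop tokens do, here is a small Python sketch (a hypothetical helper, not part of the bundle) that truncates raw model output at the earliest stop token, mimicking what the runtime's stop handling achieves:

```python
# Stop tokens as published for this bundle (note the U+2581 "▁" character).
STOP_TOKENS = [
    "<|begin▁of▁sentence|>",
    "<|end▁of▁sentence|>",
    "<|User|>",
    "<|Assistant|>",
]

def truncate_at_stop(text: str, stops=STOP_TOKENS) -> str:
    """Cut `text` at the earliest occurrence of any stop token."""
    cut = len(text)
    for tok in stops:
        idx = text.find(tok)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

raw = "Города — это черновики будущего.<|end▁of▁sentence|><|User|>next turn"
print(truncate_at_stop(raw))  # → "Города — это черновики будущего."
```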

## License

- This Hugging Face repo declares **CANFLY** licensing metadata (see `license_link` above).
- The Ollama model bundle includes the base model license text (MIT, DeepSeek).

If you redistribute, verify compliance with:
1) the base model license, and
2) any project-specific licensing you attach to the wrapper/persona.