Roshan1162003 committed · Commit 7843c42 · 0 parent(s)

Fresh clean upload (history reset)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. .gitattributes +7 -0
  2. .gitignore +7 -0
  3. .hintrc +15 -0
  4. assets/icons/globe.svg +15 -0
  5. assets/icons/message-circle.svg +13 -0
  6. document.md +59 -0
  7. engine/cpu_avx/api.py +98 -0
  8. engine/cpu_avx/bujji_engine.exe +3 -0
  9. engine/cpu_avx/ggml-base.dll +3 -0
  10. engine/cpu_avx/ggml-cpu.dll +3 -0
  11. engine/cpu_avx/ggml-rpc.dll +3 -0
  12. engine/cpu_avx/ggml.dll +3 -0
  13. engine/cpu_avx/libcurl-x64.dll +3 -0
  14. engine/cpu_avx/llama-batched-bench.exe +3 -0
  15. engine/cpu_avx/llama-batched.exe +3 -0
  16. engine/cpu_avx/llama-bench.exe +3 -0
  17. engine/cpu_avx/llama-cli.exe +3 -0
  18. engine/cpu_avx/llama-convert-llama2c-to-ggml.exe +3 -0
  19. engine/cpu_avx/llama-cvector-generator.exe +3 -0
  20. engine/cpu_avx/llama-embedding.exe +3 -0
  21. engine/cpu_avx/llama-eval-callback.exe +3 -0
  22. engine/cpu_avx/llama-export-lora.exe +3 -0
  23. engine/cpu_avx/llama-gemma3-cli.exe +3 -0
  24. engine/cpu_avx/llama-gen-docs.exe +3 -0
  25. engine/cpu_avx/llama-gguf-hash.exe +3 -0
  26. engine/cpu_avx/llama-gguf-split.exe +3 -0
  27. engine/cpu_avx/llama-gguf.exe +3 -0
  28. engine/cpu_avx/llama-gritlm.exe +3 -0
  29. engine/cpu_avx/llama-imatrix.exe +3 -0
  30. engine/cpu_avx/llama-infill.exe +3 -0
  31. engine/cpu_avx/llama-llava-cli.exe +3 -0
  32. engine/cpu_avx/llama-llava-clip-quantize-cli.exe +3 -0
  33. engine/cpu_avx/llama-lookahead.exe +3 -0
  34. engine/cpu_avx/llama-lookup-create.exe +3 -0
  35. engine/cpu_avx/llama-lookup-merge.exe +3 -0
  36. engine/cpu_avx/llama-lookup-stats.exe +3 -0
  37. engine/cpu_avx/llama-lookup.exe +3 -0
  38. engine/cpu_avx/llama-minicpmv-cli.exe +3 -0
  39. engine/cpu_avx/llama-mtmd-cli.exe +3 -0
  40. engine/cpu_avx/llama-parallel.exe +3 -0
  41. engine/cpu_avx/llama-passkey.exe +3 -0
  42. engine/cpu_avx/llama-perplexity.exe +3 -0
  43. engine/cpu_avx/llama-q8dot.exe +3 -0
  44. engine/cpu_avx/llama-quantize.exe +3 -0
  45. engine/cpu_avx/llama-qwen2vl-cli.exe +3 -0
  46. engine/cpu_avx/llama-retrieval.exe +3 -0
  47. engine/cpu_avx/llama-run.exe +3 -0
  48. engine/cpu_avx/llama-save-load-state.exe +3 -0
  49. engine/cpu_avx/llama-simple-chat.exe +3 -0
  50. engine/cpu_avx/llama-simple.exe +3 -0
.gitattributes ADDED
@@ -0,0 +1,7 @@
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,7 @@
+ node_modules/
+ dist/
+ build/
+ .env
+ *.log
+ *.tmp
+ .DS_Store
.hintrc ADDED
@@ -0,0 +1,15 @@
+ {
+   "extends": [
+     "development"
+   ],
+   "hints": {
+     "compat-api/css": [
+       "default",
+       {
+         "ignore": [
+           "backdrop-filter"
+         ]
+       }
+     ]
+   }
+ }
assets/icons/globe.svg ADDED
assets/icons/message-circle.svg ADDED
document.md ADDED
@@ -0,0 +1,59 @@
+ # Student & Developer Documentation
+
+ ## Overview
+ Welcome to the RemiAI Framework! This document is designed to help you customize, configure, and make this application your own. The framework is built to be "plug-and-play", meaning you don't need to know Python or complex AI coding to use it.
+
+ ## 🛠️ How to Customize
+
+ ### 1. Changing the AI Name
+ Want to name the AI "Jarvis" or "MyBot"?
+ 1. Open `index.html` in any text editor (VS Code, Notepad, etc.).
+ 2. Search for "RemiAI" or "Bujji".
+ 3. Replace the text with your desired name.
+ 4. Save the file.
+ 5. Restart the app (`npm start`), and your new name will appear!
+
+ ### 2. Replacing the AI Model
+ This application is powered by a **GGUF** model file. You can swap this "brain" for a smarter one, a faster one, or one specialized in coding or storytelling.
+
+ **Steps to Change the Model:**
+ 1. **Download a Model**: Go to [Hugging Face](https://huggingface.co/models?library=gguf) and search for GGUF models (e.g., `Llama-3-8B-GGUF`, `Mistral-7B-GGUF`).
+ 2. **Select a File**: Download the `.gguf` file (Q4_K_M or Q5_K_M quantizations offer a good balance of speed and intelligence).
+ 3. **Rename**: Rename your downloaded file to exactly:
+ > **`model.gguf`**
+ 4. **Replace**:
+ * Go to the `engine` folder in your project directory.
+ * Paste your new `model.gguf` there, replacing the old one (or place it one level up, depending on your folder setup: check `main.js`, which looks for `../model.gguf` relative to the engine binary). *Note: the standard setup places `model.gguf` in the root or `engine` folder, as configured.*
+ 5. **Restart**: Run `npm start`. The app will now use the new model!
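The file placement in the steps above can be double-checked with a small script before restarting. This is a hypothetical helper, not part of the app; pass it whatever path your setup keeps `model.gguf` in:

```python
import os

# Hypothetical helper (not shipped with the app): confirm the model file
# is where you put it, and report its size, before running `npm start`.
def check_model(path: str) -> bool:
    """Return True if the model file exists at the given path."""
    if not os.path.isfile(path):
        print(f"Missing: {path}")
        return False
    size_mb = os.path.getsize(path) / 1e6
    print(f"Found {path} ({size_mb:.1f} MB)")
    return True
```

If this prints `Missing: ...`, the engine will fail to load the model on the next start.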
+
+ **Note**: Make sure your machine is in good condition, and avoid laptops more than 5 years old. Running full generative AI model weights is demanding: an aging laptop may freeze, overheat, or shut down automatically, and in the worst case the hardware can fail entirely, so be careful.
+ **Recommended configuration (runs safely):**
+ * Desktop PC: i3 processor, 8GB RAM.
+ * Laptop: i5 processor, 16GB RAM.
+ A newer i3 laptop with 8GB RAM will also run it easily, but the machine needs to be recent and well built; if the laptop is too old it may struggle even with an i5 processor and 16GB RAM.
+
+ ### 3. Customizing the UI
+ All styles are in `styles.css` (or within `index.html`).
+ * **Colors**: Change the background or chat bubble colors in the CSS.
+ * **Icons**: Replace `remiai.ico` with your own `.ico` file to change the app icon.
+
+ ## ❓ Frequently Asked Questions (FAQ)
+
+ **Q: Do I need Python?**
+ A: **No.** The application comes with a pre-compiled engine (`bujji_engine.exe` / `llama-server.exe`) that runs the model directly.
+
+ **Q: Why does it say "AVX2"?**
+ A: AVX2 is a feature of modern CPUs that makes the AI run faster. The app automatically detects whether you have it; if not, it switches to a slower but compatible mode (AVX).
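The idea behind that detection can be illustrated with a tiny, purely hypothetical sketch (the app's real probe lives in its launcher, not here). CPU feature flags are a space-separated string, such as the `flags` field of `/proc/cpuinfo` on Linux:

```python
# Illustrative only: not the app's actual detection code.
def has_avx2(cpu_flags: str) -> bool:
    """Return True if 'avx2' appears in a space-separated CPU flags string."""
    return "avx2" in cpu_flags.lower().split()

print(has_avx2("fpu vme sse sse2 avx avx2 fma"))  # True  -> fast AVX2 path
print(has_avx2("fpu vme sse sse2 avx"))           # False -> fallback AVX path
```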
+
+ **Q: The app opens but doesn't reply.**
+ A:
+ 1. Check that `model.gguf` exists in the correct folder.
+ 2. Open the Developer Tools (Ctrl+Shift+I) in the app and look for red errors in the Console.
+
+ **Q: How do I build it into an .exe file?**
+ A: Run the command:
+ ```bash
+ npm run dist
+ ```
+ This will create an installer in the `release` folder that you can share with friends!
+
+ `If you hit errors while building, open PowerShell as an administrator and run the command above; that usually resolves them.`
engine/cpu_avx/api.py ADDED
@@ -0,0 +1,98 @@
+ import sys
+ import os
+ import multiprocessing
+ from flask import Flask, request, Response, stream_with_context
+ from waitress import serve
+ import json
+ import traceback
+
+ # --- 1. SETUP LOGGING ---
+ def log(msg):
+     print(f"[ENGINE] {msg}", flush=True)
+
+ # --- 2. PATH SETUP ---
+ if getattr(sys, 'frozen', False):
+     BASE_DIR = os.path.dirname(sys.executable)
+ else:
+     BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+
+ MODEL_PATH = os.path.join(BASE_DIR, "model.gguf")
+ log(f"Base Directory: {BASE_DIR}")
+
+ app = Flask(__name__)
+
+ # --- 3. THE "MONKEY PATCH" (CRITICAL FIX) ---
+ # We intercept the library's attempt to set up logging and stop it.
+ try:
+     import llama_cpp
+
+     # Create a dummy function that does NOTHING
+     def dummy_log_set(callback, user_data):
+         return
+
+     # Overwrite the library's internal function with our dummy.
+     # Now, when Llama() runs, it calls this instead of the C function.
+     llama_cpp.llama_log_set = dummy_log_set
+
+     log("Successfully patched Llama logging.")
+ except Exception as e:
+     log(f"Patch warning: {e}")
+
+ # --- 4. LOAD MODEL ---
+ llm = None
+ try:
+     from llama_cpp import Llama
+
+     total_cores = multiprocessing.cpu_count()
+     safe_threads = max(1, int(total_cores * 0.5))
+
+     if not os.path.exists(MODEL_PATH):
+         log("CRITICAL ERROR: model.gguf is missing!")
+     else:
+         log("Loading Model...")
+         llm = Llama(
+             model_path=MODEL_PATH,
+             n_ctx=4096,
+             n_threads=safe_threads,
+             n_gpu_layers=0,
+             verbose=False,
+             chat_format="gemma",
+             use_mmap=False
+         )
+         log("Model Loaded Successfully!")
+
+ except Exception as e:
+     log(f"CRITICAL EXCEPTION during load: {e}")
+     log(traceback.format_exc())
+
+ @app.route('/', methods=['GET'])
+ def health_check():
+     if llm: return "OK", 200
+     return "MODEL_FAILED", 500
+
+ @app.route('/chat_stream', methods=['POST'])
+ def chat_stream():
+     if not llm:
+         return Response("data: " + json.dumps({'chunk': "Error: Brain failed initialization."}) + "\n\n", mimetype='text/event-stream')
+
+     data = request.json
+     messages = [{"role": "user", "content": data.get('message', '')}]
+
+     def generate():
+         try:
+             stream = llm.create_chat_completion(messages=messages, max_tokens=1000, stream=True)
+             for chunk in stream:
+                 if 'content' in chunk['choices'][0]['delta']:
+                     yield f"data: {json.dumps({'chunk': chunk['choices'][0]['delta']['content']})}\n\n"
+         except Exception as e:
+             log(f"Gen Error: {e}")
+             yield f"data: {json.dumps({'chunk': ' Error.'})}\n\n"
+
+     return Response(stream_with_context(generate()), mimetype='text/event-stream')
+
+ if __name__ == '__main__':
+     log("Starting Waitress Server on Port 5000...")
+     try:
+         serve(app, host='127.0.0.1', port=5000, threads=6)
+     except Exception as e:
+         log(f"Server Crash: {e}")
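The `/chat_stream` endpoint above emits Server-Sent Events of the form `data: {"chunk": "..."}` followed by a blank line. A client consuming it needs to parse those lines back into text; here is a minimal, illustrative parser (the shipped app presumably does the equivalent in its Electron renderer, not in Python):

```python
import json

# Illustrative sketch: turn an SSE response body from /chat_stream
# into plain text by concatenating each event's 'chunk' field.
def extract_chunks(sse_body: str) -> str:
    """Concatenate the 'chunk' fields from an SSE response body."""
    parts = []
    for line in sse_body.splitlines():
        if line.startswith("data: "):
            payload = json.loads(line[len("data: "):])
            parts.append(payload.get("chunk", ""))
    return "".join(parts)

sample = 'data: {"chunk": "Hello"}\n\ndata: {"chunk": " world"}\n\n'
print(extract_chunks(sample))  # Hello world
```

This also shows why the server wraps each delta in `json.dumps`: the client can split on `data: ` lines without worrying about newlines inside the generated text.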
engine/cpu_avx/bujji_engine.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d7218b54eef84bed96f12d866a6ae451a761fdb609d7b9965fea16b497899d2
+ size 3239936
engine/cpu_avx/ggml-base.dll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61fb5e1c5272308728a7af17ab130e0987e966d70ff0db43826af30991e6b8b5
+ size 488448
engine/cpu_avx/ggml-cpu.dll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d6e3f238f1d97508d662bc0b8e41fc9ff5fd1f53d265f3666f5acb4536757da
+ size 520192
engine/cpu_avx/ggml-rpc.dll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d9ddf43958cd3de0c23c27cabfd66499aeda590de4f1f1c49a7448d9e02b647
+ size 97792
engine/cpu_avx/ggml.dll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:041fbc12f681857a892331324b5f7bc4316726883626c196447633b7ffc76442
+ size 69120
engine/cpu_avx/libcurl-x64.dll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5bd5fda8cf2bef630dd4eed5e60b65af2aed9c76c1818c36133701c09cf6d0ac
+ size 3144776
engine/cpu_avx/llama-batched-bench.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e747449920c8b8802038a06bbbcfbd73687d410be41d9ed8d65ff6c7a72d6d8f
+ size 1295872
engine/cpu_avx/llama-batched.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31c50c850448f0231f61e8fb15995823a8ec1d0711e5710a0e11413b2bfec8a4
+ size 1296384
engine/cpu_avx/llama-bench.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48f5cf89c0696fd248d26de965211e5143b7426bcb80e1b84f2d2abad250223c
+ size 215552
engine/cpu_avx/llama-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95b285722ff441d2d92da0721979e28d72261bd0ddeddd3041b4a12979f67923
+ size 1356288
engine/cpu_avx/llama-convert-llama2c-to-ggml.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57ec73a2b1bd1c7aaeef830bdc947d1d6281470d49306cd80a40b57d6b601ffa
+ size 82432
engine/cpu_avx/llama-cvector-generator.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f45ebffc0d35b8a68974b2d779b25380110113bd8af84044c9399b3e38fafed8
+ size 1329152
engine/cpu_avx/llama-embedding.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba9ebbdac5403bf0815a8dc7ab94ab6d6c1cf3e5f4aa77a8b326047ab1fc2373
+ size 1312768
engine/cpu_avx/llama-eval-callback.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c0c04124892fd8d1be153c4fd7dd66d96a9e98b0e23d9892433a0ccf18517e5
+ size 1305088
engine/cpu_avx/llama-export-lora.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6f108720162e23eecc0b5105c01042db1d15d604718d91d7b900e334429de5e
+ size 1307648
engine/cpu_avx/llama-gemma3-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c13facc879eb6cef47f5914a7685d0f71ae1b911a017dcf5ba089e8d481592d6
+ size 26112
engine/cpu_avx/llama-gen-docs.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4de7708bf248d427170d76117d8a9883b205599ffa3ce093bdb290f175daed98
+ size 541184
engine/cpu_avx/llama-gguf-hash.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:230009691bb7f727f7b15e0149efb3b60a747abcd51aae05ef95cd0382f6cf82
+ size 65536
engine/cpu_avx/llama-gguf-split.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed336646f82f00951657c8515fa8e30a0126263c40b6068fc266e89080b0474e
+ size 49152
engine/cpu_avx/llama-gguf.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb9aa3d0b05bd527c6fbae73949183189c3b789f79815c9997a6052a6703ddb4
+ size 29696
engine/cpu_avx/llama-gritlm.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9559ddfcf1e9a798398c915c62fc1916329470b2109fbd15d637daa66f002bec
+ size 1304576
engine/cpu_avx/llama-imatrix.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db8bfaa593b33b08608410f76873b719f7f38e0146825a6b57f5563d89cae1f2
+ size 1331200
engine/cpu_avx/llama-infill.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a87a65b4b21fbd96c38d124e2169f98b1164abb08e5337cb8d61d7ee8b5aa13e
+ size 1336832
engine/cpu_avx/llama-llava-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c13facc879eb6cef47f5914a7685d0f71ae1b911a017dcf5ba089e8d481592d6
+ size 26112
engine/cpu_avx/llama-llava-clip-quantize-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dea5fca05ae35d42677b33e8775403eb01ba83041ac4503c30ea5b859ad6f794
+ size 279552
engine/cpu_avx/llama-lookahead.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5673e70d5b5390e450b3afbfdaec0b5b4213c4f9bc54f3ad4396182a3513d0a
+ size 1323520
engine/cpu_avx/llama-lookup-create.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ada5e39f34e382dc27a65d6cf70b7ec5c67000677cffbc06934d61df766d9bd2
+ size 1308672
engine/cpu_avx/llama-lookup-merge.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76e89219f880f05a0962c9bf8f64deb207e3ecefd7dcc1e11ef6f230b67b8c6b
+ size 47616
engine/cpu_avx/llama-lookup-stats.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1f0688e6f4d48b0f409abd02ddea445ca00bad86571da4046d1644bd78b00e6
+ size 1323520
engine/cpu_avx/llama-lookup.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e87fa6a92e0a2e57b5d0d4f0a3d2878f1bfd00db5c9b3999792ab544aa940da3
+ size 1340928
engine/cpu_avx/llama-minicpmv-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c13facc879eb6cef47f5914a7685d0f71ae1b911a017dcf5ba089e8d481592d6
+ size 26112
engine/cpu_avx/llama-mtmd-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b605a3504d41cbfb3a9afe5ab8f45298a18eb17b8172d8037bb4a6360613eb3
+ size 1540096
engine/cpu_avx/llama-parallel.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c7a9e6d8c9a0bc9a2c552a693f2c91daf647905793a3df23b160fdd84325e52
+ size 1327616
engine/cpu_avx/llama-passkey.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec587011402a2f1a1f3518ea53a09e34ef80d3012a031a279f926ced9a221f58
+ size 1298432
engine/cpu_avx/llama-perplexity.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf2f8f254b69679b4e3958774724443163b1c4dd2cd859254b9032b175dfc159
+ size 1384448
engine/cpu_avx/llama-q8dot.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad4ac0298c73936e2a438f58b22bb16b603bce637a3a499b22f36584ceb77277
+ size 19456
engine/cpu_avx/llama-quantize.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66d5b8673aff2f63f4ce6a2e3449cb7353a60a6053d3afb76301c1bec04bac2f
+ size 100864
engine/cpu_avx/llama-qwen2vl-cli.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43cdbfaa7754e1c4a07269d341dde008948836532190209b19396a66f0699191
+ size 26112
engine/cpu_avx/llama-retrieval.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce2f9b2449eb5ff61639308f9ab253f1109a0834eebfb3226d24cca4d7b57848
+ size 1322496
engine/cpu_avx/llama-run.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9575f3d51162b9663dff98d2106bb5318756906197967d20ab03c12d3ab3ec2b
+ size 1007616
engine/cpu_avx/llama-save-load-state.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c282e88d7576659b76ce0f2d7364c8572b9679641af5c27ed70e7a8058a39dee
+ size 1307648
engine/cpu_avx/llama-simple-chat.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:581b4a3049cd0fd38bc91757e198763ccf1790c20a011af80f74aec0d6a9f46d
+ size 28160
engine/cpu_avx/llama-simple.exe ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c66b40340e18cec139fd464fa43cd8cdc7e03f9c0861815ae803d16e38e78397
+ size 22528