ObtuseAglet committed on
Commit f05d8cf · 1 Parent(s): 6e99b50

Feat: Added huggingface readme.md format to match l-bom v0.2.0
README.md CHANGED
@@ -1,21 +1,21 @@
1
  ---
2
  title: GUI-BOM
3
- emoji: 📚
4
  colorFrom: red
5
  colorTo: pink
6
  sdk: static
7
  pinned: true
8
  ---
9
 
10
- # GUI-BOM
11
 
12
  `GUI-BOM` is a local browser-based wrapper around `L-BOM`, a Python tool that inspects local LLM model artifacts such as `.gguf` and `.safetensors` files and emits a lightweight Software Bill of Materials with file identity, format details, model metadata, and parsing warnings.
13
 
14
  The project now supports both a CLI workflow and a polished local GUI for people who would rather click through a browser than work in a command prompt.
15
 
16
- ## Quick start on Windows
17
 
18
- ### Before you begin
19
 
20
  1. Install Python 3.10 or newer for Windows.
21
  2. During installation, enable the option to add Python to `PATH` if it is offered.
@@ -33,7 +33,7 @@ python --version
33
 
34
  `start-gui.bat` needs one of those commands to exist already. It creates a virtual environment and installs the app, but it does not install Python for you.
35
 
36
- ### Launch with the batch file
37
 
38
  1. Download or extract the project to a folder on your PC.
39
  2. Open the folder in File Explorer.
@@ -44,7 +44,7 @@ python --version
44
 
45
  If the script prints `Unable to start the GUI.`, Python is usually missing or not available through `py` or `python`.
46
 
47
- ### Launch manually from PowerShell
48
 
49
  If you prefer to run the setup yourself:
50
 
@@ -55,7 +55,7 @@ py -3 -m venv .venv
55
  .venv\Scripts\python -m llm_sbom.cli gui
56
  ```
57
 
58
- ## Install
59
 
60
  ```bash
61
  pip install .
@@ -67,7 +67,7 @@ For editable local development:
67
  pip install -e ".[dev]"
68
  ```
69
 
70
- ## GUI usage
71
 
72
  Start the local web app manually after installing the package:
73
 
@@ -90,7 +90,7 @@ The GUI includes:
90
  - summary cards, document details, and warning views
91
  - copy and download actions for JSON, SPDX, and table output
92
 
93
- ## One-click Windows launch
94
 
95
  `start-gui.bat` is the fastest way to get started on Windows once Python is installed.
96
 
@@ -104,7 +104,7 @@ It:
104
 
105
  If Python is not installed, the launcher will fail and print `Unable to start the GUI.`
106
 
107
- ## Docker usage
108
 
109
  Build the image:
110
 
@@ -120,7 +120,7 @@ docker run --rm -p 7860:7860 -v C:\models:/models l-bom-gui
120
 
121
  Then open `http://127.0.0.1:7860` and browse to `/models` inside the app.
122
 
123
- ## CLI usage
124
 
125
  Show the installed version:
126
 
@@ -140,64 +140,296 @@ Scan a single model file and emit SPDX tag-value:
140
  l-bom scan .\models\Llama-3.1-8B-Instruct-Q4_K_M.gguf --format spdx
141
  ```
142
 
143
  Scan a directory recursively and render a table:
144
 
145
  ```bash
146
  l-bom scan .\models --format table
147
  ```
148
 
149
  Skip SHA256 hashing for very large files and write the result to disk:
150
 
151
  ```bash
152
  l-bom scan .\models --no-hash --output .\model-sbom.json
153
  ```
154
 
155
- ## Sample JSON output
156
 
157
  ```json
158
  {
159
  "sbom_version": "1.0",
160
- "generated_at": "2026-03-24T14:08:22.118000+00:00",
161
- "tool_name": "L-BOM",
162
- "tool_version": "0.1.0",
163
- "model_path": "C:\\models\\Llama-3.1-8B-Instruct-Q4_K_M.gguf",
164
- "model_filename": "Llama-3.1-8B-Instruct-Q4_K_M.gguf",
165
- "file_size_bytes": 4682873912,
166
- "sha256": "8b0b3cb15be2e0a0f4b474230ef326f6180fc76efad1d761bf9ce949f6e785b4",
167
  "format": "gguf",
168
- "architecture": "llama",
169
- "parameter_count": 8030261248,
170
- "quantization": "Q4_K_M",
171
  "dtype": null,
172
- "context_length": 8192,
173
- "vocab_size": 128256,
174
- "license": "llama3.1",
175
- "base_model": "meta-llama/Llama-3.1-8B-Instruct",
176
- "training_framework": "transformers 4.43.2",
177
  "metadata": {
178
- "general.name": "Llama 3.1 8B Instruct",
179
- "general.file_type": 14,
180
  "gguf_version": 3,
181
  "endianness": "little",
182
  "metadata_keys": [
183
  "general.architecture",
184
- "general.file_type",
185
- "llama.context_length",
186
- "tokenizer.ggml.tokens"
187
  ],
188
- "sidecar_config": {
189
- "model_type": "llama",
190
- "architectures": [
191
- "LlamaForCausalLM"
192
- ],
193
- "torch_dtype": "bfloat16",
194
- "transformers_version": "4.43.2"
195
  }
196
  },
197
  "warnings": []
198
  }
199
  ```
200
 
201
- ## License
202
 
203
  This project is licensed under the MIT License. See `LICENSE` for the full text.
 
1
  ---
2
  title: GUI-BOM
3
+ emoji: 📊
4
  colorFrom: red
5
  colorTo: pink
6
  sdk: static
7
  pinned: true
8
  ---
9
 
10
+ ## GUI-BOM
11
 
12
  `GUI-BOM` is a local browser-based wrapper around `L-BOM`, a Python tool that inspects local LLM model artifacts such as `.gguf` and `.safetensors` files and emits a lightweight Software Bill of Materials with file identity, format details, model metadata, and parsing warnings.
13
 
14
  The project now supports both a CLI workflow and a polished local GUI for people who would rather click through a browser than work in a command prompt.
15
 
16
+ ### Quick start on Windows
17
 
18
+ #### Before you begin
19
 
20
  1. Install Python 3.10 or newer for Windows.
21
  2. During installation, enable the option to add Python to `PATH` if it is offered.
 
33
 
34
  `start-gui.bat` needs one of those commands to exist already. It creates a virtual environment and installs the app, but it does not install Python for you.
35
 
36
+ #### Launch with the batch file
37
 
38
  1. Download or extract the project to a folder on your PC.
39
  2. Open the folder in File Explorer.
 
44
 
45
  If the script prints `Unable to start the GUI.`, Python is usually missing or not available through `py` or `python`.
46
 
47
+ #### Launch manually from PowerShell
48
 
49
  If you prefer to run the setup yourself:
50
 
 
55
  .venv\Scripts\python -m llm_sbom.cli gui
56
  ```
57
 
58
+ ### Install
59
 
60
  ```bash
61
  pip install .
 
67
  pip install -e ".[dev]"
68
  ```
69
 
70
+ ### GUI usage
71
 
72
  Start the local web app manually after installing the package:
73
 
 
90
  - summary cards, document details, and warning views
91
  - copy and download actions for JSON, SPDX, and table output
92
 
93
+ ### One-click Windows launch
94
 
95
  `start-gui.bat` is the fastest way to get started on Windows once Python is installed.
96
 
 
104
 
105
  If Python is not installed, the launcher will fail and print `Unable to start the GUI.`
106
 
107
+ ### Docker usage
108
 
109
  Build the image:
110
 
 
120
 
121
  Then open `http://127.0.0.1:7860` and browse to `/models` inside the app.
122
 
123
+ ### CLI usage
124
 
125
  Show the installed version:
126
 
 
140
  l-bom scan .\models\Llama-3.1-8B-Instruct-Q4_K_M.gguf --format spdx
141
  ```
142
 
143
+ Scan a single model file and emit a Hugging Face-style README:
144
+
145
+ ```bash
146
+ l-bom scan .\models\Llama-3.1-8B-Instruct-Q4_K_M.gguf --format hf-readme
147
+ ```
148
+
149
  Scan a directory recursively and render a table:
150
 
151
  ```bash
152
  l-bom scan .\models --format table
153
  ```
154
 
155
+ Export a single model scan as Hugging Face-ready `README.md` content:
156
+
157
+ ```bash
158
+ l-bom scan .\models\Llama-3.1-8B-Instruct-Q4_K_M.gguf --format hf-readme --hf-sdk static --hf-app-file index.html
159
+ ```
160
+
161
+ Override the inferred title and short description for the README front matter:
162
+
163
+ ```bash
164
+ l-bom scan .\models\Llama-3.1-8B-Instruct-Q4_K_M.gguf --format hf-readme --hf-title "Llama 3.1 Demo" --hf-short-description "Quantized GGUF artifact for a local demo space"
165
+ ```
166
+
167
+
168
  Skip SHA256 hashing for very large files and write the result to disk:
169
 
170
  ```bash
171
  l-bom scan .\models --no-hash --output .\model-sbom.json
172
  ```
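The `--no-hash` flag exists because hashing multi-gigabyte artifacts is the slow part of a scan. The hashing it skips can be sketched as a streamed SHA-256 over fixed-size chunks; this is a hedged illustration of the general technique, not the project's actual implementation:

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks.

    Returns the hex digest without ever loading the whole
    file into memory, which matters for large .gguf artifacts.
    """
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        while chunk := handle.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```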
173
 
174
+ ### Sample JSON output
175
+
176
+ Sample JSON output for [`LFM2.5-1.2B-Instruct-Q8_0.gguf`](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF):
177
 
178
  ```json
179
  {
180
  "sbom_version": "1.0",
181
+ "generated_at": "2026-03-25T04:07:53.262551+00:00",
182
+ "tool_name": "l-bom",
183
+ "tool_version": "0.2.0",
184
+ "model_path": "C:\\models\\LFM2.5-1.2B-Instruct-GGUF\\LFM2.5-1.2B-Instruct-Q8_0.gguf",
185
+ "model_filename": "LFM2.5-1.2B-Instruct-Q8_0.gguf",
186
+ "file_size_bytes": 1246253888,
187
+ "sha256": "f6b981dcb86917fa463f78a362320bd5e2dc45445df147287eedb85e5a30d26a",
188
  "format": "gguf",
189
+ "architecture": "lfm2",
190
+ "parameter_count": 1170340608,
191
+ "quantization": "Q5_1",
192
  "dtype": null,
193
+ "context_length": 128000,
194
+ "vocab_size": 65536,
195
+ "license": null,
196
+ "base_model": null,
197
+ "training_framework": null,
198
  "metadata": {
199
+ "general.architecture": "lfm2",
200
+ "general.type": "model",
201
+ "general.name": "4cd563d5a96af9e7c738b76cd89a0a200db7608f",
202
+ "general.finetune": "4cd563d5a96af9e7c738b76cd89a0a200db7608f",
203
+ "general.size_label": "1.2B",
204
+ "general.license": "other",
205
+ "general.license.name": "lfm1.0",
206
+ "general.license.link": "LICENSE",
207
+ "general.tags": [
208
+ "liquid",
209
+ "lfm2.5",
210
+ "edge",
211
+ "text-generation"
212
+ ],
213
+ "general.languages": [
214
+ "en",
215
+ "ar",
216
+ "zh",
217
+ "fr",
218
+ "de",
219
+ "ja",
220
+ "ko",
221
+ "es"
222
+ ],
223
+ "lfm2.block_count": 16,
224
+ "lfm2.context_length": 128000,
225
+ "lfm2.embedding_length": 2048,
226
+ "lfm2.feed_forward_length": 8192,
227
+ "lfm2.attention.head_count": 32,
228
+ "lfm2.attention.head_count_kv": [
229
+ 0,
230
+ 0,
231
+ 8,
232
+ 0,
233
+ 0,
234
+ 8,
235
+ 0,
236
+ 0,
237
+ 8,
238
+ 0,
239
+ 8,
240
+ 0,
241
+ 8,
242
+ 0,
243
+ 8,
244
+ 0
245
+ ],
246
+ "lfm2.rope.freq_base": 1000000.0,
247
+ "lfm2.attention.layer_norm_rms_epsilon": 9.999999747378752e-06,
248
+ "lfm2.vocab_size": 65536,
249
+ "lfm2.shortconv.l_cache": 3,
250
+ "tokenizer.ggml.model": "gpt2",
251
+ "tokenizer.ggml.pre": "lfm2",
252
+ "tokenizer.ggml.tokens": {
253
+ "type": "array",
254
+ "element_type": "STRING",
255
+ "count": 65536,
256
+ "preview": [
257
+ "<|pad|>",
258
+ "<|startoftext|>",
259
+ "<|endoftext|>",
260
+ "<|fim_pre|>",
261
+ "<|fim_mid|>",
262
+ "<|fim_suf|>",
263
+ "<|im_start|>",
264
+ "<|im_end|>",
265
+ "<|tool_list_start|>",
266
+ "<|tool_list_end|>",
267
+ "<|tool_call_start|>",
268
+ "<|tool_call_end|>",
269
+ "<|tool_response_start|>",
270
+ "<|tool_response_end|>",
271
+ "<|reserved_4|>",
272
+ "<|reserved_5|>"
273
+ ],
274
+ "truncated": true
275
+ },
276
+ "tokenizer.ggml.token_type": {
277
+ "type": "array",
278
+ "element_type": "INT32",
279
+ "count": 65536,
280
+ "preview": [
281
+ 3,
282
+ 3,
283
+ 3,
284
+ 3,
285
+ 3,
286
+ 3,
287
+ 3,
288
+ 3,
289
+ 3,
290
+ 3,
291
+ 3,
292
+ 3,
293
+ 3,
294
+ 3,
295
+ 1,
296
+ 1
297
+ ],
298
+ "truncated": true
299
+ },
300
+ "tokenizer.ggml.merges": {
301
+ "type": "array",
302
+ "element_type": "STRING",
303
+ "count": 63683,
304
+ "preview": [
305
+ "Ċ Ċ",
306
+ "Ċ ĊĊ",
307
+ "ĊĊ Ċ",
308
+ "Ċ ĊĊĊ",
309
+ "ĊĊ ĊĊ",
310
+ "ĊĊĊ Ċ",
311
+ "Ċ ĊĊĊĊ",
312
+ "ĊĊ ĊĊĊ",
313
+ "ĊĊĊ ĊĊ",
314
+ "ĊĊĊĊ Ċ",
315
+ "Ċ ĊĊĊĊĊ",
316
+ "ĊĊ ĊĊĊĊ",
317
+ "ĊĊĊ ĊĊĊ",
318
+ "ĊĊĊĊ ĊĊ",
319
+ "ĊĊĊĊĊ Ċ",
320
+ "Ċ ĊĊĊĊĊĊ"
321
+ ],
322
+ "truncated": true
323
+ },
324
+ "tokenizer.ggml.bos_token_id": 1,
325
+ "tokenizer.ggml.eos_token_id": 7,
326
+ "tokenizer.ggml.padding_token_id": 0,
327
+ "tokenizer.ggml.add_bos_token": true,
328
+ "tokenizer.ggml.add_sep_token": false,
329
+ "tokenizer.ggml.add_eos_token": false,
330
+ "tokenizer.chat_template": "{{- bos_token -}}\n{%- set keep_past_thinking = keep_past_thinking | default(false) -%}\n{%- set ns = namespace(system_prompt=\"\") -%}\n{%- if messages[0][\"role\"] == \"system\" -%}\n {%- set ns.system_prompt = messages[0][\"content\"] -%}\n {%- set messages = messages[1:] -%}\n{%- endif -%}\n{%- if tools -%}\n {%- set ns.system_prompt = ns.system_prompt + (\"\\n\" if ns.system_prompt else \"\") + \"List of tools: [\" -%}\n {%- for tool in tools -%}\n {%- if tool is not string -%}\n {%- set tool = tool | tojson -%}\n {%- endif -%}\n {%- set ns.system_prompt = ns.system_prompt + tool -%}\n {%- if not loop.last -%}\n {%- set ns.system_prompt = ns.system_prompt + \", \" -%}\n {%- endif -%}\n {%- endfor -%}\n {%- set ns.system_prompt = ns.system_prompt + \"]\" -%}\n{%- endif -%}\n{%- if ns.system_prompt -%}\n {{- \"<|im_start|>system\\n\" + ns.system_prompt + \"<|im_end|>\\n\" -}}\n{%- endif -%}\n{%- set ns.last_assistant_index = -1 -%}\n{%- for message in messages -%}\n {%- if message[\"role\"] == \"assistant\" -%}\n {%- set ns.last_assistant_index = loop.index0 -%}\n {%- endif -%}\n{%- endfor -%}\n{%- for message in messages -%}\n {{- \"<|im_start|>\" + message[\"role\"] + \"\\n\" -}}\n {%- set content = message[\"content\"] -%}\n {%- if content is not string -%}\n {%- set content = content | tojson -%}\n {%- endif -%}\n {%- if message[\"role\"] == \"assistant\" and not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}\n {%- if \"</think>\" in content -%}\n {%- set content = content.split(\"</think>\")[-1] | trim -%}\n {%- endif -%}\n {%- endif -%}\n {{- content + \"<|im_end|>\\n\" -}}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{- \"<|im_start|>assistant\\n\" -}}\n{%- endif -%}",
331
+ "general.quantization_version": 2,
332
+ "general.file_type": 7,
333
  "gguf_version": 3,
334
  "endianness": "little",
335
  "metadata_keys": [
336
  "general.architecture",
337
+ "general.type",
338
+ "general.name",
339
+ "general.finetune",
340
+ "general.size_label",
341
+ "general.license",
342
+ "general.license.name",
343
+ "general.license.link",
344
+ "general.tags",
345
+ "general.languages",
346
+ "lfm2.block_count",
347
+ "lfm2.context_length",
348
+ "lfm2.embedding_length",
349
+ "lfm2.feed_forward_length",
350
+ "lfm2.attention.head_count",
351
+ "lfm2.attention.head_count_kv",
352
+ "lfm2.rope.freq_base",
353
+ "lfm2.attention.layer_norm_rms_epsilon",
354
+ "lfm2.vocab_size",
355
+ "lfm2.shortconv.l_cache",
356
+ "tokenizer.ggml.model",
357
+ "tokenizer.ggml.pre",
358
+ "tokenizer.ggml.tokens",
359
+ "tokenizer.ggml.token_type",
360
+ "tokenizer.ggml.merges",
361
+ "tokenizer.ggml.bos_token_id",
362
+ "tokenizer.ggml.eos_token_id",
363
+ "tokenizer.ggml.padding_token_id",
364
+ "tokenizer.ggml.add_bos_token",
365
+ "tokenizer.ggml.add_sep_token",
366
+ "tokenizer.ggml.add_eos_token",
367
+ "tokenizer.chat_template",
368
+ "general.quantization_version",
369
+ "general.file_type"
370
  ],
371
+ "tensor_count": 148,
372
+ "tensor_type_counts": {
373
+ "Q8_0": 93,
374
+ "F32": 55
375
+ },
376
+ "tensor_type_parameter_counts": {
377
+ "Q8_0": 1170210816,
378
+ "F32": 129792
379
  }
380
  },
381
  "warnings": []
382
  }
383
  ```
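The per-tensor counts in the sample above can be post-processed from the emitted JSON. A minimal sketch, assuming the document has been loaded with `json.load` and using the field names exactly as they appear in the sample (not a guaranteed schema):

```python
def quantized_fraction(sbom: dict) -> float:
    """Fraction of parameters held in quantized tensors.

    Reads metadata.tensor_type_parameter_counts, treating any tensor
    type other than the common float formats as quantized.
    """
    counts = sbom["metadata"]["tensor_type_parameter_counts"]
    total = sum(counts.values())
    quantized = sum(
        params
        for tensor_type, params in counts.items()
        if tensor_type not in {"F32", "F16", "BF16"}
    )
    return quantized / total
```

For the sample document this reports that essentially all parameters (about 99.99%) live in `Q8_0` tensors, with only the small `F32` remainder in full precision.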
384
 
385
+ ### Sample Hugging Face README output
386
+
387
+ Sample hf-readme output for [`LFM2.5-1.2B-Instruct-Q8_0.gguf`](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF):
388
+
389
+ ```markdown
390
+ ---
391
+ title: "LFM2.5-1.2B-Instruct-Q8_0"
392
+ short_description: "GGUF model artifact for LFM2.5-1.2B-Instruct-Q8_0 with 1.17B parameters, 1.2B, Q5_1, 128.00K context"
393
+ tags:
394
+ - "liquid"
395
+ - "lfm2.5"
396
+ - "edge"
397
+ - "text-generation"
398
+ - "gguf"
399
+ - "lfm2"
400
+ - "q5_1"
401
+ ---
402
+
403
+ # LFM2.5-1.2B-Instruct-Q8_0
404
+
405
+ GGUF model artifact for LFM2.5-1.2B-Instruct-Q8_0 with 1.17B parameters, 1.2B, Q5_1, 128.00K context
406
+
407
+ This README content was generated by `L-BOM` from a local model artifact.
408
+
409
+ ## Artifact details
410
+
411
+ - **Filename:** `LFM2.5-1.2B-Instruct-Q8_0.gguf`
412
+ - **Path:** `C:\models\LFM2.5-1.2B-Instruct-GGUF\LFM2.5-1.2B-Instruct-Q8_0.gguf`
413
+ - **Format:** `gguf`
414
+ - **File size:** `1.16 GiB` (1,246,253,888 bytes)
415
+ - **SHA256:** `f6b981dcb86917fa463f78a362320bd5e2dc45445df147287eedb85e5a30d26a`
416
+ - **Architecture:** `lfm2`
417
+ - **Parameters:** `1,170,340,608` (1.17B)
418
+ - **Size label:** `1.2B`
419
+ - **Quantization:** `Q5_1`
420
+ - **Context length:** `128,000`
421
+ - **Vocabulary size:** `65,536`
422
+
423
+ ## Model metadata
424
+
425
+ - **License:** `lfm1.0`
426
+ - **License reference:** `LICENSE`
427
+ - **Languages:** `en`, `ar`, `zh`, `fr`, `de`, `ja`, `ko`, `es`
428
+ - **Tags:** `liquid`, `lfm2.5`, `edge`, `text-generation`, `gguf`, `lfm2`, `q5_1`
429
+ Generated with L-BOM. Contribute to L-BOM 💘 development: https://github.com/CHKDSKLabs/l-bom
430
+ ```
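The front-matter block above uses a simple quoted style with a YAML-style list for tags. A sketch of how such a block can be assembled (this mirrors the sample output; the module's actual `_render_front_matter` helper may format values differently):

```python
def render_front_matter(fields: dict[str, object]) -> str:
    """Render README front matter in the quoted style shown above.

    Strings and numbers are emitted as double-quoted scalars;
    lists become YAML-style sequences of quoted items.
    """
    lines = ["---"]
    for key, value in fields.items():
        if isinstance(value, list):
            lines.append(f"{key}:")
            lines.extend(f'- "{item}"' for item in value)
        else:
            lines.append(f'{key}: "{value}"')
    lines.append("---")
    return "\n".join(lines)
```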
431
+
432
+
433
+ ### License
434
 
435
  This project is licensed under the MIT License. See `LICENSE` for the full text.
index.html CHANGED
@@ -11,7 +11,12 @@
11
  <h1>GUI-BOM</h1>
12
  <p>GUI-BOM is the graphical user interface for L-BOM, a small Python CLI for inspecting local LLM model artifacts and generating a lightweight software bill of materials.</p>
13
  <p>It works with files such as <code>.gguf</code> and <code>.safetensors</code> and reports file identity, format details, model metadata, and parsing warnings.</p>
14
- <p>Install it locally with the included .bat file or via Docker, then visit <a href="http://127.0.0.1:7860" target="_blank">http://127.0.0.1:7860</a> to generate output in JSON, SPDX, or table format.</p>
15
  <p>Source, usage examples, and development details are available on <a href="https://github.com/CHKDSKLabs/gui-bom" target="_blank" rel="noreferrer">GitHub</a>.</p>
16
  </div>
17
  </body>
 
11
  <h1>GUI-BOM</h1>
12
  <p>GUI-BOM is the graphical user interface for L-BOM, a small Python CLI for inspecting local LLM model artifacts and generating a lightweight software bill of materials.</p>
13
  <p>It works with files such as <code>.gguf</code> and <code>.safetensors</code> and reports file identity, format details, model metadata, and parsing warnings.</p>
14
+ <p>Install it locally with the included .bat file or via Docker, then visit <a href="http://127.0.0.1:7860" target="_blank">http://127.0.0.1:7860</a> to generate output in JSON, SPDX, Hugging Face README, or table format.</p>
15
+ <div class="link-grid" aria-label="Project links">
16
+ <a class="link-chip" href="https://huggingface.co/spaces/CHK-DSK-Labs/gui-bom/tree/main" target="_blank" rel="noreferrer">Files</a>
17
+ <a class="link-chip" href="https://huggingface.co/spaces/CHK-DSK-Labs/gui-bom/discussions" target="_blank" rel="noreferrer">Community</a>
18
+ <a class="link-chip" href="https://huggingface.co/CHK-DSK-Labs" target="_blank" rel="noreferrer">CHK-DSK Labs</a>
19
+ </div>
20
  <p>Source, usage examples, and development details are available on <a href="https://github.com/CHKDSKLabs/gui-bom" target="_blank" rel="noreferrer">GitHub</a>.</p>
21
  </div>
22
  </body>
llm_sbom/__init__.py CHANGED
@@ -1,4 +1,4 @@
1
  """Package metadata for L-BOM."""
2
 
3
- __version__ = "0.1.0"
4
 
 
1
  """Package metadata for L-BOM."""
2
 
3
+ __version__ = "0.2.0"
4
 
llm_sbom/cli.py CHANGED
@@ -10,8 +10,9 @@ from . import __version__
10
  from .output import render_output
11
  from .scanner import scan_path
12
 
 
13
 
14
- @click.group(help="Generate a Software Bill of Materials for local LLM model files.")
15
  def main() -> None:
16
  pass
17
 
@@ -20,7 +21,7 @@ def main() -> None:
20
  @click.option(
21
  "--format",
22
  "output_format",
23
- type=click.Choice(["json", "spdx", "table"], case_sensitive=False),
24
  default="json",
25
  show_default=True,
26
  help="Choose the output format.",
@@ -35,10 +36,64 @@ def main() -> None:
35
  is_flag=True,
36
  help="Skip SHA256 computation for very large model files.",
37
  )
38
- def scan(path: Path, output_format: str, output: Path | None, no_hash: bool) -> None:
39
 
40
  documents = scan_path(path, compute_hash=not no_hash)
41
- rendered = render_output(documents, output_format, color=output is None)
42
 
43
  if output is not None:
44
  write_output_file(output, rendered)
@@ -46,7 +101,7 @@ def scan(path: Path, output_format: str, output: Path | None, no_hash: bool) ->
46
 
47
  if rendered.endswith("\n"):
48
  click.echo(rendered, nl=False, color=True)
49
- click.echo("Contribute to L-BOM 💘 development: https://github.com/CHKDSKLabs/l-bom", color=True)
50
  else:
51
  click.echo(rendered, color=True)
52
 
 
10
  from .output import render_output
11
  from .scanner import scan_path
12
 
13
+ from .huggingface import HuggingFaceReadmeOptions, render_huggingface_readme
14
 
15
+ @click.group(help="Generate a Software Bill of Materials for local LLM model files.", epilog="Use --help with any command (for example, l-bom scan --help) for additional information and parameters.")
16
  def main() -> None:
17
  pass
18
 
 
21
  @click.option(
22
  "--format",
23
  "output_format",
24
+ type=click.Choice(["json", "spdx", "table", "hf-readme"], case_sensitive=False),
25
  default="json",
26
  show_default=True,
27
  help="Choose the output format.",
 
36
  is_flag=True,
37
  help="Skip SHA256 computation for very large model files.",
38
  )
39
+ @click.option(
40
+ "--hf-title",
41
+ type=str,
42
+ help="Override the inferred title when exporting a Hugging Face README.",
43
+ )
44
+ @click.option(
45
+ "--hf-sdk",
46
+ type=click.Choice(["gradio", "docker", "static"], case_sensitive=False),
47
+ help="Add the Space SDK field to a Hugging Face README export.",
48
+ )
49
+ @click.option(
50
+ "--hf-app-file",
51
+ type=str,
52
+ help="Add the app_file field to a Hugging Face README export.",
53
+ )
54
+ @click.option(
55
+ "--hf-app-port",
56
+ type=int,
57
+ help="Add the app_port field to a Hugging Face README export.",
58
+ )
59
+ @click.option(
60
+ "--hf-short-description",
61
+ type=str,
62
+ help="Override the inferred short_description in a Hugging Face README export.",
63
+ )
64
+ def scan(
65
+ path: Path,
66
+ output_format: str,
67
+ output: Path | None,
68
+ no_hash: bool,
69
+ hf_title: str | None,
70
+ hf_sdk: str | None,
71
+ hf_app_file: str | None,
72
+ hf_app_port: int | None,
73
+ hf_short_description: str | None,
74
+ ) -> None:
75
 
76
  documents = scan_path(path, compute_hash=not no_hash)
77
+ if output_format.lower() == "hf-readme":
78
+ if len(documents) != 1:
79
+ raise click.ClickException(
80
+ "Hugging Face README export currently supports scanning a single model file at a time."
81
+ )
82
+ if hf_app_port is not None and hf_sdk not in {None, "docker"}:
83
+ raise click.ClickException("--hf-app-port can only be used with --hf-sdk docker.")
84
+
85
+ rendered = render_huggingface_readme(
86
+ documents[0],
87
+ HuggingFaceReadmeOptions(
88
+ title=hf_title,
89
+ sdk=hf_sdk.lower() if hf_sdk else None,
90
+ app_file=hf_app_file,
91
+ app_port=hf_app_port,
92
+ short_description=hf_short_description,
93
+ ),
94
+ )
95
+ else:
96
+ rendered = render_output(documents, output_format, color=output is None)
97
 
98
  if output is not None:
99
  write_output_file(output, rendered)
 
101
 
102
  if rendered.endswith("\n"):
103
  click.echo(rendered, nl=False, color=True)
104
+ click.echo("Generated with L-BOM: SBOM generator for gguf and safetensors files: https://github.com/CHKDSKLabs/l-bom", color=False)
105
  else:
106
  click.echo(rendered, color=True)
107
 
llm_sbom/gui.py CHANGED
@@ -13,11 +13,12 @@ from typing import Any, TypedDict
13
  from flask import Flask, jsonify, render_template, request
14
 
15
  from . import __version__
 
16
  from .output import render_output
17
  from .scanner import MODEL_SUFFIXES, scan_path
18
  from .schema import SBOMDocument
19
 
20
- OUTPUT_FORMATS = {"json", "spdx", "table"}
21
 
22
 
23
  class DirectoryEntry(TypedDict):
@@ -94,7 +95,7 @@ def create_app() -> Flask:
94
  if not isinstance(raw_path, str) or not raw_path.strip():
95
  return _error("A model file or directory path is required.", 400)
96
  if not isinstance(output_format, str) or output_format.lower() not in OUTPUT_FORMATS:
97
- return _error("Output format must be one of: json, spdx, table.", 400)
98
  if not isinstance(compute_hash, bool):
99
  return _error("The compute_hash value must be true or false.", 400)
100
 
@@ -109,7 +110,39 @@ def create_app() -> Flask:
109
 
110
  documents = scan_path(resolved, compute_hash=compute_hash)
111
  normalized_format = output_format.lower()
112
- rendered_output = render_output(documents, normalized_format, color=False)
113
 
114
  return jsonify(
115
  {
 
13
  from flask import Flask, jsonify, render_template, request
14
 
15
  from . import __version__
16
+ from .huggingface import HuggingFaceReadmeOptions, render_huggingface_readme
17
  from .output import render_output
18
  from .scanner import MODEL_SUFFIXES, scan_path
19
  from .schema import SBOMDocument
20
 
21
+ OUTPUT_FORMATS = {"json", "spdx", "table", "hf-readme"}
22
 
23
 
24
  class DirectoryEntry(TypedDict):
 
95
  if not isinstance(raw_path, str) or not raw_path.strip():
96
  return _error("A model file or directory path is required.", 400)
97
  if not isinstance(output_format, str) or output_format.lower() not in OUTPUT_FORMATS:
98
+ return _error("Output format must be one of: json, spdx, table, hf-readme.", 400)
99
  if not isinstance(compute_hash, bool):
100
  return _error("The compute_hash value must be true or false.", 400)
101
 
 
110
 
111
  documents = scan_path(resolved, compute_hash=compute_hash)
112
  normalized_format = output_format.lower()
113
+
114
+ if normalized_format == "hf-readme":
115
+ if len(documents) != 1:
116
+ return _error(
117
+ "Hugging Face README export currently supports scanning a single model file at a time.",
118
+ 400,
119
+ )
120
+
121
+ hf_title = payload.get("hf_title") or None
122
+ hf_sdk = payload.get("hf_sdk") or None
123
+ hf_app_file = payload.get("hf_app_file") or None
124
+ hf_app_port = payload.get("hf_app_port") or None
125
+ hf_short_description = payload.get("hf_short_description") or None
126
+
127
+ if hf_sdk and not isinstance(hf_sdk, str):
128
+ return _error("hf_sdk must be a string.", 400)
129
+ if hf_sdk and hf_sdk.lower() not in {"gradio", "docker", "static"}:
130
+ return _error("hf_sdk must be one of: gradio, docker, static.", 400)
131
+ if hf_app_port is not None and hf_sdk not in {None, "docker"}:
132
+ return _error("hf_app_port can only be used with hf_sdk docker.", 400)
133
+
134
+ rendered_output = render_huggingface_readme(
135
+ documents[0],
136
+ HuggingFaceReadmeOptions(
137
+ title=hf_title,
138
+ sdk=hf_sdk.lower() if hf_sdk else None,
139
+ app_file=hf_app_file,
140
+ app_port=int(hf_app_port) if hf_app_port is not None else None,
141
+ short_description=hf_short_description,
142
+ ),
143
+ )
144
+ else:
145
+ rendered_output = render_output(documents, normalized_format, color=False)
146
 
147
  return jsonify(
148
  {
llm_sbom/huggingface.py ADDED
@@ -0,0 +1,307 @@
1
+ """Helpers for exporting Hugging Face-friendly README metadata."""
2
+
3
+ from __future__ import annotations
4
+
5
+ import json
6
+ import re
7
+ from dataclasses import dataclass
8
+ from pathlib import Path
9
+ from typing import Any, TypeAlias
10
+
11
+ from .schema import SBOMDocument
12
+
13
+ FrontMatterValue: TypeAlias = str | int | bool | list[str]
14
+ FrontMatter: TypeAlias = dict[str, FrontMatterValue]
15
+
16
+
17
+ @dataclass(frozen=True)
18
+ class HuggingFaceReadmeOptions:
19
+ title: str | None = None
20
+ sdk: str | None = None
21
+ app_file: str | None = None
22
+ app_port: int | None = None
23
+ short_description: str | None = None
24
+
25
+
26
+ @dataclass(frozen=True)
27
+ class HuggingFaceReadmeDocument:
28
+ front_matter: FrontMatter
29
+ body: str
30
+
31
+
32
+ def build_huggingface_readme(
33
+ document: SBOMDocument, options: HuggingFaceReadmeOptions
34
+ ) -> HuggingFaceReadmeDocument:
35
+
36
+ title = options.title or _derive_title(document)
37
+ short_description = options.short_description or _derive_short_description(document, title)
38
+
39
+ front_matter: FrontMatter = {
40
+ "title": title,
41
+ "short_description": short_description,
42
+ }
43
+ if options.sdk:
44
+ front_matter["sdk"] = options.sdk
45
+ if options.app_file:
46
+ front_matter["app_file"] = options.app_file
47
+ if options.app_port is not None:
48
+ front_matter["app_port"] = options.app_port
49
+
50
+ models = _extract_models(document)
51
+ if models:
52
+ front_matter["models"] = models
53
+
54
+ tags = _extract_tags(document)
55
+ if tags:
56
+ front_matter["tags"] = tags
57
+
58
+ body = _render_readme_body(document, title, short_description)
59
+ return HuggingFaceReadmeDocument(front_matter=front_matter, body=body)
60
+
61
+
62
+ def render_huggingface_readme(document: SBOMDocument, options: HuggingFaceReadmeOptions) -> str:
63
+
64
+ readme = build_huggingface_readme(document, options)
65
+ return f"{_render_front_matter(readme.front_matter)}\n\n{readme.body}\n"
66
+
67
+
+ def _derive_title(document: SBOMDocument) -> str:
+
+     metadata_name = _metadata_string(document.metadata, "general.name")
+     if metadata_name and not _looks_like_hash(metadata_name):
+         return metadata_name
+     return Path(document.model_filename).stem
+
+
+ def _derive_short_description(document: SBOMDocument, title: str) -> str:
+
+     details: list[str] = []
+     if document.parameter_count is not None:
+         details.append(f"{_format_compact_number(document.parameter_count)} parameters")
+     size_label = _metadata_string(document.metadata, "general.size_label")
+     if size_label:
+         details.append(size_label)
+     if document.quantization:
+         details.append(document.quantization)
+     if document.context_length is not None:
+         details.append(f"{_format_compact_number(document.context_length)} context")
+
+     summary = f"{document.format.upper()} model artifact for {title}"
+     if details:
+         summary = f"{summary} with {', '.join(details)}"
+     return summary
+
+
+ def _extract_models(document: SBOMDocument) -> list[str]:
+
+     candidates = [
+         document.base_model,
+         _metadata_string(document.metadata, "base_model"),
+         _metadata_string(document.metadata, "base_model_repo_id"),
+         _metadata_string(document.metadata, "source_model"),
+     ]
+
+     models: list[str] = []
+     for candidate in candidates:
+         if candidate is None or "/" not in candidate:
+             continue
+         if candidate not in models:
+             models.append(candidate)
+     return models
+
+
+ def _extract_tags(document: SBOMDocument) -> list[str]:
+
+     tags: list[str] = []
+     metadata_tags = document.metadata.get("general.tags")
+     if isinstance(metadata_tags, list):
+         for tag in metadata_tags:
+             normalized = _normalize_tag(tag)
+             if normalized and normalized not in tags:
+                 tags.append(normalized)
+
+     for value in (document.format, document.architecture, document.quantization):
+         normalized = _normalize_tag(value)
+         if normalized and normalized not in tags:
+             tags.append(normalized)
+
+     return tags
+
+
+ def _render_readme_body(document: SBOMDocument, title: str, short_description: str) -> str:
+
+     lines = [
+         f"# {title}",
+         "",
+         short_description,
+         "",
+         "This README content was generated by `L-BOM` from a local model artifact.",
+         "",
+         "## Artifact details",
+         "",
+     ]
+
+     _append_bullet(lines, "Filename", f"`{document.model_filename}`")
+     _append_bullet(lines, "Path", f"`{document.model_path}`")
+     _append_bullet(lines, "Format", _code_or_none(document.format))
+     _append_bullet(lines, "File size", _format_file_size(document.file_size_bytes))
+     _append_bullet(lines, "SHA256", _code_or_none(document.sha256))
+     _append_bullet(lines, "Architecture", _code_or_none(document.architecture))
+     _append_bullet(lines, "Parameters", _format_parameter_count(document.parameter_count))
+     _append_bullet(lines, "Size label", _code_or_none(_metadata_string(document.metadata, "general.size_label")))
+     _append_bullet(lines, "Quantization", _code_or_none(document.quantization))
+     _append_bullet(lines, "Dtype", _code_or_none(document.dtype))
+     _append_bullet(lines, "Context length", _format_optional_int(document.context_length))
+     _append_bullet(lines, "Vocabulary size", _format_optional_int(document.vocab_size))
+
+     metadata_lines: list[str] = []
+     license_name = _extract_license_name(document)
+     license_link = _metadata_string(document.metadata, "general.license.link")
+     _append_bullet(metadata_lines, "License", _code_or_none(license_name))
+     _append_bullet(metadata_lines, "License reference", _code_or_none(license_link))
+     _append_bullet(metadata_lines, "Base model", _code_or_none(document.base_model))
+     _append_bullet(metadata_lines, "Training framework", _code_or_none(document.training_framework))
+
+     languages = _extract_languages(document)
+     if languages:
+         _append_bullet(metadata_lines, "Languages", ", ".join(f"`{language}`" for language in languages))
+
+     tags = _extract_tags(document)
+     if tags:
+         _append_bullet(metadata_lines, "Tags", ", ".join(f"`{tag}`" for tag in tags))
+
+     if metadata_lines:
+         lines.extend(["", "## Model metadata", ""])
+         lines.extend(metadata_lines)
+
+     if document.warnings:
+         lines.extend(["", "## Warnings", ""])
+         for warning in document.warnings:
+             lines.append(f"- {warning}")
+
+     return "\n".join(lines)
+
+
+ def _extract_license_name(document: SBOMDocument) -> str | None:
+
+     return (
+         document.license
+         or _metadata_string(document.metadata, "general.license.name")
+         or _metadata_string(document.metadata, "general.license")
+     )
+
+
+ def _extract_languages(document: SBOMDocument) -> list[str]:
+
+     raw_languages = document.metadata.get("general.languages")
+     if not isinstance(raw_languages, list):
+         return []
+
+     languages: list[str] = []
+     for language in raw_languages:
+         if not isinstance(language, str):
+             continue
+         normalized = language.strip()
+         if normalized and normalized not in languages:
+             languages.append(normalized)
+     return languages
+
+
+ def _append_bullet(lines: list[str], label: str, value: str | None) -> None:
+
+     if value is None:
+         return
+     lines.append(f"- **{label}:** {value}")
+
+
+ def _render_front_matter(front_matter: FrontMatter) -> str:
+
+     lines = ["---"]
+     for key, value in front_matter.items():
+         if isinstance(value, list):
+             lines.append(f"{key}:")
+             for item in value:
+                 lines.append(f"  - {_yaml_scalar(item)}")
+             continue
+         lines.append(f"{key}: {_yaml_scalar(value)}")
+     lines.append("---")
+     return "\n".join(lines)
+
+
+ def _yaml_scalar(value: str | int | bool) -> str:
+
+     if isinstance(value, bool):
+         return "true" if value else "false"
+     if isinstance(value, int):
+         return str(value)
+     return json.dumps(value, ensure_ascii=False)
+
+
+ def _metadata_string(metadata: dict[str, Any], key: str) -> str | None:
+
+     value = metadata.get(key)
+     if not isinstance(value, str):
+         return None
+     stripped = value.strip()
+     return stripped or None
+
+
+ def _normalize_tag(value: str | None) -> str | None:
+
+     if value is None:
+         return None
+     normalized = value.strip().lower().replace(" ", "-")
+     return normalized or None
+
+
+ def _code_or_none(value: str | None) -> str | None:
+
+     if value is None:
+         return None
+     return f"`{value}`"
+
+
+ def _format_optional_int(value: int | None) -> str | None:
+
+     if value is None:
+         return None
+     return f"`{value:,}`"
+
+
+ def _format_parameter_count(value: int | None) -> str | None:
+
+     if value is None:
+         return None
+     return f"`{value:,}` ({_format_compact_number(value)})"
+
+
+ def _format_compact_number(value: int) -> str:
+
+     thresholds = (
+         (1_000_000_000, "B"),
+         (1_000_000, "M"),
+         (1_000, "K"),
+     )
+     for threshold, suffix in thresholds:
+         if value >= threshold:
+             return f"{value / threshold:.2f}{suffix}"
+     return str(value)
+
+
+ def _format_file_size(size_bytes: int) -> str:
+
+     if size_bytes < 1024:
+         return f"`{size_bytes} B`"
+
+     units = ("KiB", "MiB", "GiB", "TiB")
+     value = float(size_bytes)
+     for unit in units:
+         value /= 1024
+         if value < 1024 or unit == units[-1]:
+             return f"`{value:.2f} {unit}` ({size_bytes:,} bytes)"
+     return f"`{size_bytes:,} bytes`"
+
+
+ def _looks_like_hash(value: str) -> bool:
+
+     return bool(re.fullmatch(r"[0-9a-fA-F]{16,}", value))
llm_sbom/web/static/app.css CHANGED
@@ -468,6 +468,41 @@ select:focus,
   max-height: 740px;
 }
 
+ .hf-options {
+   margin-top: 20px;
+   padding: 18px 20px;
+   border-radius: 20px;
+   background: rgba(255, 255, 255, 0.03);
+   border: 1px solid rgba(154, 178, 255, 0.12);
+ }
+
+ .hf-options-note {
+   margin: 6px 0 16px;
+   color: var(--muted);
+   font-size: 0.88rem;
+   line-height: 1.5;
+ }
+
+ .hf-fields {
+   display: grid;
+   grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+   gap: 14px;
+ }
+
+ input[type="number"] {
+   width: 100%;
+   padding: 14px 16px;
+   border-radius: 16px;
+   color: var(--text);
+   background: var(--card-strong);
+   border: 1px solid rgba(154, 178, 255, 0.12);
+ }
+
+ input[type="number"]:focus {
+   outline: 2px solid rgba(150, 240, 255, 0.85);
+   outline-offset: 2px;
+ }
+
 @media (max-width: 1120px) {
   .controls,
   .browser {
llm_sbom/web/static/app.js CHANGED
@@ -33,6 +33,12 @@ const elements = {
   tabButtons: Array.from(document.querySelectorAll("[data-tab]")),
   overviewTab: document.querySelector("#overview-tab"),
   rawTab: document.querySelector("#raw-tab"),
+  hfOptions: document.querySelector("#hf-options"),
+  hfTitle: document.querySelector("#hf-title"),
+  hfShortDescription: document.querySelector("#hf-short-description"),
+  hfSdk: document.querySelector("#hf-sdk"),
+  hfAppFile: document.querySelector("#hf-app-file"),
+  hfAppPort: document.querySelector("#hf-app-port"),
 };
 
 boot().catch((error) => {
@@ -112,12 +118,14 @@ function bindEvents() {
       return;
     }
 
-    const extension = state.currentFormat === "json" ? "json" : "txt";
+    const filename = state.currentFormat === "hf-readme"
+      ? "README.md"
+      : `L-BOM.${state.currentFormat === "json" ? "json" : "txt"}`;
     const blob = new Blob([state.renderedOutput], { type: "text/plain;charset=utf-8" });
     const url = URL.createObjectURL(blob);
     const link = document.createElement("a");
     link.href = url;
-    link.download = `L-BOM.${extension}`;
+    link.download = filename;
     document.body.append(link);
     link.click();
     link.remove();
@@ -128,6 +136,7 @@ function bindEvents() {
     button.addEventListener("click", () => {
       state.currentFormat = button.dataset.format;
       syncFormatButtons();
+      elements.hfOptions.hidden = state.currentFormat !== "hf-readme";
    });
  }
 
@@ -188,16 +197,32 @@ async function runScan() {
   setBusy(true);
 
   try {
+    const body = {
+      path,
+      format: state.currentFormat,
+      compute_hash: elements.hashToggle.checked,
+    };
+
+    if (state.currentFormat === "hf-readme") {
+      const hfTitle = elements.hfTitle.value.trim();
+      const hfShortDescription = elements.hfShortDescription.value.trim();
+      const hfSdk = elements.hfSdk.value;
+      const hfAppFile = elements.hfAppFile.value.trim();
+      const hfAppPortRaw = elements.hfAppPort.value.trim();
+
+      if (hfTitle) body.hf_title = hfTitle;
+      if (hfShortDescription) body.hf_short_description = hfShortDescription;
+      if (hfSdk) body.hf_sdk = hfSdk;
+      if (hfAppFile) body.hf_app_file = hfAppFile;
+      if (hfAppPortRaw) body.hf_app_port = parseInt(hfAppPortRaw, 10);
+    }
+
     const payload = await fetchJson("/api/scan", {
       method: "POST",
       headers: {
         "Content-Type": "application/json",
       },
-      body: JSON.stringify({
-        path,
-        format: state.currentFormat,
-        compute_hash: elements.hashToggle.checked,
-      }),
+      body: JSON.stringify(body),
     });
 
     state.selectedPath = payload.selected_path;
llm_sbom/web/templates/index.html CHANGED
@@ -51,6 +51,7 @@
   <button class="segment active" data-format="json" type="button">JSON</button>
   <button class="segment" data-format="spdx" type="button">SPDX</button>
   <button class="segment" data-format="table" type="button">Table</button>
+  <button class="segment" data-format="hf-readme" type="button">HF Readme</button>
 </div>
 </div>
 
@@ -66,6 +67,38 @@
   <button id="copy-path" class="ghost-button" type="button">Copy path</button>
 </div>
 
+ <div id="hf-options" class="hf-options" hidden>
+   <p class="eyebrow">Hugging Face README options</p>
+   <p class="hf-options-note">HF Readme only supports a single model file. All fields below are optional overrides.</p>
+   <div class="hf-fields">
+     <label class="field">
+       <span class="field-label">Title</span>
+       <input id="hf-title" type="text" placeholder="Inferred from model metadata" autocomplete="off">
+     </label>
+     <label class="field">
+       <span class="field-label">Short description</span>
+       <input id="hf-short-description" type="text" placeholder="Inferred from model metadata" autocomplete="off">
+     </label>
+     <label class="field">
+       <span class="field-label">SDK</span>
+       <select id="hf-sdk">
+         <option value="">None</option>
+         <option value="gradio">gradio</option>
+         <option value="docker">docker</option>
+         <option value="static">static</option>
+       </select>
+     </label>
+     <label class="field">
+       <span class="field-label">App file</span>
+       <input id="hf-app-file" type="text" placeholder="e.g. app.py or index.html" autocomplete="off">
+     </label>
+     <label class="field">
+       <span class="field-label">App port</span>
+       <input id="hf-app-port" type="number" placeholder="Docker only" min="1" max="65535" autocomplete="off">
+     </label>
+   </div>
+ </div>
+
 <div id="message" class="message" hidden></div>
 </section>
pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "L-BOM"
-version = "0.1.0"
+version = "0.2.0"
 description = "Generate Software Bill of Materials documents for local LLM model files."
 readme = "README.md"
 requires-python = ">=3.10"
style.css CHANGED
@@ -46,6 +46,48 @@ p {
   color: #cbd5e1;
 }
 
+ .link-grid {
+   display: flex;
+   flex-wrap: wrap;
+   gap: 12px;
+   margin: 8px 0 20px;
+ }
+
+ .link-chip {
+   display: inline-flex;
+   align-items: center;
+   justify-content: center;
+   min-height: 48px;
+   padding: 0.85rem 1.2rem;
+   border: 1px solid rgba(125, 211, 252, 0.38);
+   border-radius: 14px;
+   font-weight: 600;
+   letter-spacing: 0.01em;
+   color: #f8fafc;
+   background: linear-gradient(180deg, rgba(14, 165, 233, 0.34), rgba(8, 47, 73, 0.88));
+   box-shadow:
+     0 10px 24px rgba(8, 47, 73, 0.32),
+     inset 0 1px 0 rgba(255, 255, 255, 0.12),
+     inset 0 -1px 0 rgba(12, 74, 110, 0.4);
+   transition: background-color 160ms ease, border-color 160ms ease, color 160ms ease, transform 160ms ease;
+ }
+
+ .link-chip:hover,
+ .link-chip:focus-visible {
+   border-color: rgba(186, 230, 253, 0.65);
+   background: linear-gradient(180deg, rgba(56, 189, 248, 0.46), rgba(12, 74, 110, 0.95));
+   color: #f8fafc;
+   text-decoration: none;
+   transform: translateY(-2px);
+ }
+
+ .link-chip:active {
+   transform: translateY(0);
+   box-shadow:
+     0 6px 14px rgba(8, 47, 73, 0.28),
+     inset 0 1px 0 rgba(255, 255, 255, 0.08);
+ }
+
 .card p:last-child {
   margin-bottom: 0;
 }
@@ -80,4 +122,12 @@ a:focus-visible {
   padding: 24px 20px;
   border-radius: 20px;
 }
+
+ .link-grid {
+   gap: 10px;
+ }
+
+ .link-chip {
+   width: 100%;
+ }
 }
tests/test_cli.py CHANGED
@@ -1,8 +1,12 @@
+from typing import Any
+
 from click.testing import CliRunner
 
 import llm_sbom.gui as gui_module
 from llm_sbom import __version__
 from llm_sbom.cli import main
+from llm_sbom.huggingface import HuggingFaceReadmeOptions, render_huggingface_readme
+from llm_sbom.schema import SBOMDocument
 
 
 def test_version_command_prints_package_version() -> None:
@@ -24,6 +28,101 @@ def test_scan_empty_directory_table_output() -> None:
     assert "No model files found." in result.output
 
 
+def test_render_hf_readme_infers_curated_metadata() -> None:
+    document = _sample_document()
+
+    rendered = render_huggingface_readme(document, HuggingFaceReadmeOptions())
+
+    assert 'title: "LFM2.5-1.2B-Instruct-Q8_0"' in rendered
+    assert "tags:" in rendered
+    assert ' - "liquid"' in rendered
+    assert ' - "gguf"' in rendered
+    assert "This README content was generated by `L-BOM`" in rendered
+    assert "- **License:** `lfm1.0`" in rendered
+    assert "- **Languages:** `en`, `fr`" in rendered
+    assert "tokenizer.chat_template" not in rendered
+
+
+def test_render_hf_readme_honors_overrides() -> None:
+    document = _sample_document()
+
+    rendered = render_huggingface_readme(
+        document,
+        HuggingFaceReadmeOptions(
+            title="LocalFusion Demo",
+            sdk="static",
+            app_file="index.html",
+            short_description="Custom summary",
+        ),
+    )
+
+    assert 'title: "LocalFusion Demo"' in rendered
+    assert 'sdk: "static"' in rendered
+    assert 'app_file: "index.html"' in rendered
+    assert 'short_description: "Custom summary"' in rendered
+    assert "# LocalFusion Demo" in rendered
+
+
+def test_render_hf_readme_includes_warnings() -> None:
+    document = _sample_document(warnings=["Quantization may need review."])
+
+    rendered = render_huggingface_readme(document, HuggingFaceReadmeOptions())
+
+    assert "## Warnings" in rendered
+    assert "- Quantization may need review." in rendered
+
+
+def test_scan_hf_readme_rejects_multi_model_directory() -> None:
+    runner = CliRunner()
+
+    with runner.isolated_filesystem():
+        with open("one.gguf", "wb") as handle:
+            handle.write(b"bad")
+        with open("two.gguf", "wb") as handle:
+            handle.write(b"bad")
+
+        result = runner.invoke(main, ["scan", ".", "--format", "hf-readme"])
+
+    assert result.exit_code != 0
+    assert "single model file at a time" in result.output
+
+
+def _sample_document(**overrides: object) -> SBOMDocument:
+    values: dict[str, Any] = {
+        "sbom_version": "1.0",
+        "generated_at": "2026-03-25T04:07:53.262551+00:00",
+        "tool_name": "l-bom",
+        "tool_version": "0.2.0",
+        "model_path": r"C:\models\LFM2.5-1.2B-Instruct-GGUF\LFM2.5-1.2B-Instruct-Q8_0.gguf",
+        "model_filename": "LFM2.5-1.2B-Instruct-Q8_0.gguf",
+        "file_size_bytes": 1246253888,
+        "sha256": "f6b981dcb86917fa463f78a362320bd5e2dc45445df147287eedb85e5a30d26a",
+        "format": "gguf",
+        "architecture": "lfm2",
+        "parameter_count": 1170340608,
+        "quantization": "Q5_1",
+        "dtype": None,
+        "context_length": 128000,
+        "vocab_size": 65536,
+        "license": None,
+        "base_model": None,
+        "training_framework": None,
+        "metadata": {
+            "general.name": "4cd563d5a96af9e7c738b76cd89a0a200db7608f",
+            "general.size_label": "1.2B",
+            "general.license": "other",
+            "general.license.name": "lfm1.0",
+            "general.license.link": "LICENSE",
+            "general.tags": ["liquid", "text-generation"],
+            "general.languages": ["en", "fr"],
+            "tokenizer.chat_template": "huge value that should not leak into the README",
+        },
+        "warnings": [],
+    }
+    values.update(overrides)
+    return SBOMDocument(**values)
+
+
 def test_gui_command_dispatches_to_server_launcher(monkeypatch) -> None:
     runner = CliRunner()
     captured: dict[str, object] = {}