Refine LLM prompt for semantic-vector chunk optimization.
Update optimizer prompts to enforce subject preservation and sentence-by-sentence meaning retention for rewrites, allow exact or distributed core-term usage for BERT goals, and collect a one-line rationale from the model. Surface the rationale in the debug UI and synchronize the full documentation.
Made-with: Cursor
- docs/FULL_FUNCTIONAL_DOCUMENTATION.md +6 -1
- optimizer.py +46 -5
- templates/index.html +2 -0
docs/FULL_FUNCTIONAL_DOCUMENTATION.md
CHANGED
@@ -424,9 +424,12 @@ HTML extraction pipeline:
 
 ### Candidate generation
 - `_llm_edit_chunk` — sends a structured prompt to an OpenAI-compatible API.
+  - the model's role in the prompt: **semantic-vector optimizer for SEO**, not a generic "copy editor".
   - takes `cascade_level` and the operation type (`rewrite`/`insert`) into account
   - explicitly requires grammatically correct, natural text
   - limits the number of sentences per cascade level
+  - for BERT it allows two valid schemes: the exact phrase used once **or** natural distributed use of the core terms (`mbit`, `alternatives`) within a single paragraph.
+  - for `rewrite` it explicitly requires preserving the original meaning sentence-by-sentence and not changing the subject/key entity unless necessary.
 
 ### Applying edits
 - `_replace_span` — replaces a span of sentences.

@@ -448,6 +451,7 @@ HTML extraction pipeline:
 3. select the chunk pool and the cascade operation.
   - several span candidates are selected per step (multi-chunk selection), not just one;
   - ranking takes `focus_terms/avoid_terms`, chunk-level relevance, and noise heuristics (menu/CTA/header penalties) into account;
+  - for BERT goals, ranking is not limited to spans that already contain occurrences: relevant spans with underrepresented core terms, where the terms can be added naturally, are additionally prioritized;
   - an `attempt_cursor` per goal plus `attempted_spans` are used to avoid looping over the same span.
 4. generate `N` candidates for each selected span.
 5. pre-validation (format/quality/lengths).

@@ -472,7 +476,8 @@ HTML extraction pipeline:
   - `L4`: a wider rewrite window (up to 5 sentences with variable coverage).
 11. keep a detailed log for each candidate.
   - the debug table records chunk-level signals (`local+`, `chunk Δ`, `rel before->after`) alongside the global ones (`Δ score`, `valid`, `goal+`);
-  - for each candidate, `llm_prompt_debug` is stored (operation, goal, focus terms, chunk, and immediate context), which makes it possible to analyze the actual LLM input
+  - for each candidate, `llm_prompt_debug` is stored (operation, goal, focus terms, chunk, and immediate context), which makes it possible to analyze the actual LLM input;
+  - the LLM returns a `rationale` field (one line) — a short explanation of why the edit should increase relevance to the goal.
   - `metrics_delta` is also stored (the contribution of BM25/BERT/Semantic/N-gram/Title to the overall shift), including `semantic_gap_sum` and changes in the gap-term set (`semantic_gap_terms_added/removed`), to show what makes the `score` fall or rise.
 
 ---
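The two BERT usage schemes documented above (exact phrase once, or natural distributed use of the core terms) can be checked mechanically. A minimal sketch of such a check — `satisfies_bert_scheme` is a hypothetical helper, not part of optimizer.py, and assumes core terms are plain lowercase tokens:

```python
import re

def satisfies_bert_scheme(paragraph: str, goal_phrase: str, core_terms: list[str]) -> bool:
    """Accept either (a) the exact goal phrase used exactly once, or
    (b) every core term appearing somewhere in the paragraph."""
    text = paragraph.lower()
    # Scheme (a): exact phrase, exactly once (more than once risks keyword stuffing).
    if text.count(goal_phrase.lower()) == 1:
        return True
    # Scheme (b): distributed use — each core term present as a whole word.
    return all(re.search(rf"\b{re.escape(t.lower())}\b", text) for t in core_terms)

# Distributed scheme: both core terms appear, but not as the exact phrase.
print(satisfies_bert_scheme(
    "Mbit rates vary, so comparing alternatives side by side helps.",
    "mbit alternatives", ["mbit", "alternatives"],
))  # → True
```

A paragraph containing neither the phrase nor all core terms fails both schemes, which is exactly the case the ranking change above tries to surface as an editing opportunity.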
optimizer.py
CHANGED
@@ -376,6 +376,7 @@ def _rank_sentence_indices(
         return [0]
     stop = STOP_WORDS.get(language, STOP_WORDS["en"])
     focus = [x for x in focus_terms if x and x not in stop]
+    goal_phrase = (goal_label or "").strip().lower()
     avoid = [x for x in avoid_terms if x]
     center = (len(sentences) - 1) / 2.0

@@ -394,8 +395,30 @@ def _rank_sentence_indices(
         avoid_score = sum(lower.count(t.lower()) for t in avoid)
         chunk_rel = _chunk_goal_relevance(s, goal_type, goal_label, focus_terms, language)
         noise_penalty = 1.0 if _is_noise_like_sentence(s) else 0.0
+
+        # For BERT goals, do not over-focus on existing occurrences only:
+        # prioritize semantically relevant chunks where phrase/terms may still be underrepresented.
+        if goal_type == "bert":
+            tokenized = _filter_stopwords(_tokenize(lower), language)
+            token_set = set(tokenized)
+            core_terms = [t.lower() for t in focus if t]
+            core_hits = sum(1 for t in core_terms if t in token_set)
+            coverage = (core_hits / max(1, len(core_terms))) if core_terms else 0.0
+            phrase_present = 1.0 if (goal_phrase and goal_phrase in lower) else 0.0
+            # Boost candidates where semantic context is relevant but explicit core terms are not saturated yet.
+            missing_term_boost = (1.0 - coverage) * 1.4 if chunk_rel >= 0.18 else 0.0
+            phrase_absent_boost = 0.35 if (goal_phrase and not phrase_present and chunk_rel >= 0.2) else 0.0
+            score = (
+                (chunk_rel * 5.0)
+                + (focus_score * 1.2)
+                + (missing_term_boost + phrase_absent_boost)
+                + (avoid_score * 1.5)
+                - (noise_penalty * 3.0)
+                - (abs(idx - center) * 0.05)
+            )
+        else:
+            # Prefer semantically relevant and lexical matches; push noisy headers/CTA lower.
+            score = (chunk_rel * 4.0) + (focus_score * 3.0) + (avoid_score * 2.0) - (noise_penalty * 3.0) - (abs(idx - center) * 0.05)
         scored.append((idx, score, len(s)))

     scored.sort(key=lambda x: (x[1], -x[2]), reverse=True)

@@ -666,8 +689,10 @@ def _llm_edit_chunk(
     endpoint = base_url.rstrip("/") + "/chat/completions"
     op = operation if operation in {"rewrite", "insert"} else "rewrite"
     system_msg = (
-        "You are
-        "
+        "You are a semantic-vector optimizer for SEO tasks. "
+        "Your task is to improve chunk relevance to the focus terms/goal phrase with minimal local edits. "
+        "Preserve narrative flow, factual tone, and language. "
+        "Return strict JSON only: {\"edited_text\": \"...\", \"rationale\": \"...\"}. "
         "Do not rewrite the whole text. Never change topic or introduce unrelated entities."
     )
    op_instruction = (

@@ -686,6 +711,7 @@ def _llm_edit_chunk(
         "Text must be grammatically correct and natural for native readers.\n"
         "Keep edits tightly local to the provided chunk and immediate context only.\n"
         "Edit must be substantive (not just synonyms) and should increase relevance to the goal phrase.\n"
+        "Do not change the sentence subject/entity focus unless absolutely required by grammar.\n"
         f"Focus terms to strengthen: {', '.join(focus_terms) if focus_terms else '-'}\n"
         f"Terms to de-emphasize/avoid overuse: {', '.join(avoid_terms) if avoid_terms else '-'}\n\n"
         f"Chunk to edit/expand:\n{chunk_text}\n\n"

@@ -697,7 +723,10 @@ def _llm_edit_chunk(
         f"3) Max {max_sent} sentence(s) in edited_text.\n"
         "4) Keep key named entities from the original chunk unchanged when possible.\n"
         "5) For BERT goal, improve semantic match to goal phrase without keyword stuffing.\n"
-        "6)
+        "6) For BERT goals you may use either: (a) exact phrase once, or (b) natural distributed use of core terms in one paragraph.\n"
+        "7) For rewrite: preserve original meaning sentence-by-sentence while improving relevance.\n"
+        "8) Provide rationale in one short sentence.\n"
+        "9) Only output JSON object."
     )
     payload = {
         "model": model,

@@ -719,12 +748,15 @@ def _llm_edit_chunk(
     )
     parsed = _extract_json_object(content)
     edited = ""
+    rationale = ""
     if parsed:
         edited = str(parsed.get("edited_text") or parsed.get("revised_sentence") or parsed.get("rewrite") or "").strip()
+        rationale = str(parsed.get("rationale") or parsed.get("why") or "").strip()
     if not edited:
         raise ValueError("LLM returned invalid JSON edit payload.")
     return {
         "edited_text": edited,
+        "rationale": rationale,
         "prompt_debug": {
             "operation": op,
             "cascade_level": cascade_level,

@@ -940,6 +972,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                temperature=temp,
            )
            edited_text = str((llm_result or {}).get("edited_text", "")).strip()
+           llm_rationale = str((llm_result or {}).get("rationale", "")).strip()
            prompt_debug = (llm_result or {}).get("prompt_debug", {})
            if not edited_text or edited_text == original_span_text:
                continue

@@ -980,6 +1013,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": after_rel,
                "term_diff": _term_diff(original_span_text, edited_text, language),
                "llm_prompt_debug": prompt_debug,
+               "llm_rationale": llm_rationale,
                "operation": operation,
                "sentence_index": sent_idx,
                "span_start": span_start,

@@ -1007,6 +1041,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": after_rel,
                "term_diff": _term_diff(original_span_text, edited_text, language),
                "llm_prompt_debug": prompt_debug,
+               "llm_rationale": llm_rationale,
                "operation": operation,
                "sentence_index": sent_idx,
                "span_start": span_start,

@@ -1058,6 +1093,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": after_rel,
                "term_diff": _term_diff(original_span_text, edited_text, language),
                "llm_prompt_debug": prompt_debug,
+               "llm_rationale": llm_rationale,
                "invalid_reasons": invalid_reasons,
                "delta_score": delta_score,
                "candidate_score": cand_metrics.get("score"),

@@ -1088,6 +1124,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                    "goal_type": goal.get("type"),
                    "goal_label": goal.get("label"),
                },
+               "llm_rationale": "",
                "operation": operation,
                "sentence_index": sent_idx,
                "span_start": span_start,

@@ -1178,6 +1215,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": c.get("chunk_relevance_after"),
                "term_diff": c.get("term_diff"),
                "llm_prompt_debug": c.get("llm_prompt_debug"),
+               "llm_rationale": c.get("llm_rationale"),
                "metrics_delta": c.get("metrics_delta"),
                "invalid_reasons": c.get("invalid_reasons", []),
                "delta_score": c.get("delta_score"),

@@ -1324,6 +1362,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": c.get("chunk_relevance_after"),
                "term_diff": c.get("term_diff"),
                "llm_prompt_debug": c.get("llm_prompt_debug"),
+               "llm_rationale": c.get("llm_rationale"),
                "metrics_delta": c.get("metrics_delta"),
                "invalid_reasons": c.get("invalid_reasons", []),
                "delta_score": c.get("delta_score"),

@@ -1372,6 +1411,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": c.get("chunk_relevance_after"),
                "term_diff": c.get("term_diff"),
                "llm_prompt_debug": c.get("llm_prompt_debug"),
+               "llm_rationale": c.get("llm_rationale"),
                "metrics_delta": c.get("metrics_delta"),
                "invalid_reasons": c.get("invalid_reasons", []),
                "delta_score": c.get("delta_score"),

@@ -1439,6 +1479,7 @@ def optimize_text(request_data: Dict[str, Any]) -> Dict[str, Any]:
                "chunk_relevance_after": c.get("chunk_relevance_after"),
                "term_diff": c.get("term_diff"),
                "llm_prompt_debug": c.get("llm_prompt_debug"),
+               "llm_rationale": c.get("llm_rationale"),
                "metrics_delta": c.get("metrics_delta"),
                "invalid_reasons": c.get("invalid_reasons", []),
                "delta_score": c.get("delta_score"),
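The BERT-branch boosts added to `_rank_sentence_indices` can be illustrated in isolation. A simplified, self-contained sketch of the same idea — a whitespace tokenizer stands in for the module's `_tokenize`/`_filter_stopwords`, the avoid/noise/position terms are dropped, and the thresholds (0.18, 0.2) and weights (1.4, 0.35, 5.0, 1.2) are copied from the diff:

```python
def bert_rank_score(sentence: str, goal_phrase: str, core_terms: list[str],
                    chunk_rel: float, focus_score: float) -> float:
    """Simplified BERT-goal ranking: boost relevant sentences where
    core terms are still underrepresented, instead of only rewarding
    sentences that already contain them."""
    lower = sentence.lower()
    # Crude tokenization stand-in for the module's _tokenize/_filter_stopwords.
    token_set = set(lower.replace(",", " ").replace(".", " ").split())
    terms = [t.lower() for t in core_terms]
    coverage = (sum(1 for t in terms if t in token_set) / max(1, len(terms))) if terms else 0.0
    phrase_present = bool(goal_phrase) and goal_phrase.lower() in lower
    # Boost when semantic context is relevant but explicit core terms are not saturated yet.
    missing_term_boost = (1.0 - coverage) * 1.4 if chunk_rel >= 0.18 else 0.0
    phrase_absent_boost = 0.35 if (goal_phrase and not phrase_present and chunk_rel >= 0.2) else 0.0
    return (chunk_rel * 5.0) + (focus_score * 1.2) + missing_term_boost + phrase_absent_boost

# At equal relevance, a sentence missing both core terms outranks one already saturated with them:
gap = bert_rank_score("Speed tiers differ between providers.",
                      "mbit alternatives", ["mbit", "alternatives"], 0.3, 0.0)
full = bert_rank_score("These mbit alternatives are mbit alternatives.",
                       "mbit alternatives", ["mbit", "alternatives"], 0.3, 0.0)
print(gap > full)  # → True
```

This mirrors the intent stated in the docs change: for BERT goals the optimizer should seek out spans where core terms can still be added naturally, rather than circling spans that already contain them.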
templates/index.html
CHANGED
@@ -860,6 +860,7 @@
      const termDiff = c.term_diff ? safeHtml(JSON.stringify(c.term_diff)) : '-';
      const metricDelta = c.metrics_delta ? safeHtml(JSON.stringify(c.metrics_delta)) : '-';
      const promptDbg = c.llm_prompt_debug ? safeHtml(JSON.stringify(c.llm_prompt_debug, null, 2)) : '-';
+     const rationale = c.llm_rationale ? safeHtml(c.llm_rationale) : '-';
      return `
        <tr>
          <td>${c.candidate_index ?? '-'}</td>

@@ -878,6 +879,7 @@
          </td>
          <td>
            <div style="max-width: 520px; white-space: normal;">${sentAfter}</div>
+           <div class="small text-muted mt-1">rationale: ${rationale}</div>
            <details class="mt-1">
              <summary class="small">LLM input</summary>
              <pre class="small mb-0" style="white-space: pre-wrap;">${promptDbg}</pre>
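Note that the template passes the model-provided `rationale` through `safeHtml` before interpolating it into the row, since it is untrusted LLM output. The same guard expressed in Python terms (a hypothetical mirror of the template logic, using stdlib `html.escape`):

```python
import html
from typing import Optional

def render_rationale_cell(llm_rationale: Optional[str]) -> str:
    """Escape the model-provided rationale before interpolating it
    into the debug-table markup; fall back to '-' when absent."""
    rationale = html.escape(llm_rationale) if llm_rationale else "-"
    return f'<div class="small text-muted mt-1">rationale: {rationale}</div>'

print(render_rationale_cell("improves <b>relevance</b>"))
# → <div class="small text-muted mt-1">rationale: improves &lt;b&gt;relevance&lt;/b&gt;</div>
```

Without this escaping, a rationale containing markup would be injected verbatim into the debug table.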