---
library_name: transformers
license: mit
base_model: openai-community/gpt2
language:
- en
tags:
- modular-intelligence
- text-generation
- structured-reasoning
- experimental
---

# Modular Intelligence (GPT-2 baseline)

This repository is an **experimental baseline** for **Modular Intelligence** built on top of `openai-community/gpt2`.

The goal is **not** to claim that GPT-2 is “intelligent”, but to show how a **small, simple model** can be wrapped inside a **modular reasoning architecture**:

- **Modules**: small, single-purpose “skills” (e.g. analysis note, strategy memo).
- **Checkers**: strict reviewers that check the output of a module.
- **Structured outputs**: fixed sections like CONTEXT / OPTIONS / RISKS / NEXT STEPS.

Later, this same architecture can be reused with much stronger models.
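Concretely, a module can be represented as a small data object: a name, the named inputs it expects, the labelled sections it must emit, and an optional paired checker. The sketch below is illustrative (the `Module` class and its field names are assumptions, not code shipped in this repo); the `strategy_memo_v1` contract matches the sections listed later in this card.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Module:
    """One single-purpose 'skill': a name plus the contract it must follow."""
    name: str
    inputs: List[str]              # named inputs the caller must provide
    sections: List[str]            # labelled output sections, in required order
    checker: Optional[str] = None  # paired checker module, if any

# Example: the Strategy Memo module described later in this card.
STRATEGY_MEMO = Module(
    name="strategy_memo_v1",
    inputs=["context", "objective", "constraints"],
    sections=["CONTEXT", "OBJECTIVE", "CONSTRAINTS", "OPTIONS",
              "RECOMMENDATION", "RISKS", "NEXT_ACTIONS"],
    checker="strategy_memo_checker_v1",
)
```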

---

## What this model is

- A **GPT-2 checkpoint** configured as the engine behind a **Modular Intelligence** framework.
- It is **not** heavily fine-tuned; it is used mainly to demonstrate:
  - Structured prompts
  - Module definitions
  - Checker patterns
  - Deterministic, repeatable formats

Think of this repo as:

> “The engine inside a modular reasoning system, using GPT-2 for a minimal, low-cost demo.”

---

## What’s different from base GPT-2?

Base GPT-2 is a generic text generator. Here, GPT-2 is wrapped in a **specific contract**:

1. **Fixed module types**
   For example:
   - `analysis_note_v1`
   - `document_explainer_v1`
   - `strategy_memo_v1`
   - `message_reply_v1`
   - `profile_application_v1`
   - `system_blueprint_v1`
   - `modular_brainstorm_v1`

2. **Fixed output sections**
   Each module must respond in a strict, labelled format. Example (Strategy Memo):
   - CONTEXT
   - OBJECTIVE
   - CONSTRAINTS
   - OPTIONS
   - RECOMMENDATION
   - RISKS
   - NEXT_ACTIONS

3. **Paired checkers**
   Certain modules have a checker module that:
   - Re-reads the original task
   - Reviews the draft output
   - Returns a verdict, a list of issues, and suggested fixes

4. **Use pattern**
   Instead of “just generating text”, you:
   - Call a **module** with structured inputs
   - Get a **structured output** back
   - Optionally call a **checker** on that output

So the “intelligence” here lives in the **architecture and prompts**, not in new weights.
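This contract is ultimately just prompt assembly. The helpers below are hypothetical (not an API shipped in this repo); they show the pattern: the module name, the structured inputs, and the required sections all appear verbatim in the prompt.

```python
def build_module_prompt(module_name, inputs, sections):
    """Assemble a generator prompt: module name, inputs, required sections."""
    lines = [f"MODULE: {module_name}", "", "INPUTS:"]
    for key, value in inputs.items():
        lines.append(f"- {key}: {value}")
    lines += ["", "Respond with exactly these sections, in this order:"]
    lines += [f"{section}:" for section in sections]
    return "\n".join(lines)

def build_checker_prompt(checker_name, original_task, draft_output):
    """Assemble a reviewer prompt from the task and the draft to inspect."""
    return "\n".join([
        f"MODULE: {checker_name}",
        "", "ORIGINAL_TASK:", original_task,
        "", "DRAFT_OUTPUT:", draft_output,
        "", "Return: VERDICT, issues found, and suggested fixes.",
    ])
```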

---

## Dataset

This repository **does not introduce** a new training dataset and **does not re-train** GPT-2.

- **Base model**: `openai-community/gpt2`
- **Training objective**: next-token prediction (causal language modeling)
- **Original GPT-2 pretraining data** (collected by OpenAI, not included here):
  - A large, general-domain English web corpus (“WebText”)
  - ~40 GB of text from web pages linked from Reddit posts with a score of 3 or more
  - Mixed content (news, blogs, forums, technical and non-technical)
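For reference, causal language modeling trains the model to predict each token from its predecessors: for a token sequence $x_1, \dots, x_T$ the loss is the summed negative log-likelihood

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_1, \dots, x_{t-1}\right)
$$

This is the standard GPT-2 objective; no additional training with it is performed in this repo.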

In this repository:

- GPT-2 is used **as-is** as the language engine.
- The **Modular Intelligence** behaviour comes from:
  - The **module prompts** (how we talk to the model)
  - The **checker prompts** (how we review its answers)
  - The **fixed output formats**

No new datasets are uploaded or used for further fine-tuning inside this repo.

---

## Modular Intelligence: modules and checkers (simple view)

### Generator modules

Each generator is a “skill” with a fixed format.

1. **Analysis Note (`analysis_note_v1`)**

   - **Inputs**:
     - `context` – short description of the situation or text
     - `questions` – what you want to understand
     - `constraints` – any limits (time, style, scope)

   - **Outputs (sections)**:
     - CONTEXT
     - QUESTIONS
     - FRAMEWORK
     - ANALYSIS
     - CONCLUSION
     - NEXT_STEPS

2. **Document Explainer (`document_explainer_v1`)**

   - **Inputs**:
     - `document_text`
     - `focus`
     - `audience`

   - **Outputs**:
     - SNAPSHOT
     - KEY_POINTS
     - STRUCTURE
     - DETAILED_EXPLANATION
     - IMPLICATIONS
     - ACTIONS

3. **Strategy Memo (`strategy_memo_v1`)**

   - **Inputs**:
     - `context`
     - `objective`
     - `constraints`

   - **Outputs**:
     - CONTEXT
     - OBJECTIVE
     - CONSTRAINTS
     - OPTIONS
     - RECOMMENDATION
     - RISKS
     - NEXT_ACTIONS

4. **Message / Post Reply (`message_reply_v1`)**

   - **Inputs**:
     - `source_text`
     - `your_angle`
     - `tone_notes`

   - **Outputs**:
     - DRAFT_REPLY

5. **Profile / Application Draft (`profile_application_v1`)**

   - **Inputs**:
     - `target_role_or_goal`
     - `your_background`
     - `audience`

   - **Outputs**:
     - POSITIONING
     - KEY_POINTS
     - FULL_DRAFT

6. **System / Architecture Blueprint (`system_blueprint_v1`)**

   - **Inputs**:
     - `objective`
     - `current_state`
     - `constraints`

   - **Outputs**:
     - OBJECTIVE
     - CURRENT_STATE
     - COMPONENTS
     - FLOWS
     - RISKS
     - IMPROVEMENTS
     - NEXT_STEPS

7. **Modular Brainstorm (`modular_brainstorm_v1`)**

   - **Inputs**:
     - `problem_or_domain`
     - `goal`

   - **Outputs**:
     - OBJECTIVE
     - CURRENT
     - MODULES
     - CHECKERS
     - DATA_NEEDS
     - NEXT_STEPS
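The catalog above collapses naturally into a small registry, which also makes the first, purely structural check trivial: did the draft contain every required section? The registry shape and the `validate_sections` helper below are illustrative sketches, not code shipped in this repo (two modules shown; the rest follow the same shape).

```python
# Module name -> contract (inputs and required output sections).
MODULES = {
    "analysis_note_v1": {
        "inputs": ["context", "questions", "constraints"],
        "sections": ["CONTEXT", "QUESTIONS", "FRAMEWORK",
                     "ANALYSIS", "CONCLUSION", "NEXT_STEPS"],
    },
    "message_reply_v1": {
        "inputs": ["source_text", "your_angle", "tone_notes"],
        "sections": ["DRAFT_REPLY"],
    },
    # ...the remaining modules follow the same shape.
}

def validate_sections(module_name, parsed):
    """Structural check: every required section present and non-empty."""
    required = MODULES[module_name]["sections"]
    missing = [s for s in required if not parsed.get(s, "").strip()]
    return {"verdict": "PASS" if not missing else "FAIL", "missing": missing}
```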

---

### Checker modules

Checkers are “reviewers” that inspect generated outputs.

Examples:

1. **Analysis Note Checker (`analysis_note_checker_v1`)**

   - **Inputs**:
     - `original_task`
     - `draft_output`

   - **Outputs**:
     - VERDICT
     - STRUCTURE
     - CLARITY
     - ALIGNMENT
     - GAPS
     - FIXES

2. **Document Explainer Checker (`document_explainer_checker_v1`)**

   - VERDICT
   - ACCURACY
   - STRUCTURE
   - AUDIENCE_FIT
   - MISSING
   - FIXES

3. **Strategy Memo Checker (`strategy_memo_checker_v1`)**

   - VERDICT
   - OBJECTIVE_ALIGNMENT
   - CONSTRAINT_HANDLING
   - OPTION_QUALITY
   - RISKS
   - FIXES

4. **Style & Voice Checker (`style_voice_checker_v1`)**

   - VERDICT
   - STYLE_MATCH
   - TONE
   - REDUNDANCY
   - SUGGESTIONS

5. **Profile Checker (`profile_checker_v1`)**

   - VERDICT
   - ALIGNMENT
   - SIGNAL
   - CLARITY
   - FIXES

6. **System Checker (`system_blueprint_checker_v1`)**

   - VERDICT
   - COHERENCE
   - GAPS
   - FLOW_ISSUES
   - RISKS
   - FIXES
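Before any of these checkers can judge content, the raw completion has to be split back into its labelled sections. A minimal parser (an illustrative sketch, not code shipped in this repo) can key off the `SECTION:` labels:

```python
def parse_sections(raw, expected):
    """Split raw model output into {SECTION: body} using the labels."""
    result, current = {}, None
    for line in raw.splitlines():
        stripped = line.strip()
        label = stripped.rstrip(":")
        if stripped.endswith(":") and label in expected:
            current = label          # a new labelled section starts here
            result[current] = []
        elif current is not None:
            result[current].append(line)  # body line of the current section
    return {k: "\n".join(v).strip() for k, v in result.items()}
```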

---

## How to use this model (simple)

You can treat this model like any GPT-2 text generator, **but** if you want Modular Intelligence behaviour:

1. Pick a module (e.g. `strategy_memo_v1`).
2. Build a prompt that:
   - States the module name
   - Lists the inputs clearly
   - Lists the required output sections
3. Ask the model to **fill in each section in order**.
4. Optionally call the corresponding checker with:
   - The original task
   - The draft output

A reference implementation and UI are provided in the accompanying Hugging Face Space (if linked), but the pattern can be re-implemented in any environment.
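The steps above can be sketched end to end. The prompt builder is a hypothetical helper (this repo does not ship it); the generation call uses the standard `transformers` text-generation pipeline and is kept inside a function so that importing the sketch does not download the checkpoint.

```python
def strategy_memo_prompt(context, objective, constraints):
    """Steps 1-2: pick the module and build its contract prompt."""
    sections = ["CONTEXT", "OBJECTIVE", "CONSTRAINTS", "OPTIONS",
                "RECOMMENDATION", "RISKS", "NEXT_ACTIONS"]
    lines = [
        "MODULE: strategy_memo_v1",
        f"- context: {context}",
        f"- objective: {objective}",
        f"- constraints: {constraints}",
        "Fill in each section below, in order:",
    ]
    lines += [f"{s}:" for s in sections]
    return "\n".join(lines)

def generate_memo(prompt, max_new_tokens=200):
    """Step 3: run the prompt through GPT-2 (downloads the checkpoint)."""
    from transformers import pipeline
    generator = pipeline("text-generation", model="openai-community/gpt2")
    return generator(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]
```

Step 4 is the same call shape again: feed the original task and the draft output into the paired checker module and ask for a verdict.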

---

## Limitations

GPT-2 is **small and outdated** by modern standards. It will:

- Hallucinate
- Get facts wrong
- Sometimes ignore the required structure
- Struggle with long contexts

The goal is to demonstrate the **architecture**, not to claim state-of-the-art performance.

Do **not** use this model for high-stakes decisions or for any application where mistakes could cause real harm.

---

## License and IP

- Code and configuration: **MIT License**.
- The **Modular Intelligence architecture, module definitions, and checker patterns** are a conceptual layer that can be reused and extended, but the name and approach may be treated as separate intellectual property by the author.

Always review the base model’s license (`openai-community/gpt2`) for any additional constraints.