moelanoby committed
Commit 8d7daea · verified · 1 parent: b00ef0f

Update README.md

Files changed (1): README.md (+2, -3)

README.md CHANGED
````diff
@@ -7,7 +7,7 @@ This repository contains an Attention-Linked Memory augmented Qwen model (ALM-Qw
 
 * **AttentionLinkedMemory (ALM)**: A custom PyTorch module for two-level attention-based retrieval from structured memory. (See `ALM.py`)
 * **QwenGenerator**: Wraps a Hugging Face Qwen model (e.g., Qwen2.5-0.5B-Instruct or Qwen2.5-7B-Instruct) for text generation.
-* **ALMQwenModel_HF**: The main class orchestrating the ALM retrieval and Qwen generation. (See `alm_qwen_hf.py`)
+* **ALMQwenModel_HF**: The main class orchestrating the ALM retrieval and Qwen generation. (See `alm_qwen.py`)
 * **Saved Weights & Config**:
   * `alm_layer_state_dict.pth`: Trained weights for the ALM layer.
   * `alm_qwen_hf_config.json`: Configuration for the `ALMQwenModel_HF`, including ALM parameters and paths to the Qwen components.
@@ -30,7 +30,7 @@ This repository contains an Attention-Linked Memory augmented Qwen model (ALM-Qw
 
 3. **Load the model in Python**:
    ```python
-   from alm_qwen_hf import ALMQwenModel_HF # Make sure alm_qwen_hf.py and ALM.py are in your PYTHONPATH
+   from alm_qwen import ALMQwenModel_HF # Make sure alm_qwen_hf.py and ALM.py are in your PYTHONPATH
    import torch
 
    # Desired device
@@ -79,4 +79,3 @@ The ALM layer (`alm_layer_state_dict.pth`) might have been trained. The Qwen mod
 * The `load_model` method in `alm_qwen_hf.py` handles the reconstruction of the composite model.
 
 ---
-*This README was auto-generated. Please update with more specific details about your model.*
```
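The second hunk changes the import target from `alm_qwen_hf` to `alm_qwen`. For code that must work against both the pre- and post-commit layout, a small fallback helper can try each module name in turn. This is purely an illustrative sketch, not part of the repository; the module names come from the diff above, and the `import_first` helper is an assumption.

```python
import importlib


def import_first(candidates, attr):
    """Return `attr` from the first importable module in `candidates`.

    Hypothetical compatibility shim: tries the post-commit module name
    before the pre-commit one, so callers survive the rename in this diff.
    """
    for name in candidates:
        try:
            module = importlib.import_module(name)
        except ImportError:
            continue  # module not on PYTHONPATH; try the next candidate
        return getattr(module, attr)
    raise ImportError(f"none of {candidates} provide {attr!r}")


# Usage with the names from the diff (requires the repo files on PYTHONPATH):
# ALMQwenModel_HF = import_first(("alm_qwen", "alm_qwen_hf"), "ALMQwenModel_HF")
```

Trying the new name first means the shim keeps working if the old `alm_qwen_hf.py` is eventually deleted.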