---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- text-analysis
tags:
- knowledge-coupling
- llama2
- hotpotqa
- multi-hop-reasoning
- gradient-analysis
- ripple-effects
---

# Knowledge Coupling Analysis on HotpotQA Dataset

## Dataset Description

This dataset contains the results of a comprehensive knowledge coupling analysis performed on the HotpotQA dataset using the LLaMA 2-7B model. The analysis investigates how different pieces of knowledge interact within the model's parameter space through gradient-based coupling measurements.

## Research Overview

- **Model**: meta-llama/Llama-2-7b-hf (analysis focused on layers 28-31)
- **Dataset**: HotpotQA (train + dev splits, 97,852 total samples)
- **Method**: Gradient-based knowledge coupling via cosine similarity
- **Target Layers**: model.layers.28-31.mlp.down_proj (semantically rich layers)
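The coupling metric itself is simple: the cosine similarity between the L2-normalized gradient vectors of two knowledge pieces. A minimal sketch, using random vectors as stand-ins for the real per-piece gradients (the function name `coupling_score` is illustrative, not from the released code):

```python
import numpy as np

def coupling_score(grad_a: np.ndarray, grad_b: np.ndarray) -> float:
    """Cosine similarity between two L2-normalized gradient vectors."""
    a = grad_a / np.linalg.norm(grad_a)
    b = grad_b / np.linalg.norm(grad_b)
    return float(np.dot(a, b))

# Toy stand-ins for the per-piece gradients; the real vectors have
# 180,355,072 dimensions (the parameters of layers 28-31 down_proj).
rng = np.random.default_rng(0)
g1 = rng.standard_normal(1024)
g2 = rng.standard_normal(1024)

score = coupling_score(g1, g2)  # near 0 for unrelated random directions
```

Unrelated random directions in high dimensions are nearly orthogonal, which is consistent with the low mean coupling score reported below.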

## Key Findings

The analysis revealed:
- Mean coupling score of 0.0222 across all knowledge-piece pairs
- High-coupling pairs (score ≥ 0.4) flagged as candidates for ripple-effect prediction
- Layer-specific analysis concentrated on the MLP down-projection layers
- Each knowledge piece represented by a 180,355,072-dimensional gradient vector
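Selecting the high-coupling pairs from the released CSV is a one-line pandas filter. A sketch with hypothetical data; the actual column names in `all_coupling_pairs.csv` may differ:

```python
import io
import pandas as pd

# Hypothetical rows standing in for all_coupling_pairs.csv;
# the real file's column names may differ.
csv_text = """piece_a,piece_b,coupling_score
q1_fact1,q1_fact2,0.52
q2_fact1,q3_fact1,0.01
q4_fact1,q4_fact2,0.41
"""
pairs = pd.read_csv(io.StringIO(csv_text))

# Pairs at or above the 0.4 threshold are the ripple-effect candidates.
high = pairs[pairs["coupling_score"] >= 0.4]
print(len(high))  # 2
```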

## Files Description

### Core Results
- `global_analysis_results.json`: Comprehensive analysis summary with statistics
- `all_knowledge_pieces.json`: Complete set of processed knowledge pieces (92MB)
- `all_coupling_pairs.csv`: All pairwise coupling measurements (245MB)

### Supporting Files
- `dataset_info.json`: Dataset statistics and conversion details
- `coupling_analysis_config.json`: Analysis configuration and parameters

## Usage

Because the repository mixes large JSON and CSV files rather than a single split schema, downloading the files directly is the most reliable way to load them:

```python
import json

import pandas as pd
from huggingface_hub import hf_hub_download

REPO_ID = "your-username/hotpotqa-knowledge-coupling"

# Global analysis summary
path = hf_hub_download(REPO_ID, "global_analysis_results.json", repo_type="dataset")
with open(path) as f:
    global_results = json.load(f)

# Processed knowledge pieces (92 MB JSON)
path = hf_hub_download(REPO_ID, "all_knowledge_pieces.json", repo_type="dataset")
with open(path) as f:
    knowledge_pieces = json.load(f)

# All pairwise coupling measurements (245 MB CSV)
path = hf_hub_download(REPO_ID, "all_coupling_pairs.csv", repo_type="dataset")
coupling_pairs = pd.read_csv(path)
```

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{hotpotqa_knowledge_coupling,
  title={Knowledge Coupling Analysis on HotpotQA Dataset using LLaMA2-7B},
  author={[Your Name]},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/your-username/hotpotqa-knowledge-coupling}
}
```

## Technical Details

- **Gradient Computation**: ∇_θ log P(answer|question) for cloze-style questions
- **Coupling Measurement**: Cosine similarity between L2-normalized gradients
- **Memory Optimization**: Focused on layers 28-31 to handle GPU memory constraints
- **Hardware**: NVIDIA A40 GPU (46GB VRAM)
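The two steps above (gradient of the answer log-probability, then L2 normalization) can be sketched on a toy model. This is a minimal stand-in, not the released pipeline: the real analysis computes the same quantity on Llama-2-7b and keeps only the gradients of `model.layers.{28..31}.mlp.down_proj.weight`.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny stand-in "LM head" over an 8-token vocabulary; hypothetical shapes.
model = torch.nn.Linear(16, 8)
x = torch.randn(16)   # stand-in for the encoded question
answer_id = 3         # stand-in for the answer token id

# log P(answer | question) and its gradient w.r.t. the chosen parameters
log_p = F.log_softmax(model(x), dim=-1)[answer_id]
(grad,) = torch.autograd.grad(log_p, model.weight)

# Flatten and L2-normalize, as done before the cosine-similarity step
g = grad.flatten()
g = g / g.norm()
```

After normalization, the coupling score between two pieces is just the dot product of their `g` vectors.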

## License

This dataset is released under the MIT License. The underlying HotpotQA dataset remains subject to its own licensing terms.