Spaces:
Sleeping
Add population-scale opinion dynamics with homophily-based network assignment
Browse filesImplements Phase 3 enhancement for large-scale simulations (10-200+ nodes).
New Features:
=============
🌐 Population Network Module (src/influence/population_network.py):
- PopulationNetwork class for large-scale opinion dynamics
- Integrates Phase 2 variant generator with Phase 3 networks
- Homophily-based persona assignment to network nodes
- Supports 10-200+ node populations
Homophily Algorithm:
- Parameter range: 0.0 (random) to 1.0 (maximum clustering)
- Uses BFS traversal to assign personas based on similarity
- High homophily: similar personas become neighbors (echo chambers)
- Low homophily: diverse mixing across network
- Calculates similarity based on:
* Shared values (40%)
* Political alignment (30%)
* Age similarity (15%)
* Education similarity (15%)
Enhanced Influence Network (src/influence/network.py):
- Added homophily parameter to InfluenceNetwork class
- New method: calculate_persona_similarity() for homophily assignment
- New method: get_persona_base_type() to extract base persona from variants
- Supports persona variant tracking (e.g., "sarah_chen_v0", "sarah_chen_v1")
Population Generation:
- Distributes population_size across 6 base personas
- Uses Phase 2 VariantGenerator with configurable variation levels
- Creates realistic demographic variance within persona types
- Maintains core persona characteristics while adding diversity
Network Topology Options:
- Scale-Free (Barabási-Albert): Power-law distribution with hubs
- Small-World (Watts-Strogatz): Clustered communities with shortcuts
- Fully Connected: Complete graph (baseline)
Use Cases:
==========
1. Model echo chambers vs diverse neighborhoods
2. Study how homophily affects consensus formation
3. Compare network structure effects at population scale
4. Analyze opinion leader emergence in large networks
Example:
--------
# Create 100-node scale-free network with high homophily
pop_network = PopulationNetwork(
base_personas=base_personas,
population_size=100,
network_type="scale_free",
homophily=0.8, # High clustering
variation_level=VariationLevel.MODERATE
)
# Network stats
stats = pop_network.get_network_stats()
# Returns: nodes, edges, avg_degree, density, base_persona_distribution
Technical Details:
==================
- BFS-based assignment ensures connected clusters
- Similarity calculations reuse influence weight logic
- Node-to-persona mapping tracked for visualization
- Compatible with existing opinion dynamics engine
- Supports networkx graph operations
Next Steps:
- Update Phase 3 UI to add population mode
- Implement dual-color node visualization (persona + opinion cluster)
- Add population size and homophily sliders
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- src/influence/__init__.py +2 -0
- src/influence/network.py +46 -0
- src/influence/population_network.py +250 -0
|
@@ -10,6 +10,7 @@ from .models import (
|
|
| 10 |
from .network import InfluenceNetwork
|
| 11 |
from .dynamics import OpinionDynamicsEngine
|
| 12 |
from .equilibrium import EquilibriumDetector
|
|
|
|
| 13 |
|
| 14 |
__all__ = [
|
| 15 |
"OpinionPosition",
|
|
@@ -20,4 +21,5 @@ __all__ = [
|
|
| 20 |
"InfluenceNetwork",
|
| 21 |
"OpinionDynamicsEngine",
|
| 22 |
"EquilibriumDetector",
|
|
|
|
| 23 |
]
|
|
|
|
| 10 |
from .network import InfluenceNetwork
|
| 11 |
from .dynamics import OpinionDynamicsEngine
|
| 12 |
from .equilibrium import EquilibriumDetector
|
| 13 |
+
from .population_network import PopulationNetwork
|
| 14 |
|
| 15 |
__all__ = [
|
| 16 |
"OpinionPosition",
|
|
|
|
| 21 |
"InfluenceNetwork",
|
| 22 |
"OpinionDynamicsEngine",
|
| 23 |
"EquilibriumDetector",
|
| 24 |
+
"PopulationNetwork",
|
| 25 |
]
|
|
@@ -33,6 +33,7 @@ class InfluenceNetwork:
|
|
| 33 |
personas: List[Persona],
|
| 34 |
network_type: NetworkType = "scale_free",
|
| 35 |
random_seed: int = None,
|
|
|
|
| 36 |
):
|
| 37 |
"""
|
| 38 |
Initialize influence network.
|
|
@@ -41,10 +42,13 @@ class InfluenceNetwork:
|
|
| 41 |
personas: List of personas to include
|
| 42 |
network_type: Network topology ("fully_connected", "scale_free", "small_world")
|
| 43 |
random_seed: Random seed for reproducibility
|
|
|
|
| 44 |
"""
|
| 45 |
self.personas = {p.persona_id: p for p in personas}
|
| 46 |
self.network_type = network_type
|
|
|
|
| 47 |
self.influence_matrix: Dict[Tuple[str, str], InfluenceWeight] = {}
|
|
|
|
| 48 |
|
| 49 |
if random_seed is not None:
|
| 50 |
random.seed(random_seed)
|
|
@@ -360,3 +364,45 @@ class InfluenceNetwork:
|
|
| 360 |
"factors": weight.factors,
|
| 361 |
})
|
| 362 |
return edges
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 33 |
personas: List[Persona],
|
| 34 |
network_type: NetworkType = "scale_free",
|
| 35 |
random_seed: int = None,
|
| 36 |
+
homophily: float = 0.0,
|
| 37 |
):
|
| 38 |
"""
|
| 39 |
Initialize influence network.
|
|
|
|
| 42 |
personas: List of personas to include
|
| 43 |
network_type: Network topology ("fully_connected", "scale_free", "small_world")
|
| 44 |
random_seed: Random seed for reproducibility
|
| 45 |
+
homophily: Homophily parameter (0-1). Higher = similar personas cluster together
|
| 46 |
"""
|
| 47 |
self.personas = {p.persona_id: p for p in personas}
|
| 48 |
self.network_type = network_type
|
| 49 |
+
self.homophily = homophily
|
| 50 |
self.influence_matrix: Dict[Tuple[str, str], InfluenceWeight] = {}
|
| 51 |
+
self.persona_assignment: Dict[int, str] = {} # node_id -> persona_id mapping
|
| 52 |
|
| 53 |
if random_seed is not None:
|
| 54 |
random.seed(random_seed)
|
|
|
|
| 364 |
"factors": weight.factors,
|
| 365 |
})
|
| 366 |
return edges
|
| 367 |
+
|
| 368 |
+
def calculate_persona_similarity(self, p1: Persona, p2: Persona) -> float:
|
| 369 |
+
"""
|
| 370 |
+
Calculate overall similarity between two personas (0-1).
|
| 371 |
+
|
| 372 |
+
Used for homophily-based network assignment.
|
| 373 |
+
Higher values = more similar personas.
|
| 374 |
+
"""
|
| 375 |
+
# Reuse existing similarity calculations
|
| 376 |
+
shared_values = self._calculate_shared_values(p1, p2)
|
| 377 |
+
political = self._calculate_political_alignment(p1, p2)
|
| 378 |
+
|
| 379 |
+
# Add demographic similarity
|
| 380 |
+
age_diff = abs(p1.demographics.age - p2.demographics.age) / 100.0
|
| 381 |
+
age_similarity = 1.0 - min(age_diff, 1.0)
|
| 382 |
+
|
| 383 |
+
# Education similarity (same level = 1.0, different = 0.5)
|
| 384 |
+
edu_similarity = 1.0 if p1.demographics.education == p2.demographics.education else 0.5
|
| 385 |
+
|
| 386 |
+
# Weighted combination
|
| 387 |
+
similarity = (
|
| 388 |
+
shared_values * 0.4 +
|
| 389 |
+
political * 0.3 +
|
| 390 |
+
age_similarity * 0.15 +
|
| 391 |
+
edu_similarity * 0.15
|
| 392 |
+
)
|
| 393 |
+
|
| 394 |
+
return similarity
|
| 395 |
+
|
| 396 |
+
@staticmethod
|
| 397 |
+
def get_persona_base_type(persona: Persona) -> str:
|
| 398 |
+
"""
|
| 399 |
+
Extract base persona type from persona_id.
|
| 400 |
+
|
| 401 |
+
For variants, returns the base persona name.
|
| 402 |
+
E.g., "sarah_chen_v0" -> "sarah_chen"
|
| 403 |
+
"""
|
| 404 |
+
persona_id = persona.persona_id
|
| 405 |
+
# Remove variant suffix if present
|
| 406 |
+
if "_v" in persona_id:
|
| 407 |
+
return persona_id.rsplit("_v", 1)[0]
|
| 408 |
+
return persona_id
|
|
@@ -0,0 +1,250 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""Population-scale opinion dynamics with homophily-based network assignment"""
|
| 2 |
+
|
| 3 |
+
from typing import List, Dict, Tuple
|
| 4 |
+
import random
|
| 5 |
+
import networkx as nx
|
| 6 |
+
|
| 7 |
+
from ..personas.models import Persona
|
| 8 |
+
from ..population.variant_generator import VariantGenerator, VariationLevel
|
| 9 |
+
from .network import InfluenceNetwork, NetworkType
|
| 10 |
+
|
| 11 |
+
|
| 12 |
+
class PopulationNetwork:
|
| 13 |
+
"""
|
| 14 |
+
Creates population-scale networks with homophily-based persona assignment.
|
| 15 |
+
|
| 16 |
+
Combines Phase 2 (population variants) with Phase 3 (opinion networks).
|
| 17 |
+
"""
|
| 18 |
+
|
| 19 |
+
def __init__(
|
| 20 |
+
self,
|
| 21 |
+
base_personas: List[Persona],
|
| 22 |
+
population_size: int,
|
| 23 |
+
network_type: NetworkType = "scale_free",
|
| 24 |
+
homophily: float = 0.5,
|
| 25 |
+
variation_level: VariationLevel = VariationLevel.MODERATE,
|
| 26 |
+
random_seed: int = None,
|
| 27 |
+
):
|
| 28 |
+
"""
|
| 29 |
+
Initialize population network.
|
| 30 |
+
|
| 31 |
+
Args:
|
| 32 |
+
base_personas: List of base personas to create variants from
|
| 33 |
+
population_size: Total number of nodes in network
|
| 34 |
+
network_type: Network topology
|
| 35 |
+
homophily: Homophily parameter (0-1, higher = more clustering)
|
| 36 |
+
variation_level: How much to vary persona characteristics
|
| 37 |
+
random_seed: Random seed for reproducibility
|
| 38 |
+
"""
|
| 39 |
+
self.base_personas = base_personas
|
| 40 |
+
self.population_size = population_size
|
| 41 |
+
self.network_type = network_type
|
| 42 |
+
self.homophily = homophily
|
| 43 |
+
self.variation_level = variation_level
|
| 44 |
+
|
| 45 |
+
if random_seed is not None:
|
| 46 |
+
random.seed(random_seed)
|
| 47 |
+
|
| 48 |
+
# Generate population variants
|
| 49 |
+
self.variants = self._generate_population_variants()
|
| 50 |
+
|
| 51 |
+
# Create network topology
|
| 52 |
+
self.network_graph = self._create_network_topology()
|
| 53 |
+
|
| 54 |
+
# Assign personas to nodes with homophily
|
| 55 |
+
self.node_to_persona = self._assign_personas_with_homophily()
|
| 56 |
+
|
| 57 |
+
# Build influence network
|
| 58 |
+
self.influence_network = InfluenceNetwork(
|
| 59 |
+
personas=self.variants,
|
| 60 |
+
network_type=network_type,
|
| 61 |
+
homophily=homophily,
|
| 62 |
+
)
|
| 63 |
+
|
| 64 |
+
def _generate_population_variants(self) -> List[Persona]:
|
| 65 |
+
"""Generate population variants from base personas"""
|
| 66 |
+
variants = []
|
| 67 |
+
|
| 68 |
+
# Distribute population across base personas
|
| 69 |
+
variants_per_base = self.population_size // len(self.base_personas)
|
| 70 |
+
remainder = self.population_size % len(self.base_personas)
|
| 71 |
+
|
| 72 |
+
for i, base_persona in enumerate(self.base_personas):
|
| 73 |
+
# Generate variants for this base persona
|
| 74 |
+
count = variants_per_base + (1 if i < remainder else 0)
|
| 75 |
+
|
| 76 |
+
generator = VariantGenerator(base_persona, self.variation_level)
|
| 77 |
+
persona_variants = [
|
| 78 |
+
generator.generate_variant(f"_v{len(variants) + j}")
|
| 79 |
+
for j in range(count)
|
| 80 |
+
]
|
| 81 |
+
variants.extend(persona_variants)
|
| 82 |
+
|
| 83 |
+
return variants
|
| 84 |
+
|
| 85 |
+
def _create_network_topology(self) -> nx.Graph:
|
| 86 |
+
"""Create network topology graph"""
|
| 87 |
+
n = self.population_size
|
| 88 |
+
|
| 89 |
+
if self.network_type == "fully_connected":
|
| 90 |
+
return nx.complete_graph(n)
|
| 91 |
+
|
| 92 |
+
elif self.network_type == "scale_free":
|
| 93 |
+
# Barabási-Albert
|
| 94 |
+
m = max(2, min(5, n // 10)) # Edges to attach per new node
|
| 95 |
+
return nx.barabasi_albert_graph(n, m)
|
| 96 |
+
|
| 97 |
+
elif self.network_type == "small_world":
|
| 98 |
+
# Watts-Strogatz
|
| 99 |
+
k = max(4, min(10, n // 5)) # Nearest neighbors
|
| 100 |
+
if k % 2 != 0:
|
| 101 |
+
k -= 1
|
| 102 |
+
p = 0.1 # Rewiring probability
|
| 103 |
+
return nx.watts_strogatz_graph(n, k, p)
|
| 104 |
+
|
| 105 |
+
else:
|
| 106 |
+
raise ValueError(f"Unknown network type: {self.network_type}")
|
| 107 |
+
|
| 108 |
+
def _assign_personas_with_homophily(self) -> Dict[int, str]:
|
| 109 |
+
"""
|
| 110 |
+
Assign persona variants to network nodes using homophily.
|
| 111 |
+
|
| 112 |
+
Higher homophily = similar personas become neighbors.
|
| 113 |
+
"""
|
| 114 |
+
node_to_persona = {}
|
| 115 |
+
|
| 116 |
+
if self.homophily <= 0.1:
|
| 117 |
+
# Random assignment (low homophily)
|
| 118 |
+
shuffled_variants = random.sample(self.variants, len(self.variants))
|
| 119 |
+
for node_id in self.network_graph.nodes():
|
| 120 |
+
node_to_persona[node_id] = shuffled_variants[node_id].persona_id
|
| 121 |
+
return node_to_persona
|
| 122 |
+
|
| 123 |
+
# High homophily: use similarity-based assignment
|
| 124 |
+
# Start with one random node
|
| 125 |
+
assigned_nodes = set()
|
| 126 |
+
unassigned_personas = {v.persona_id: v for v in self.variants}
|
| 127 |
+
|
| 128 |
+
# Pick random starting node and persona
|
| 129 |
+
start_node = random.choice(list(self.network_graph.nodes()))
|
| 130 |
+
start_persona = random.choice(list(unassigned_personas.values()))
|
| 131 |
+
node_to_persona[start_node] = start_persona.persona_id
|
| 132 |
+
assigned_nodes.add(start_node)
|
| 133 |
+
del unassigned_personas[start_persona.persona_id]
|
| 134 |
+
|
| 135 |
+
# Assign remaining nodes using BFS with similarity
|
| 136 |
+
while assigned_nodes and unassigned_personas:
|
| 137 |
+
# Pick a random assigned node
|
| 138 |
+
current_node = random.choice(list(assigned_nodes))
|
| 139 |
+
current_persona_id = node_to_persona[current_node]
|
| 140 |
+
current_persona = next(
|
| 141 |
+
v for v in self.variants if v.persona_id == current_persona_id
|
| 142 |
+
)
|
| 143 |
+
|
| 144 |
+
# Find unassigned neighbors
|
| 145 |
+
neighbors = [
|
| 146 |
+
n for n in self.network_graph.neighbors(current_node)
|
| 147 |
+
if n not in assigned_nodes
|
| 148 |
+
]
|
| 149 |
+
|
| 150 |
+
if not neighbors:
|
| 151 |
+
assigned_nodes.remove(current_node)
|
| 152 |
+
continue
|
| 153 |
+
|
| 154 |
+
# Pick a random unassigned neighbor
|
| 155 |
+
neighbor = random.choice(neighbors)
|
| 156 |
+
|
| 157 |
+
# Assign persona based on homophily
|
| 158 |
+
if random.random() < self.homophily:
|
| 159 |
+
# High homophily: pick most similar persona
|
| 160 |
+
best_persona = self._find_most_similar_persona(
|
| 161 |
+
current_persona, list(unassigned_personas.values())
|
| 162 |
+
)
|
| 163 |
+
else:
|
| 164 |
+
# Random choice (reduces homophily effect)
|
| 165 |
+
best_persona = random.choice(list(unassigned_personas.values()))
|
| 166 |
+
|
| 167 |
+
node_to_persona[neighbor] = best_persona.persona_id
|
| 168 |
+
assigned_nodes.add(neighbor)
|
| 169 |
+
del unassigned_personas[best_persona.persona_id]
|
| 170 |
+
|
| 171 |
+
# Assign any remaining unassigned nodes (shouldn't happen, but safety)
|
| 172 |
+
remaining_personas = list(unassigned_personas.values())
|
| 173 |
+
for node_id in self.network_graph.nodes():
|
| 174 |
+
if node_id not in node_to_persona and remaining_personas:
|
| 175 |
+
persona = remaining_personas.pop()
|
| 176 |
+
node_to_persona[node_id] = persona.persona_id
|
| 177 |
+
|
| 178 |
+
return node_to_persona
|
| 179 |
+
|
| 180 |
+
def _find_most_similar_persona(
|
| 181 |
+
self, reference: Persona, candidates: List[Persona]
|
| 182 |
+
) -> Persona:
|
| 183 |
+
"""Find the most similar persona from candidates"""
|
| 184 |
+
if not candidates:
|
| 185 |
+
return None
|
| 186 |
+
|
| 187 |
+
similarities = [
|
| 188 |
+
(
|
| 189 |
+
p,
|
| 190 |
+
self.influence_network.calculate_persona_similarity(reference, p)
|
| 191 |
+
if hasattr(self, 'influence_network')
|
| 192 |
+
else self._quick_similarity(reference, p)
|
| 193 |
+
)
|
| 194 |
+
for p in candidates
|
| 195 |
+
]
|
| 196 |
+
|
| 197 |
+
return max(similarities, key=lambda x: x[1])[0]
|
| 198 |
+
|
| 199 |
+
def _quick_similarity(self, p1: Persona, p2: Persona) -> float:
|
| 200 |
+
"""Quick similarity calculation (when influence network not yet built)"""
|
| 201 |
+
# Political alignment
|
| 202 |
+
scale = {
|
| 203 |
+
"very_progressive": -2,
|
| 204 |
+
"progressive": -1,
|
| 205 |
+
"moderate": 0,
|
| 206 |
+
"independent": 0,
|
| 207 |
+
"conservative": 1,
|
| 208 |
+
"very_conservative": 2,
|
| 209 |
+
}
|
| 210 |
+
pos1 = scale.get(p1.psychographics.political_leaning, 0)
|
| 211 |
+
pos2 = scale.get(p2.psychographics.political_leaning, 0)
|
| 212 |
+
political_sim = 1.0 - (abs(pos1 - pos2) / 4.0)
|
| 213 |
+
|
| 214 |
+
# Age similarity
|
| 215 |
+
age_sim = 1.0 - min(abs(p1.demographics.age - p2.demographics.age) / 100.0, 1.0)
|
| 216 |
+
|
| 217 |
+
return (political_sim * 0.6 + age_sim * 0.4)
|
| 218 |
+
|
| 219 |
+
def get_persona_for_node(self, node_id: int) -> Persona:
|
| 220 |
+
"""Get the persona assigned to a specific node"""
|
| 221 |
+
persona_id = self.node_to_persona[node_id]
|
| 222 |
+
return next(v for v in self.variants if v.persona_id == persona_id)
|
| 223 |
+
|
| 224 |
+
def get_base_type_for_node(self, node_id: int) -> str:
|
| 225 |
+
"""Get the base persona type for a node"""
|
| 226 |
+
persona = self.get_persona_for_node(node_id)
|
| 227 |
+
return InfluenceNetwork.get_persona_base_type(persona)
|
| 228 |
+
|
| 229 |
+
def get_neighbors(self, node_id: int) -> List[int]:
|
| 230 |
+
"""Get neighboring nodes"""
|
| 231 |
+
return list(self.network_graph.neighbors(node_id))
|
| 232 |
+
|
| 233 |
+
def get_network_stats(self) -> Dict[str, any]:
|
| 234 |
+
"""Get network statistics"""
|
| 235 |
+
return {
|
| 236 |
+
"nodes": self.network_graph.number_of_nodes(),
|
| 237 |
+
"edges": self.network_graph.number_of_edges(),
|
| 238 |
+
"avg_degree": sum(dict(self.network_graph.degree()).values()) / self.population_size,
|
| 239 |
+
"density": nx.density(self.network_graph),
|
| 240 |
+
"homophily": self.homophily,
|
| 241 |
+
"base_persona_distribution": self._calculate_base_distribution(),
|
| 242 |
+
}
|
| 243 |
+
|
| 244 |
+
def _calculate_base_distribution(self) -> Dict[str, int]:
|
| 245 |
+
"""Calculate distribution of base persona types"""
|
| 246 |
+
distribution = {}
|
| 247 |
+
for node_id in self.network_graph.nodes():
|
| 248 |
+
base_type = self.get_base_type_for_node(node_id)
|
| 249 |
+
distribution[base_type] = distribution.get(base_type, 0) + 1
|
| 250 |
+
return distribution
|