# Iterative PageRank Model

A non-autoregressive iterative transformer that learns PageRank via shared-weight refinement.

## Architecture
- Params: 796,032
- Layers: 4 (shared across 8 iterations)
- Width: 128, Heads: 4
- Graph size: 64 nodes (directed)
- Damping: 0.85
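The shared-weight refinement idea above can be sketched as a single 4-layer trunk that is applied repeatedly, emitting a per-node distribution at every iteration. This is a minimal illustration, not the repository's actual implementation: the class and layer names here are hypothetical, and the exact parameter count will differ from the 796,032 listed.

```python
import torch
import torch.nn as nn

class SharedIterativeRefiner(nn.Module):
    """Sketch of shared-weight iterative refinement (hypothetical names;
    the real IterativePageRankModel may be structured differently)."""

    def __init__(self, n_nodes=64, width=128, heads=4, layers=4):
        super().__init__()
        # Embed each node's adjacency row as its input token.
        self.embed = nn.Linear(n_nodes, width)
        block = nn.TransformerEncoderLayer(
            d_model=width, nhead=heads, dim_feedforward=4 * width, batch_first=True
        )
        # One 4-layer stack whose weights are reused on every iteration.
        self.trunk = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(width, 1)  # per-node PageRank logit

    def forward(self, adj, n_iters=8):
        h = self.embed(adj)  # (batch, nodes, width)
        preds = []
        for _ in range(n_iters):  # same trunk applied repeatedly
            h = self.trunk(h)
            preds.append(self.head(h).squeeze(-1).softmax(-1))
        return preds  # one predicted distribution per iteration

model = SharedIterativeRefiner()
adj = (torch.rand(2, 64, 64) < 0.1).float()
out = model(adj, n_iters=8)
```

Because the trunk is shared, extra test-time iterations cost no extra parameters, which is why the usage example below can run more iterations than were used in training.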
## Task
Given a directed graph's adjacency matrix, predict each node's PageRank value. The model learns power iteration implicitly through iterative refinement.
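For reference, the power iteration the model learns to approximate can be written in a few lines. This is a generic sketch of standard PageRank with the card's damping factor of 0.85, not the repository's target-generation code; the function name is an assumption.

```python
import torch

def pagerank(adj, damping=0.85, n_iters=100):
    """Power iteration for PageRank; adj[i, j] = 1 means an edge i -> j."""
    n = adj.shape[0]
    out_deg = adj.sum(dim=1, keepdim=True)
    # Row-stochastic transition matrix; dangling nodes spread mass uniformly.
    trans = torch.where(
        out_deg > 0, adj / out_deg.clamp(min=1), torch.full_like(adj, 1.0 / n)
    )
    pr = torch.full((n,), 1.0 / n)  # start from the uniform distribution
    for _ in range(n_iters):
        pr = (1 - damping) / n + damping * (trans.T @ pr)
    return pr

adj = (torch.rand(64, 64) < 0.1).float()
pr = pagerank(adj)
```

Each refinement step of the model plays a role analogous to one such update, so the per-iteration outputs should converge toward the true distribution.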
## Metrics
- KL divergence: between predicted and true PR distribution
- Ranking accuracy: pairwise ordering correctness
- Top-k accuracy: overlap of predicted vs true top-k important nodes
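The three metrics can be computed as below. This is a plausible sketch, not the repository's evaluation code: the function names are assumptions, and the KL direction (true against predicted here) may differ from what the model card measures.

```python
import torch

def kl_divergence(pred, true, eps=1e-12):
    # KL(true || pred) between two PageRank distributions.
    return (true * ((true + eps) / (pred + eps)).log()).sum()

def ranking_accuracy(pred, true):
    # Fraction of node pairs ordered the same way by pred and true.
    dp = pred.unsqueeze(0) - pred.unsqueeze(1)
    dt = true.unsqueeze(0) - true.unsqueeze(1)
    pairs = torch.triu(torch.ones_like(dp, dtype=torch.bool), diagonal=1)
    return ((dp.sign() == dt.sign()) & pairs).sum() / pairs.sum()

def topk_accuracy(pred, true, k=5):
    # Overlap between the predicted and true top-k node sets.
    p = set(pred.topk(k).indices.tolist())
    t = set(true.topk(k).indices.tolist())
    return len(p & t) / k
```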
## Usage
```python
import torch

from iterative_pagerank import IterativePageRankModel, PageRankConfig, generate_batch

# Restore the trained model from a checkpoint.
ckpt = torch.load("model.pt", weights_only=True)
config = PageRankConfig(**ckpt["config"])
model = IterativePageRankModel(config)
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Sample one random directed graph with its ground-truth PageRank targets.
adj, targets, meta = generate_batch(
    1, config, edge_prob=0.1, graph_weights=(0.0, 0.0, 1.0), device="cpu"
)

with torch.no_grad():
    # Shared weights allow more refinement iterations at test time
    # (64 here) than the 8 used during training.
    all_prs = model(adj, n_iters=64)

print(f"Predicted: {all_prs[-1][0].topk(5)}")
print(f"True: {targets[0].topk(5)}")
```