---
configs:
  - config_name: default
    data_files:
      - split: bills_bertopic
        path: model-runs/bills/bertopic/
      - split: bills_ctm
        path: model-runs/bills/ctm/
      - split: bills_mallet
        path: model-runs/bills/mallet/
      - split: wiki_bertopic
        path: model-runs/wiki/bertopic/
      - split: wiki_ctm
        path: model-runs/wiki/ctm/
      - split: wiki_mallet
        path: model-runs/wiki/mallet/
datasets:
  - lcalvobartolome/proxann_topic_models
language:
  - en
license:
  - mit
pretty_name: ProxAnn Topic Models
size_categories:
  - n<1K
tags:
  - topic-modeling
  - bertopic
  - ctm
  - lda
  - mallet
  - proxann
  - english
  - models
---

# ProxAnn Topic Models

ProxAnn Topic Models provides the trained topic models used in *ProxAnn: Use-Oriented Evaluations of Topic Models and Document Clustering* (Hoyle et al., ACL 2025).

This collection includes 50-topic models for both the Bills (Adler & Wilkerson, 2008) and Wiki (Merity et al., 2017) corpora.
All source datasets are available at lcalvobartolome/proxann_data.


## Overview

| Split | Path | Description |
| --- | --- | --- |
| `bills_bertopic` | `model-runs/bills/bertopic/` | 50-topic BERTopic model trained on Bills using `proxann.topic_models.train.BERTopicTrainer` (MiniLM-L6-v2 embeddings). |
| `bills_ctm` | `model-runs/bills/ctm/` | 50-topic Contextualized Topic Model (CTM) trained on Bills, following Hoyle et al., 2022. |
| `bills_mallet` | `model-runs/bills/mallet/` | 50-topic LDA–MALLET model trained on Bills, from Hoyle et al., 2022. |
| `wiki_bertopic` | `model-runs/wiki/bertopic/` | 50-topic BERTopic model trained on Wiki using `proxann.topic_models.train.BERTopicTrainer`. |
| `wiki_ctm` | `model-runs/wiki/ctm/` | 50-topic CTM model trained on Wiki, from Hoyle et al., 2022. |
| `wiki_mallet` | `model-runs/wiki/mallet/` | 50-topic LDA–MALLET model trained on Wiki, from Hoyle et al., 2022. |
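Because ProxAnn evaluates topic models as document clusterings, a natural first step with any of these runs is to assign each document to its most probable topic from the doc–topic matrix. The sketch below does this on a synthetic matrix of the same shape; the commented load calls and paths are illustrative assumptions based on the artifact names in the Artifacts Summary, not a tested API.

```python
import numpy as np

# Synthetic stand-in for a real doc-topic matrix. With the actual artifacts
# you would instead load, e.g. (paths illustrative):
#   thetas = scipy.sparse.load_npz("model-runs/bills/bertopic/thetas.npz").toarray()
#   thetas = np.load("model-runs/bills/ctm/train.theta.npy")
rng = np.random.default_rng(0)
n_docs, n_topics = 1_000, 50
thetas = rng.dirichlet(np.ones(n_topics), size=n_docs)  # rows sum to 1

# Hard clustering: assign each document to its most probable topic
assignments = thetas.argmax(axis=1)

# Cluster sizes, one count per topic
sizes = np.bincount(assignments, minlength=n_topics)
print(assignments[:10])
```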

## Repository Layout

```
model-runs/
├── bills/
│   ├── bertopic/
│   ├── ctm/
│   └── mallet/
└── wiki/
    ├── bertopic/
    ├── ctm/
    └── mallet/
```

Each folder contains model-specific artifacts (see below).


## Artifacts Summary

| Model Type | Core Files | Notes |
| --- | --- | --- |
| BERTopic | `betas.npy`, `thetas.npz` | Topic–word and doc–topic matrices; trained with `proxann.topic_models.train.BERTopicTrainer`. Optional: `config.yaml`, `vocab.txt`, `document_topic_info.csv`. |
| CTM | `beta.npy`, `train.theta.npy` | Topic–word and doc–topic matrices from Hoyle et al., 2022. Optional: `config.yml`, `test.theta.npy`, `topics.txt`. |
| LDA–MALLET | `beta.npy`, `doctopics.npz.npy` | Topic–word and doc–topic matrices from Hoyle et al., 2022. Optional: `state.mallet.gz`, `topickeys.txt`, `inferencer.mallet`. |
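A common way to inspect any of these models is to pull the top words per topic out of the topic–word matrix. The sketch below does this on a synthetic matrix of the same shape; the commented load call, the path, and the placeholder vocabulary are assumptions for illustration (a real vocabulary would come from `vocab.txt` where provided).

```python
import numpy as np

# Synthetic stand-in for a topic-word matrix of the shape described above.
# With the real artifacts you would instead load, e.g. (path illustrative):
#   betas = np.load("model-runs/bills/bertopic/betas.npy")
rng = np.random.default_rng(0)
n_topics, vocab_size = 50, 500
betas = rng.random((n_topics, vocab_size))
betas /= betas.sum(axis=1, keepdims=True)  # normalize rows to distributions

# Placeholder vocabulary; a real one would come from vocab.txt
vocab = np.array([f"word_{i}" for i in range(vocab_size)])

# Indices of the top-10 words per topic, highest probability first
top_k = 10
top_idx = np.argsort(betas, axis=1)[:, ::-1][:, :top_k]
top_words = {t: vocab[top_idx[t]].tolist() for t in range(n_topics)}
print(top_words[0])
```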

## Notes

- **Embeddings:** BERTopic models were trained with `sentence-transformers/all-MiniLM-L6-v2`; CTM models with `sentence-transformers/multi-qa-mpnet-base-dot-v1`.
- **Topics:** All models use 50 topics.
- **CTM & MALLET:** Adapted directly from the experimental setup of Hoyle et al., 2022.
- **Data:** Tokens and embeddings come from ProxAnn Data.

## Related Resources

- Source datasets, tokens, and embeddings: `lcalvobartolome/proxann_data`


## License & Attribution

Released under the MIT License. Text content derives from Wikipedia (Merity et al., 2017) and the Congressional Bills Project (Adler & Wilkerson, 2008). Please provide attribution when reusing these materials.


## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{hoyle-etal-2025-proxann,
    title = "{P}rox{A}nn: Use-Oriented Evaluations of Topic Models and Document Clustering",
    author = "Hoyle, Alexander Miserlis  and
      Calvo-Bartolom{\'e}, Lorena  and
      Boyd-Graber, Jordan Lee  and
      Resnik, Philip",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.772/",
    doi = "10.18653/v1/2025.acl-long.772",
    pages = "15872--15897",
    ISBN = "979-8-89176-251-0",
    abstract = "Topic models and document-clustering evaluations either use automated metrics that align poorly with human preferences, or require expert labels that are intractable to scale. We design a scalable human evaluation protocol and a corresponding automated approximation that reflect practitioners' real-world usage of models. Annotators{---}or an LLM-based proxy{---}review text items assigned to a topic or cluster, infer a category for the group, then apply that category to other documents. Using this protocol, we collect extensive crowdworker annotations of outputs from a diverse set of topic models on two datasets. We then use these annotations to validate automated proxies, finding that the best LLM proxy is statistically indistinguishable from a human annotator and can therefore serve as a reasonable substitute in automated evaluations."
}
```