Example rows (one per label):

| text | label |
|---|---|
| So, counsel, we've spoken a little bit about how Colorado has handled this compelled speech question differently with respect to different messages, some that it prefers, others that it dislikes. I'm curious how other states have dealt with this conundrum besides Colorado and how you -- which ones of those you think we should take account of. | Background |
| Is that -- do you understand that to be part of the stipulations or not? | Clarification |
| Last question. This might be what Justice Kagan was asking, but it might be something different. The -- if you have to intervene before you move to dismiss -- and so, if this is repetitive, I apologize -- the D.C. Circuit said that would be largely academic, that requirement, if you had to intervene before moving to dismiss. Do you agree with that? I mean, in other words, it doesn't matter one way or the other. | Communicate |
| Even though the site doesn't say anything about that? It doesn't say, wow, gay marriage is a wonderful thing. It *41 doesn't say -- it doesn't even say, you know, we're here to celebrate this wonderful marriage in my hypothetical. It doesn't even say that. | Criticism |
| I mean, it does seem a little bit like due process Lochnerism for corporations here, doesn't it? | Humor |
| Do they have to -- can you compel that speech? Do they have to publish it? | Implications |
| Ms. Hansford, I think everyone might be underselling Steele here. I mean, it's true what Justice Alito says about this first sentence sets up the question in an odd way. But the actual holding and heart of the opinion is on page 286, and that's where the Court says -- it says, okay, we deem the Lanham Act's scope to encompass Petitioners' activities here, and then it says why. Why do we deem it that way? His operations and their effects weren't confined within the territorial limits of a foreign nation. He brought component parts of his wares in the U.S. and Bulovas filtered through the Mexican border into this country. His competing goods reflected adversely on Bulova's trade reputation in markets cultivated here as well as abroad. So, in some ways, I mean, what Steele says here on page 286, it doesn't use the two-step terminology that we've developed, but this is basically the second step as we've understood it. | Support |
# OralArgumentQuestionPurposeLegalBenchClassification
This task classifies questions asked by Supreme Court justices at oral argument into seven categories:

1. Background - questions seeking factual or procedural information that is missing or not clear in the briefing
2. Clarification - questions seeking to get an advocate to clarify her position or the scope of the rule being advocated for
3. Implications - questions about the limits of a rule or its implications for future cases
4. Support - questions offering support for the advocate's position
5. Criticism - questions criticizing an advocate's position
6. Communicate - questions designed primarily to communicate with other justices
7. Humor - questions designed to interject humor into the argument and relieve tension
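To inspect the raw examples directly, the underlying LegalBench subset can be loaded with the `datasets` library. This is a minimal sketch: the config name `oral_argument_question_purpose` follows the LegalBench naming convention but is an assumption here, so verify it on the dataset page.

```python
from datasets import load_dataset

# Assumed LegalBench config name for this task; verify on the dataset page.
ds = load_dataset("nguha/legalbench", "oral_argument_question_purpose")

# Each example pairs a justice's question with one of the seven purpose labels;
# inspect the features to confirm the exact field names.
print(ds["test"].features)
print(ds["test"][0])
```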
| | |
|---|---|
| Task category | t2c |
| Domains | Legal, Written |
| Reference | https://huggingface.co/datasets/nguha/legalbench |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb

tasks = mteb.get_tasks(tasks=["OralArgumentQuestionPurposeLegalBenchClassification"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)  # YOUR_MODEL is a placeholder for a model name string
evaluator.run(model)
```
To learn more about how to run models on MTEB tasks, check out the GitHub repository.
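As a concrete illustration, the placeholder can be filled with any MTEB-compatible embedding model id; the model name and output folder below are examples only, not requirements of the task.

```python
import mteb

tasks = mteb.get_tasks(tasks=["OralArgumentQuestionPurposeLegalBenchClassification"])
evaluator = mteb.MTEB(tasks=tasks)

# Any embedding model known to mteb.get_model works; this one is just an example.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Scores are written as JSON under the given folder, one file per task.
evaluator.run(model, output_folder="results")
```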
## Citation
If you use this dataset, please cite both the dataset and mteb, as the version distributed here likely includes additional processing done as part of the MMTEB contribution.
```bibtex
@misc{guha2023legalbench,
archiveprefix = {arXiv},
author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
eprint = {2308.11462},
primaryclass = {cs.CL},
title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
year = {2023},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
## Dataset Statistics
The following are the descriptive statistics for this task. They can also be obtained programmatically:
```python
import mteb

task = mteb.get_task("OralArgumentQuestionPurposeLegalBenchClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 312,
"number_of_characters": 84152,
"number_texts_intersect_with_train": 0,
"min_text_length": 4,
"average_text_length": 269.71794871794873,
"max_text_length": 2152,
"unique_text": 312,
"unique_labels": 7,
"labels": {
"Background": {
"count": 57
},
"Clarification": {
"count": 83
},
"Communicate": {
"count": 14
},
"Criticism": {
"count": 51
},
"Humor": {
"count": 28
},
"Implications": {
"count": 67
},
"Support": {
"count": 12
}
}
},
"train": {
"num_samples": 7,
"number_of_characters": 2184,
"number_texts_intersect_with_train": null,
"min_text_length": 72,
"average_text_length": 312.0,
"max_text_length": 928,
"unique_text": 7,
"unique_labels": 7,
"labels": {
"Background": {
"count": 1
},
"Clarification": {
"count": 1
},
"Communicate": {
"count": 1
},
"Criticism": {
"count": 1
},
"Humor": {
"count": 1
},
"Implications": {
"count": 1
},
"Support": {
"count": 1
}
}
}
}
```
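For example, the per-label counts for the test split can be read straight off that dictionary, continuing from the `desc_stats` snippet above:

```python
# Per-label counts for the test split, taken from the statistics shown above.
label_counts = desc_stats["test"]["labels"]
for label, info in sorted(label_counts.items()):
    print(f"{label}: {info['count']}")
```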
This dataset card was automatically generated using MTEB