{
"title": "Magnushammer: A Transformer-Based Approach to Premise Selection",
"abstract": "This paper presents a novel approach to premise selection, a crucial reasoning task in automated theorem proving. Traditionally, symbolic methods that rely on extensive domain knowledge and engineering effort are applied to this task. In contrast, this work demonstrates that contrastive training with the transformer architecture can achieve higher-quality retrieval of relevant premises, without the engineering overhead. Our method, Magnushammer, outperforms the most advanced and widely used automation tool in interactive theorem proving called Sledgehammer. On the PISA and miniF2F benchmarks Magnushammer achieves (against ) and (against ) success rates, respectively. By combining Magnushammer with a language-model-based automated theorem prover, we further improve the state-of-the-art proof success rate from to on the PISA benchmark using x fewer parameters. Moreover, we develop and open source a novel dataset for premise selection,\ncontaining textual representations of (proof state, relevant premise) pairs. To the best of our knowledge, this is the largest available premise selection dataset, and the first one for the Isabelle proof assistant.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "###figure_1### Automating mathematical reasoning has been a central theme of artificial intelligence since its earliest days (De Bruijn, 1970 ###reference_b16###).\nRecently, machine learning has led to significant advancements in both informal (Lewkowycz et al., 2022 ###reference_b42###) and formal mathematical reasoning (Kaliszyk and Urban, 2015b ###reference_b34###; Alemi et al., 2016 ###reference_b3###; Polu and Sutskever, 2020 ###reference_b55###; Han et al., 2022 ###reference_b25###).\nThe latter approach, adopted in this paper, allows mechanical verification of proofs by proof assistants.\nModern mathematics development is gradual:\nit feeds upon a huge body of already established knowledge and constantly adds to it.\nProving a mathematical statement requires retrieval of facts from the knowledge base that can advance the proof. In automated reasoning literature, this retrieval process is known as premise selection.\nMany tools have been developed to tackle premise selection (Alama et al., 2011 ###reference_b1###; K\u00fchlwein et al., 2012 ###reference_b39###; Kaliszyk et al., 2017 ###reference_b35###; Bansal et al., 2019 ###reference_b5###), including a broad class known as \u201chammers,\u201d which leverage powerful automated theorem provers (ATPs) to determine useful premises (Paulson and Blanchette, 2012 ###reference_b51###; Gauthier and Kaliszyk, 2015 ###reference_b21###; Kaliszyk and Urban, 2015a ###reference_b33###; Czajka and Kaliszyk, 2018 ###reference_b15###). One such tool, Sledgehammer (SH) (Paulson and Blanchette, 2012 ###reference_b51###), has gained prominence with Isabelle (Paulson, 1993 ###reference_b50###), where it helped to create a significant portion of Isabelle\u2019s proof corpus.\nHammers are not yet available in all proof assistants (Ebner, 2020 ###reference_b19###): implementing them is challenging due to\nthe complex techniques required for different logics and type systems.\nThere is a need for an effective premise selection tool that requires less adaptation to work for different proof assistants.\nIn this study, we provide a generic, data-driven, transformer-based (Vaswani et al., 2017 ###reference_b70###) premise selection tool: Magnushammer. It constitutes a novel way to tackle the premise selection task, effective while requiring little domain-specific knowledge. Magnushammer is trained contrastively to perform premise retrieval in two stages: in the Select stage, it retrieves the most relevant premises (measured by the cosine similarity of their embeddings to that of the current proof state) from tens of thousands (the database contains 433K premises in total and typically 30K\u201350K are available in each proof state);\nin the Rerank stage, the retrieved premises are re-ranked with proof-state-aware scores:\ntokens of the proof state directly attend to tokens of the premise, giving a more contextualized relevance score. An overview of Magnushammer\u2019s architecture is shown in Figure 1(b) ###reference_sf2###.\nMagnushammer can prove of the theorems on the PISA benchmark (Jiang et al., 2021 ###reference_b30###), a substantial improvement over Sledgehammer\u2019s . 
We demonstrate that this dominance is consistent with varying controlled compute budgets, shown in Figure 1 ###reference_###.\nFurthermore, we replace the premise selection component (Sledgehammer) in a neural-symbolic model Thor (Jiang et al., 2022a ###reference_b31###) with Magnushammer and improve the state-of-the-art proof success rate on PISA from to .\nTo train Magnushammer, we extracted a premise selection dataset from the Isabelle theorem prover and its human proof libraries.\nThe dataset consists of M premise selection instances, with K unique premises.\nTo the best of our knowledge, this is the largest open-sourced premise selection dataset, and the first one of this kind for Isabelle. We find Magnushammer to be data efficient, outperforming Sledgehammer with only K training examples ( of the training data available)."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Background: proof assistants, Isabelle, and Sledgehammer",
"text": "Proof assistants (aka interactive theorem provers, or ITPs)\nsuch as\nIsabelle (Paulson, 1993 ###reference_b50###),\nLean (de Moura et al., 2015 ###reference_b18###),\nCoq (Bertot, 2008 ###reference_b7###),\nHOL Light (Harrison, 1996 ###reference_b26###),\nor Mizar (Grabowski et al., 2010 ###reference_b23###),\nare software tools designed to assist the development of formal proofs.\nThey provide expressive language for the formalization of mathematical statements and proofs while verifying them formally.\nIn Isabelle, theorems are proved sequentially:\nan initial proof state is obtained after the theorem is stated, and the proof state changes when\nthe user provides a valid proof step (see Appendix A.1 ###reference_### for an example).\nProof states contain information about the already established facts and the remaining goals to prove. Proof steps consist of tactics, which are optionally parametrized by premises.\nTactics are theorem-proving procedures and can complete some proofs in one step\nprovided with relevant premises. However, finding these premises is difficult:\none needs to select a handful of relevant facts from the current proof context,\nwhich typically contains tens of thousands of them.\n###figure_2### ###figure_3### Sledgehammer (Paulson and Blanchette, 2012 ###reference_b51###; Blanchette et al., 2013 ###reference_b8###) is a powerful automated reasoning tool for Isabelle. It belongs to a broader class of tools known as \u201chammers,\u201d which integrate automated theorem provers (ATPs) into proof assistants. The goal of these tools is to support the process of finding and applying proof methods. Sledgehammer has become an indispensable tool for Isabelle practitioners (Paulson and Blanchette, 2012 ###reference_b51###). It allows for closing low-level gaps between subsequent high-level steps of proof without the need to memorize entire lemma libraries.\nSledgehammer is designed to first pre-select a number of relevant facts heuristically, translate them together with a conjecture to simpler logic, and try to prove the conjecture using strong, external ATPs like E (Schulz, 2004 ###reference_b61###), SPASS (Weidenbach, 2001 ###reference_b73###), Vampire (Kov\u00e1cs and Voronkov, 2013 ###reference_b37###),\nZ3 (de Moura and Bj\u00f8rner, 2008 ###reference_b17###), or cvc5 (Barbosa et al., 2022 ###reference_b6###). If successful, these provers generate complete proofs. They are, however, not trusted by Isabelle. Instead, the facts used in the external proofs are extracted and used to produce a proof inside Isabelle using its native methods. Up to this last step, known as proof reconstruction, Sledgehammer is essentially used as a precise premise selection tool.\nSee Figure 1(a) ###reference_sf1### depicting the whole process.\nWhile immensely useful, Sledgehammer comes with several limitations. First, increasing computational power for Sledgehammer brings quickly diminishing returns (B\u00f6hme and Nipkow, 2010 ###reference_b10###).\nSecond, the logic projection and proof reconstruction in a hammer are not straightforward for type systems other than higher-order logic (Czajka and Kaliszyk, 2018 ###reference_b15###). Finally, Sledgehammer\u2019s performance hinges on the relevance filtering scheme, a suite of methods based on handcrafted heuristics (Meng and Paulson, 2009 ###reference_b44###) or classical machine learning (K\u00fchlwein et al., 2013 ###reference_b40###). 
Such approaches are unlikely to efficiently utilize the constantly growing body of proof data.\nWe argue that all these limitations can be overcome with deep-learning-based approaches. Neural networks have shown remarkable effectiveness in end-to-end problem solving with little or no feature engineering (Krizhevsky et al., 2012 ###reference_b38###; Brown et al., 2020 ###reference_b12###). Adopting textual representations with generic neural solutions removes the need for logic projection, ATP solving, and proof reconstruction.\nMoreover, large language models have recently displayed impressive scaling properties with respect to both model size (Kaplan et al., 2020 ###reference_b36###) and data (Hoffmann et al., 2022 ###reference_b27###)."
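To make the hammer pipeline described above concrete, the following Python sketch traces its stages end to end. It is purely schematic: every argument (relevance_filter, translate, atps, reconstruct) is a hypothetical stand-in for a component of a real hammer such as Sledgehammer, not an actual API.

```python
def hammer(conjecture, facts, relevance_filter, translate, atps, reconstruct):
    """Schematic of a hammer: pre-select facts, translate to a simpler logic,
    call external ATPs, and reconstruct a native proof from the facts used.
    All arguments are hypothetical stand-ins, not a real Isabelle interface."""
    selected = relevance_filter(conjecture, facts)   # heuristic pre-selection
    problem = translate(conjecture, selected)        # e.g. into first-order logic
    for atp in atps:                                 # E, SPASS, Vampire, Z3, cvc5, ...
        proof = atp(problem)                         # external, untrusted proof
        if proof is not None:
            # Only the premises the ATP actually used are kept; Isabelle then
            # re-proves the conjecture internally with its own methods.
            return reconstruct(conjecture, proof.used_facts)
    return None                                      # no external prover succeeded
```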
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Magnushammer",
"text": "The goal of premise selection is to find relevant mathematical facts for a given proof state. We focus on selecting premises with a neural model informed by their textual representations instead of relying on fact structures like Sledgehammer (see Section 2 ###reference_###). The core idea of Magnushammer is to combine fast retrieval based on representational similarity (Select) with a more accurate re-ranking (Rerank), as outlined in Algorithm 1 ###reference_###. Our method closely follows those of Nogueira and Cho (2019 ###reference_b48###) and Izacard et al. (2021 ###reference_b29###). This hierarchical approach is scalable to large formal libraries containing hundreds of thousands of facts. Below we describe the two-stage Magnushammer approach.\nSelect leverages representation similarity and is based on batch-contrastive learning similar to the methods of Alemi et al. (2016 ###reference_b3###), Bansal et al. (2019 ###reference_b5###), Han et al. (2021 ###reference_b24###), or Radford et al. (2021 ###reference_b58###). Select embeds premises and proof states into a common latent space and uses cosine similarity to determine their relevance. During inference, it requires only one pass of a neural network to compute the proof state embedding and dot product with cached premise embeddings. Select is hence fast and scalable to large sets of premises. In our experiments, there are between K and K premises in a typical proof state context, from which we select most relevant ones.\nRerank scores the relevance of the selected premises for the current proof state by analyzing the pairs.\nRerank is trained to output the probability of the being relevant to the . The premises retrieved by Select are re-ranked with respect to these probabilities, and the final list comprises of the top premises (we set ). Having both the premise and the proof state in a single input allows Rerank to be more accurate. However, at the same time, it is much slower, as each pair must be scored individually."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Datasets",
"text": "We created and released111https://huggingface.co/datasets/Simontwice/premise_selection_in_isabelle ###reference_/premise_selection_in_isabelle### a comprehensive dataset of textual representations for Isabelle\u2019s proof states and premises.To the best of our knowledge, this is the first high-quality dataset of this kind for Isabelle, and also the largest premise selection dataset overall. We used the two largest collections of Isabelle theories to create the dataset: the Archive of Formal Proofs ###reference_www.isa-afp.org/### and the Isabelle Standard library ###reference_###.\nFor every proof step in every proof from these collections, we extracted the preceding proof state and the set of premises used in the proof step; this was turned into pairs constituting training data points. We call this the Human Proofs Library (HPL) dataset. In addition, we used Sledgehammer to generate proofs that are different from the human ones by using potentially alternative premises. We refer to this as the SH partition, and its union with HPL constitutes the Machine-Augmented Proofs Library (MAPL) dataset. Statistics for all these datasets are given in Table 1 ###reference_###. Note that MAPL grosses over M data points.\nBelow we describe in more detail how data points are extracted from a proof step.\nAn Isabelle\u2019s proof is a sequence of\n pairs: has the state information, and is a tactic application that advances the proof. A may use : theorems, lemmas, or definitions established previously. Suppose a contains premises: . We then extract data points: .\nExecuting Sledgehammer on the may result in multiple different synthetic s, and data points can be extracted from each in the same way (see Appendix A.2 ###reference_### for details).\nMining the HPL partition took K CPU hours, and mining the SH partition took K CPU hours (17 CPU years) on a distributed system.\nOur datasets have distinguishing features:\nThe human-originating dataset is augmented by alternatives generated with Sledgehammer, which results in a significantly larger and more diverse dataset.\nThis also decreases the probability of sampling false negatives while training contrastively: a negative example may in fact be positive, but we just have not seen an alternative proof using . Generating multiple alternative proofs partially remedies this problem.\nBoth s and s are represented as\n\u201chigh-level\u201d Isabelle\u2019s text\ninstead of \u201clow-level\u201d logical formalism like, e.g., TPTP (Sutcliffe, 2017 ###reference_b63###) used by Alama et al. (2014 ###reference_b2###).\nThis makes the dataset more suitable for language models, decreases the\nneed for feature engineering, and facilitates cross-proof-assistant\npre-training (Conneau and Lample, 2019 ###reference_b14###)."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experiments",
"text": "We evaluate Magnushammer on the PISA and miniF2F theorem proving benchmarks using proof success rate as a metric. Our main result is that Magnushammer outperforms Sledgehammer by a large margin and, combined with Thor (Jiang et al., 2022a ###reference_b31###), sets a new state of the art on the PISA benchmark ( from ). Through ablations, we study the effectiveness of Magnushammer and the contribution of its components. Additional results and details can be found in Appendix E ###reference_###."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Experimental details",
"text": "For evaluation, we use PISA (Jiang et al., 2021 ###reference_b30###) and miniF2F (Zheng et al., 2022 ###reference_b78###) benchmarks.\nPISA contains problems randomly selected from the Archive of Formal Proofs;222When training on data from the Archive of Formal Proofs, we remove the subset of it appearing in PISA. we use the same problems as Jiang et al. (2022a ###reference_b31###) for our evaluations. miniF2F consists of high-school competition-level problems, split into validation and test set, each with problems.\n###table_1### ###table_2### To evaluate the performance, we measure proof success rate: the percentage of successful proofs. A proof is successful if it is formally verified by Isabelle. We distinguish single-step and multi-step settings. In the single-step setting, we check if the theorem can be proven in one step by applying premises retrieved by the evaluated premise selection method (e.g., Magnushammer). In the multi-step scenario, we perform a proof search using a language model following Thor (Jiang et al., 2022a ###reference_b31###).\nThor + Magnushammer uses Magnushammer instead of Sledgehammer as the premise selection component. A further explanation is given in Section 5.2 ###reference_.SSS0.Px2###.\nAlgorithm 3 ###reference_### (in Appendix D ###reference_###) details the evaluation of Magnushammer in the single-step setting. It generates proof steps by combining each tactic with top premises from a ranking provided by Magnushammer, where is a prescribed set of tactics, , and is a list of integers. Such constructed proof steps are then executed in Isabelle. We define the computational budget for such an evaluation as , where is a timeout expressed in seconds (we use s as we observed little benefit from increasing it).\nEstimating the computational budget for Sledgehammer is difficult due to its complex internal architecture. We approximate it by , where is the \u2018number of CPU cores\u2019 (corresponding to steps executed in parallel) and is the timeout. We use for our calculations. See Appendix A.4 ###reference_### for more details.\nFor our main experiments, we pre-train standard decoder-only transformer models with M and M non-embedding parameters and fine-tune them for downstream tasks of premise selection or proof step generation. Full details are given in Appendix C ###reference_###.\nIn our experiments, we use the Portal-to-ISAbelle API (Jiang et al., 2021 ###reference_b30###) to interact with Isabelle."
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Results on PISA and miniF2F benchmarks",
"text": "Our main empirical results, summarized in Table 2 ###reference_### and Table 3 ###reference_###, were obtained with the M parameter model. Figure 1 ###reference_### and Section 5.2.1 ###reference_.SSS1### deepen this study, showing that Magnushammer outperforms Sledgehammer across a broad spectrum of computational budgets.\nIn the single-step setting, Magnushammer outperforms Sledgehammer by a wide margin on both PISA ( vs. ) and miniF2F ( vs. ).\nAdditionally, on PISA, Magnushammer outperforms TF-IDF and BM25: text-based, non-trainable retrieval methods (Robertson and Zaragoza, 2009 ###reference_b59###) which are strong baselines in common retrieval benchmarks (Thakur et al., 2021 ###reference_b65###). This suggests that Magnushammer is able to learn more than just superficial text similarity.\nIn all these experiments we used the same evaluation protocol (following Algorithm 3 ###reference_###) and computational budget of as detailed in Appendix D.1 ###reference_###.\nInterestingly, retrieval based on the generic OpenAI embeddings (Neelakantan et al., 2022 ###reference_b46###) (specifically: text-embedding-ada-002) yields reasonable performance comparable to Sledgehammer. This confirms the potential of neural premise selection to replace traditional symbolic methods. There is, however, a large gap to match Magnushammer. This shows that contrastive fine-tuning on our dataset provides non-trivial gains and supports our hypothesis that Magnushammer learns more than just mere textual similarity exploited by the general purpose method.\nNeural theorem provers utilize language models to generate proof steps, following the approach proposed by Polu and Sutskever (2020 ###reference_b55###). This allows for the creation of more complex, multi-step proofs. The proof generation involves sampling a proof step from the language model, verifying it, and repeating this process until the proof is closed or the computational budget is exceeded. The best-first search algorithm is often used to explore the most promising proof steps.\nThor (Jiang et al., 2022a ###reference_b31###) augments neural theorem provers with premise-selection capabilities. To this end, Thor allows the model to generate proof steps using Sledgehammer, which we replace with Magnushammer (see Appendix D.2 ###reference_### for details). Thor + Magnushammer establishes a new state of the art on the PISA benchmark ( vs. ).\nOn miniF2F, our method also significantly outperforms Thor and achieves results competitive with the current state of the art. In these experiments, we give Magnushammer a computational budget of .\nIt is important to note that other theorem-proving approaches in the multi-step\nsection of Table 3 ###reference_### require much larger language models:\nfor Thor it is M non-embedding parameters; DSP (Draft, Sketch, and Prove) by Jiang et al. (2022b ###reference_b32###) uses\nMinerva model (Lewkowycz et al., 2022 ###reference_b42###) with B parameters. Moreover, these other\napproaches rely on ideas orthogonal to premise selection. Specifically, Thor +\nauto (Wu et al., 2022a ###reference_b74###) proposes a variation of Thor, involving expert\niteration on auto-formalized data. DSP involves creating a high-level outline\nof a proof and uses Sledgehammer to solve the low-level subproblems. We\nhypothesize that both methods would perform even better when combined with\nMagnushammer."
},
{
"section_id": "5.2.1",
"parent_section_id": "5.2",
"section_name": "5.2.1 Scaling computational budget",
"text": "In this section, we discuss how the quality of premise selection methods varies with the computational budget available during evaluation. Figure 1 ###reference_### shows the results, and\nthe definition of the compute budget is provided in Section 5.1 ###reference_.SSS0.Px3###.\nNotably, Magnushammer outperforms Sledgehammer even with very limited computational resources, and it scales well, particularly within the medium budget range.\nFor Magnushammer and BM25, we use Algorithm 3 ###reference_###\n(Appendix D ###reference_###)\nin various configurations (i.e., settings of and ). We start with one tactic, , and , which yields (recall that s). We then gradually add more tactics to and more values to . The final setup uses and containing all powers of , from up to , which yields . Details are provided in Appendix D ###reference_###. For Sledgehammer, we scale the timeout parameter up to s."
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Impact of training data",
"text": "We study how the amount and type of data impact the proof success rate by comparing HPL and MAPL datasets. For this comparison, we used models with M non-embedding parameters and a computational budget of .\nOur method is data-efficient: see Figure 2(a) ###reference_sf1###. We observe that Magnushammer fine-tuned on only of MAPL \u2013 equivalent to approximately K samples \u2013 is already able to outperform Sledgehammer. This indicates that when starting from a pre-trained model, Magnushammer is a promising approach for addressing premise selection in theorem-proving environments with limited training data. The effect of pre-training diminishes as the amount of training data increases.\nFine-tuning on MAPL or HPL leads to subtle differences ( vs. when the whole datasets are used). This outcome may be attributed to the impact of model pre-training and the fact that the HPL dataset is rich enough to obtain good performance on the PISA benchmark (as observed in the previous paragraph). We speculate that the bigger MAPL dataset might be essential for future harder benchmarks and scaling up the model size.\n###figure_4### ###figure_5###"
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "Ablations",
"text": "We use models trained on the MAPL dataset and evaluate them with a computational budget of .\nTo study how the performance of our method depends on the model size, we vary the number of layers and embedding dimension .\nA positive correlation between the model size and the proof rate is shown in\nFigure 2(b) ###reference_sf2###. We observe that even a tiny model with\nK parameters () outperforms Sledgehammer ( vs.\n). We also note the benefit of pre-training and that scaling the number\nof layers is more beneficial than scaling the embedding dimension. Details\ncan be found in Appendix\nC.1 ###reference_###. The impact of re-ranking is studied in Appendix\nC.5 ###reference_###."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Related work",
"text": "Premise selection becomes a crucial task whenever proving theorems\nautomatically within a large formal library. Moreover, this task has several\nunique aspects that are challenging from the perspective of learning-based\napproaches. Therefore, there exist multiple works that tackle learning premise\nselection (either explicitly or implicitly) applying various methods focusing\non different aspects.\nMany works employ classical machine learning like Bayesian and kernel\nmethods (K\u00fchlwein et al., 2012 ###reference_b39###; Alama et al., 2014 ###reference_b2###),\n-NN (Blanchette et al., 2016 ###reference_b9###), or decision trees\n(Piotrowski and Urban, 2018 ###reference_b52###; Nagashima and He, 2018 ###reference_b45###; Piotrowski et al., 2023 ###reference_b54###). The common weakness of these approaches is\nthe necessity of using hand-engineered features, whereas faster, simpler training is an advantage.\nAlemi et al. (2016 ###reference_b3###) were the first to apply deep learning to premise selection,\nthus dispensing with the hand-designed features completely. Their approach was evaluated\nin an automated theorem proving setting and not in a proof assistant, as is Magnushammer.\nThey also implicitly learn embeddings of conjectures and premises, which are concatenated and\npassed through a shallow network, whereas the training signal comes from the logistic loss.\nIn contrast, Magnushammer demonstrated the strength of training with the\ncontrastive loss, where the obtained embeddings just need to be passed through a\nsimple cosine similarity measure to provide high-quality rankings.\nMost of the methods explicitly targeting the premise selection problem\n(including this work) retrieve a ranking of independently treated premises.\nIn contrast, Piotrowski and Urban (2020 ###reference_b53###) aimed at modelling the implicit\ndependencies between the premises and used LSTM-based language models\nto produce structured sequences of premises. However, the premises were treated\nthere as opaque tokens, not giving the neural model the ability to inspect the\nstatements of the premises.\nEffective deep learning approaches often leverage the explicit structure of\nmathematical expressions using graph neural networks (Wang et al., 2017 ###reference_b72###; Paliwal et al., 2020 ###reference_b49###; Goertzel et al., 2022 ###reference_b22###).\nOur work uses the transformer architecture (Vaswani et al., 2017 ###reference_b70###), which\nis highly scalable and capable of producing powerful representations of raw text\ndata.\nPre-trained transformer language models have been applied to various aspects of\ntheorem proving, including autoformalization (Wu et al., 2022a ###reference_b74###; Jiang et al., 2022b ###reference_b32###), conjecturing (Urban and Jakubuv, 2020 ###reference_b67###), and tactic prediction / proof step\nsearch (Yang and Deng, 2019 ###reference_b76###; Polu and Sutskever, 2020 ###reference_b55###; Han et al., 2022 ###reference_b25###; Lample et al., 2022 ###reference_b41###; Polu et al., 2023 ###reference_b56###). The works from the last category often implicitly deal with\npremise selection by treating premises as names / tokens to be generated and not\ninspecting their statements. The application of generative language models to\nstatement-aware premise selection has been limited, as the length of the\npossible premises often greatly exceeds the context of several thousand tokens\nthat the models are designed to handle. 
Thor (Jiang et al., 2022a ###reference_b31###) circumvents\nthe difficulty of premise selection by invoking Sledgehammer. In contrast,\nMagnushammer retrieves rather than generates to overcome the context length\nlimitation. Therefore it can be used in tandem with other models (its\ncombination with Thor is demonstrated in Section 5 ###reference_###).\nBatch-contrastive learning is widely used in speech\n(van den Oord et al., 2018 ###reference_b69###), text (Izacard et al., 2021 ###reference_b29###), image (Chen et al., 2020 ###reference_b13###)\nand image-text (Radford et al., 2021 ###reference_b58###) representation learning. These\nmethods have proven effective despite the possibility of false negatives\noccurring in contrastive batches (Robinson et al., 2021 ###reference_b60###). The Select phase of our premise selection model relies on in-batch negative examples to\ntrain the retriever, similar to HOList (Bansal et al., 2019 ###reference_b5###) and\nContriever (Izacard et al., 2021 ###reference_b29###). Like HOList, we mine additional negatives, which\nwe found crucial for performance. The Rerank stage closely resembles\n(Nogueira and Cho, 2019 ###reference_b48###), but instead of using BM25, we jointly train\nretrieval and re-ranking, utilizing premises retrieved by Select as hard\nnegatives for Rerank training.\nHan et al. (2021 ###reference_b24###) use contrastive learning in informal\npremise selection.\nConcurrently to our work, Yang et al. (2023 ###reference_b77###) develop a premise selection\nmethod for Lean also using contrastive learning in a way similar to our Select method, but without the Rerank stage.\nThere are multiple lines of work considering datasets based on formal theorem proving.\nThese include benchmarks like ProofNet (Azerbayev et al., 2022 ###reference_b4###) for Lean, and miniF2F (Zheng et al., 2022 ###reference_b78###) that supports multiple ITPs.\nThese datasets only focus on evaluation, not providing data for training the models. Another line of research focuses on benchmarking machine learning models\u2019 reasoning capabilities while also providing training data (Bansal et al., 2019 ###reference_b5###; Li et al., 2021 ###reference_b43###; Han et al., 2022 ###reference_b25###). Existing public datasets for premise selection include the ones introduced in (Alama et al., 2014 ###reference_b2###; Piotrowski and Urban, 2020 ###reference_b53###). In comparison to these works, we publish the data in high-level, textual format, as seen in Isabelle, instead of low-level, structured languages such as TPTP (Sutcliffe, 2017 ###reference_b63###).\nThere exists a rich body of work developing complex hammers systems for\ndifferent proof assistants\n(Paulson and Blanchette, 2012 ###reference_b51###; Kaliszyk and Urban, 2015a ###reference_b33###; Gauthier and Kaliszyk, 2015 ###reference_b21###; Czajka and Kaliszyk, 2018 ###reference_b15###). Unlike the\ntraditional hammers, our method does not depend on external ATPs and requires\nlittle domain-specific knowledge."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Limitations and future work",
"text": ""
}
],
"appendix": [
{
"section_id": "Appendix x1",
"parent_section_id": null,
"section_name": "Appendix",
"text": ""
},
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Isabelle environment",
"text": "This section contains visual examples of proofs in Isabelle and provides some configuration details of the environment.\nFigure A.1 ###reference_### shows an example theorem and its proof, as seen in Isabelle\u2019s most popular IDE, jEdit. The theorem comes from an entry to the Archive of Formal Proofs \u2013 Fun With Functions [Nipkow, 2008 ###reference_b47###]. It states that any mapping from the set of natural numbers to itself that satisfies must be the identity function. The proof starts with a simple induction and then refines the result to arrive at the thesis. This problem was included in Terence Tao\u2019s booklet Solving Mathematical Problems [Tao, 2010 ###reference_b64###].\n###figure_6### ###figure_7### This section describes how to generate alternative proof steps using Sledgehammer which we do to obtain datasets described in Section 4 ###reference_###. First, we find all intermediate propositions within the proof (they can be nested) and try to replace the proof of the proposition with a Sledgehammer step. If successful, we record such a step in the dataset and proceed with both the original and the alternative proof. Figure A.3 ###reference_### provides a visual example of the aforementioned propositions.\n###figure_8### Figure A.4 ###reference_### contains a multi-step proof of the irrationality of written in Isabelle. The proof contains multiple usages of tactics that require premises.\n###figure_9### We set up Sledgehammer in Isabelle 2021-1, following the configuration used by Jiang et al. [2022a ###reference_b31###]. We run Sledgehammer using different sets of settings and calculate the total proof rate by taking the union of problems solved by each run. The Sledgehammer timeout is set to default seconds. We use only on-machine automated theorem provers (same as Isabelle environment), so external provers used by Sledgehammer are the following: Z3, SPASS, Vampire, CVC4, and E.\nIn our calculation of the Sledgehammer computation budget, see Section 5.1 ###reference_###, we assume \u2019CPU cores.\u2019 We run our experiments on machines with CPU cores, making the assumption realistic. Moreover, we emphasize that the performance gap between Magnushammer and Sledgehammer is large enough that altering the value of , e.g., to an unrealistic level , would not qualitatively change conclusions."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Details of Magnushammer",
"text": "Select stage is trained using the InfoNCE loss van den Oord et al. [2018 ###reference_b69###] defined as:\nwhere is a query (a proof state), is a positive premise (a ground truth from the dataset), are negative premises. We define as cosine similarity between proof state and premise embeddings; is a non-trainable temperature parameter. We list our hyperparameter choices in section C.2 ###reference_###.\nPremise retrieval task can be cast as binary classification, trying to determine if a given pair is relevant. Applying classification to each pair is computationally infeasible, however, it could be used to re-rank a small set of premises retrieved by Select.\nNamely, we use the following cross-entropy loss:\nwhere is the output of the Rerank part of the model (see \u201dSigmoid\u201d in Figure 1(b) ###reference_sf2###) for a given pair. Typically, we sample a batch of positive pairs from the dataset. For each such pair negatives are constructed from the most likely false positives returned by Select. Specifically, negative premises , which are facts that were never used as a premise for , are first chosen. Then, the top of according to Select are selected, and are sampled from them to construct negative pairs, which are included in .\nWe train Magnushammer as two separate tasks alternating update steps as presented in Algorithm 2 ###reference_###. Note that the backbone of the architecture is shared between Select and Rerank, thus such multi-task training is potentially more effective than having two separate models. Calculation of the negative premises for Select is costly, thus for efficiency reasons we recalculate the top premises, see Section B.2 ###reference_###, every steps in the function, as outlined in the Algorithm 2 ###reference_###."
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Training details",
"text": "We use a decoder-only transformer architecture, following the setup from Wang and Komatsuzaki [2021 ###reference_b71###] and using rotary position embedding by Su et al. [2021 ###reference_b62###], a variation of relative positional encoding. The feedforward dimension in the transformer block is set to where denotes embedding dimension, and the number of attention heads is . Our M model has layers and an embedding dimension of . The larger M model consists of layers and has . For all the models, we use the original GPT-2 tokenizer [Radford et al., 2019 ###reference_b57###].\nIn Select, we append a specialized token at the end of the sequence to compute the embedding for a proof state and linearly project its embedding. Premises are embedded analogously. Similarly to Radford et al. [2021 ###reference_b58###] that train separate projections for images and captions, we train separate proof state and premise projections and share the transformer backbone (see Figure 1(b) ###reference_sf2###). Analogously for Rerank, we compute the relevance score by taking the embedding of the last token and then projecting it to a scalar value.\nWe performed the following hyperparameter sweeps. We note that we have not observed significant differences between obtained results.\nLearning rate: , chosen:\nDropout: , chosen:\nWeight decay: , chosen:\nBatch size in Select: , chosen:\nNumber of negatives in Select: , chosen:\nTemperature for InfoNCE loss in Select: , chosen:\nBatch size for Rerank: , chosen\nNumber of negatives per proof state in Rerank: , chosen: .\nPre-training has been shown to dramatically increase the capabilities and performance of decoder-only models on tasks other than language modeling [Howard and Ruder, 2018 ###reference_b28###]. Motivated by that, we pre-train our models on GitHub and arXiv subsets of the Pile [Gao et al., 2021 ###reference_b20###]. The models are trained for M steps, with a context length of . Global batch size is set to sequences giving a total number of tokens per batch. Dropout is disabled, and weight decay is set to . The learning rate increases linearly from to for the first steps, and then the cosine schedule is applied to decrease its value gradually.\nWe train Magnushammer by taking a pre-trained language model, removing its language modeling head, and attaching three linear projections heads \u2013 one projection for proof state embedding, another one for premise embedding, and the last one for producing relevance score for Rerank, as depicted in Figure 1(b) ###reference_sf2### and described in Section C.1 ###reference_###. For the proof step generation task, we fine-tune our language models by applying the algorithm used to train Thor [Jiang et al., 2022a ###reference_b31###].\nWe find that the Select-only method, i.e., Magnushammer without the Rerank phase, already significantly outperforms Sledgehammer. Tested on the M model, it achieves a proof rate comparable to obtained by Magnushammer.\nSelect-only mode is a computationally appealing alternative, as it only needs a single forward pass to embed the current proof state (the setting used recently by Yang et al. [2023 ###reference_b77###].) 
Premise embeddings can be pre-computed and cached, allowing inference on the CPU without the need for GPU or TPU accelerators.\nWe gratefully acknowledge that our research was supported with Cloud TPUs from Google\u2019s TPU Research Cloud (TRC).\nWe use TPU virtual machines from the Google Cloud Platform (GCP) for all stages: pre-training, fine-tuning, and evaluation. Each TPU virtual machine has 8 TPU v3 cores, 96 CPU cores, and over 300GB of RAM. TPU v3 cores have around 16GB of memory each. The Isabelle environment is set to have access to 32 CPU cores."
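The head arrangement described above might look as follows in PyTorch; the module layout and pooling by the final (special) token are illustrative assumptions, with backbone standing in for the shared decoder-only transformer.

```python
import torch.nn as nn

class PremiseSelectionHeads(nn.Module):
    """Shared backbone with three linear heads: proof-state and premise
    projections for Select, and a scalar relevance head for Rerank."""
    def __init__(self, backbone: nn.Module, d: int):
        super().__init__()
        self.backbone = backbone              # maps token ids -> [batch, seq, d]
        self.state_proj = nn.Linear(d, d)     # proof-state embedding head
        self.premise_proj = nn.Linear(d, d)   # premise embedding head
        self.rerank_proj = nn.Linear(d, 1)    # relevance score (sigmoid applied later)

    def _last_token(self, tokens):
        return self.backbone(tokens)[:, -1]   # embedding of the appended special token

    def embed_state(self, tokens):
        return self.state_proj(self._last_token(tokens))

    def embed_premise(self, tokens):
        return self.premise_proj(self._last_token(tokens))

    def rerank_logit(self, pair_tokens):      # input: a concatenated (state, premise) pair
        return self.rerank_proj(self._last_token(pair_tokens)).squeeze(-1)
```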
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Magnushammer evaluation",
"text": "In Algorithm 3 ###reference_### we outline our evaluation method described in Section 5.1 ###reference_###. To generate proof steps, we use the following tactics:\nsmt, metis, auto, simp, blast, meson, force, eval, presburger, linarith. Algorithm 3 ###reference_### is also used to evaluate BM25, where we select with this retrieval method instead of Magnushammer.\nFor our main result (Section 5.2 ###reference_###), we allocate the computational budget of as follows: apart from the powers of two from to , we also try the following values: , which in total gives values. With each of these values, tactics are used with timeout , yielding .\nFor the ablation studies, we only use powers of two from to , and the same set of tactics, which gives .\nTo generate more complex proofs we combine Thor [Jiang et al., 2022a ###reference_b31###] with Magnushammer as introduced in multi-step setting in Section 5.2 ###reference_.SSS0.Px2###.\nFirstly, we follow the procedure described in Jiang et al. [2022a ###reference_b31###] to pre-process training data and fine-tune our pre-trained language model for the proof generation task (pre-training details can be found in Appendix C.3 ###reference_###). During the evaluation, when the language model generates the <hammer> token, we call our method instead of Sledgehammer. More specifically, we use an augmented Algorithm 3 ###reference_### that returns the proof states resulting from applying the steps (instead of returning binary information on whether any of the steps closed the proof). We then pick at most s = 2 states among these and add them to the BFS queue.\nWe assign the same computational budget as proposed in Thor, with the only difference being\nthat each proof_step has a timeout limit of s (instead of s), which we found to perform better in our setup.\nThe search is terminated if and only if one of the following scenarios happens: (1) a valid proof has been found for the theorem; (2) the language model is queried 300 times; (3) a wall-time timeout of s has been reached (assuming parallel execution of Magnushammer steps); (4) the queue is empty but the theorem is not proved.\nWe keep the same maximum length of the queue equal to ."
},
{
"section_id": "Appendix 5",
"parent_section_id": null,
"section_name": "Appendix E Additional experimental results",
"text": "We provide additional details for our main experiments and ablations.\nWe observed that different tactics use different subsets of premises. This motivated us to extend the context given to our model with tactic prompt. Namely, provide the tactic name as an additional argument to the premise selection model, similarly to Bansal et al. [2019 ###reference_b5###]. Prompting model with the tactic name does not yield significant improvements. However, it allows the model for a more accurate premise selection. Namely, as presented in Figure A.5 ###reference_### and Table 6 ###reference_###, we observe that premises necessary to close the proof are ranked higher. This motivates an alternative performance metric presented in the next section.\n\n###figure_10### Consider the number of premises used to generate steps in Algorithm 3 ###reference_### (parameter in the for-loop). Intuitively, the fewer premises needed the better, since it means that all the premises necessary to close the proof are ranked higher (high recall), thus the model does a more accurate premise selection. In other words, a better retrieval model should be able to score all the necessary facts higher and push unnecessary facts down the list.\nTo compare different models we fix a set of tactics and accumulate problems solved as we increase the number of premises used to generate steps in Algorithm 3 ###reference_###. This is presented in Table 6 ###reference_### and Figure A.5 ###reference_###.\nNamely, for each , we count the number of problems solved using at most premises.\nEffectively, each new value of adds one new step per tactic to try.\nIt is non-trivial to estimate the lower bound on how many problems can be closed directly from the root state in a single proof step.\nTo answer this question, we use different models in Algorithm 3 ###reference_### and take the union of problems solved by them. Namely, we ensemble the results of the Magnushammer variations introduced in previous sections:\nMagnushammer-86M, Magnushammer-38M, Magnushammer-Select, Sledgehammer, BM25, and the models presented in Section 5.4 ###reference_###. Such a combination successfully closes of the proofs."
},
{
"section_id": "Appendix 6",
"parent_section_id": null,
"section_name": "Appendix F Examples of proofs found by Magnushammer",
"text": "In Sledgehammer, once one of the external provers found a proof, it is likely\nthat it can be reproduced inside Isabelle (but not always, as reported by\nPaulson and Blanchette [2012 ###reference_b51###]). The external provers significantly reduce the\nnumber of premises passed to the reproduction step, therefore the Isabelle\u2019s\nproof will be short. The major bottleneck of Sledgehammer, however, is the\npre-selection step: the external provers often cannot find a proof because they\nare provided too few \u2013 or too many \u2013 premises.\nIn Magnushammer, on the other hand, we skip the external provers completely and\ninput premises directly into the native Isabelle\u2019s tactics to produce a proof.\nThis means that the prediction must be of high quality in order to obtain good\nresults. The number of the premises will be typically larger \u2013 therefore the\nproofs will be longer, and of form of a combination of a strong tactic and a\nlong list of premises as its arguments.\nAs an example demonstrating the difference between Magnushammer and\nSledgehammer from the perspective of produced proofs, let\u2019s see two proofs of\nthe algebraic theorem set_r_ar_cos_ker from the Archive of Formal\nProofs:333from the theory Group-Ring-Module/Algebra4.thy, accesible at\nhttps://search.isabelle.in.tum.de/#theory/default_Isabelle2022_AFP2022/Group-Ring-Module/Algebra4 ###reference_default_Isabelle2022_AFP2022/Group-Ring-Module/Algebra4###\nSledgehammer\u2019s proof:\nMagnushammer\u2019s proof:\nBoth Sledgehammer and Magnushammer were able to solve it, however, the latter used more premises. This is expected: whenever both methods find a proof, the Magnushammer\u2019s proof is often longer in the sense of the number of premises used. Yet, Sledgehammer\u2019s weaker pre-selection scheme causes it to find fewer proofs in comparison.\nAn example of a theorem that Sledgehammer was unable to prove (with a generous\ntime limit of 60 s), but Magnushammer has proven, is lemma\nunit_disc_fix_moebius_uminus.444from the theory\nComplex_Geometry/Unit_Circle_Preserving_Moebius.thy, accessible at\n\nhttps://search.isabelle.in.tum.de/#theory/default_Isabelle2022_AFP2022/Complex_Geometry/Unit_Circle_Preserving_Moebius ###reference_default_Isabelle2022_AFP2022/Complex_Geometry/Unit_Circle_Preserving_Moebius###\nThe proof produced by Magnushammer consists of the smt tactic and a list of\npremises. Thus, Magnushammer was able to retrieve the necessary premises in\ncontrast to Sledgehammer:"
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table ltx_align_floatright\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Statistics of MAPL and both its partitions: HPL (coming from human-written proofs) and SH (coming from Sledgehammer-generated proofs). The data points are of the form of pairs.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.3.1.1.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T1.3.1.1.2.1\">HPL</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T1.3.1.1.3.1\">SH</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.1.1.4\" style=\"padding-left:7.0pt;padding-right:7.0pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T1.3.1.1.4.1\">MAPL</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.3.2.1.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">Data points</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">1.1M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">3.3M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.2.1.4\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">4.4M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.3.3.2.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">Unique proof states</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">570K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">500K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.2.4\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">570K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.3.4.3.1\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">Unique premises</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.4.3.2\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">300K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.4.3.3\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">306K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.3.4.3.4\" style=\"padding-left:7.0pt;padding-right:7.0pt;\">433K</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Statistics of MAPL and both its partitions: HPL (coming from human-written proofs) and SH (coming from Sledgehammer-generated proofs). The data points are of the form of pairs.\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Proof rates on the PISA benchmark. On the single-step task,\nMagnushammer outperforms both Sledgehammer and BM25 by a wide margin. On\nthe multi-step task, Magnushammer combined with Thor achieves the\nstate-of-the-art proof rate of .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T2.10\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.10.9.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T2.10.9.1.1\">Task</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T2.10.9.1.2\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.10.9.1.3\">Proof rate (%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.3.1.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.3.1.3\">BM25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.2\">\n<td class=\"ltx_td\" id=\"S5.T2.4.2.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.4.2.3\">TF-IDF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.3.2\">Single-step</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.3.3\">OpenAI embed. \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Neelakantan et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib46\" title=\"\">2022</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.3.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.4\">\n<td class=\"ltx_td\" id=\"S5.T2.6.4.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.4.3\">Sledgehammer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.4.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.7.5\">\n<td class=\"ltx_td\" id=\"S5.T2.7.5.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.5.3\">Magnushammer (ours)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.5.1\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T2.7.5.1.1\">59.5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.6\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.8.6.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.8.6.3\">LISA\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Jiang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib30\" title=\"\">2021</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.8.6.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.9.7.2\">Multi-step</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.9.7.3\">Thor\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Jiang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib31\" title=\"\">2022a</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.9.7.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.10.8\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.10.8.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.10.8.3\">Thor + Magnushammer (ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.10.8.1\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" 
id=\"S5.T2.10.8.1.1\">71.0</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 2: Proof rates on the PISA benchmark. On the single-step task,\nMagnushammer outperforms both Sledgehammer and BM25 by a wide margin. On\nthe multi-step task, Magnushammer combined with Thor achieves the\nstate-of-the-art proof rate of ."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Proof rates on the miniF2F benchmark. On the single-step task, Magnushammer outperforms Sledgehammer and its variant with additional heuristics <cite class=\"ltx_cite ltx_citemacro_citep\">(Jiang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib32\" title=\"\">2022b</a>)</cite>. On the multi-step task, Thor+Magnushammer obtains competitive results, significantly outperforming Thor+Sledgehammer.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.14\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.14.15.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T3.14.15.1.1\">Task</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T3.14.15.1.2\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.14.15.1.3\">Valid\u00a0(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.14.15.1.4\">Test\u00a0(%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.2.2\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.2.2.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.2.2.4\">Sledgehammer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.4.4.3\">Single-step</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.4.4.4\">Sledgehammer + heuristics</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6\">\n<td class=\"ltx_td\" id=\"S5.T3.6.6.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.6.6.4\">Magnushammer (ours)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.5.5.1\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T3.5.5.1.1\">33.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.6.2\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T3.6.6.2.1\">34.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.8.8\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T3.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.8.8.4\">Thor + Sledgehammer\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Jiang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib31\" title=\"\">2022a</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.10.10.3\">Multi-step</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.10.10.4\">Thor + Sledgehammer + auto \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Wu et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib74\" title=\"\">2022a</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.12.12\">\n<td class=\"ltx_td\" id=\"S5.T3.12.12.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.12.12.4\">Thor + Magnushammer (ours)</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S5.T3.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.12.12.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.14.14\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T3.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.14.14.4\">DSP\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Jiang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#bib.bib32\" title=\"\">2022b</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.13.13.1\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T3.13.13.1.1\">43.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.14.14.2\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S5.T3.14.14.2.1\">39.3</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 3: Proof rates on the miniF2F benchmark. On the single-step task, Magnushammer outperforms Sledgehammer and its variant with additional heuristics (Jiang et\u00a0al., 2022b). On the multi-step task, Thor+Magnushammer obtains competitive results, significantly outperforming Thor+Sledgehammer."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"A5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Relation between the training data and the proof rate discussed in Section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#S5.SS3\" title=\"5.3 Impact of training data \u2023 5 Experiments \u2023 Magnushammer: A Transformer-Based Approach to Premise Selection\"><span class=\"ltx_text ltx_ref_tag\">5.3</span></a> and Figure <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#S5.F2.sf1\" title=\"2(a) \u2023 Figure 3 \u2023 Dataset type \u2023 5.3 Impact of training data \u2023 5 Experiments \u2023 Magnushammer: A Transformer-Based Approach to Premise Selection\"><span class=\"ltx_text ltx_ref_tag\">2(a)</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A5.T4.24\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A5.T4.24.25.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T4.24.25.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T4.24.25.1.2\">Fraction</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T4.24.25.1.3\">Pre-trained</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T4.24.25.1.4\">Proof rate\u00a0(%)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A5.T4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.2.2.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.2.2.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.4.4.3\">HPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.4.4.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.6.6.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.6.6.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.8.8.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.8.8.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.10.10.3\">HPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.10.10.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.12.12.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.12.12.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.12.12.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.14.14\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"A5.T4.14.14.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.14.14.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.14.14.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.16.16.3\">HPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.16.16.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.16.16.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.18.18\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.18.18.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.18.18.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.18.18.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.20.20.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.20.20.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T4.20.20.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.22.22\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.22.22.3\">HPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.22.22.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T4.22.22.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T4.24.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T4.24.24.3\">MAPL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T4.23.23.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T4.24.24.4\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T4.24.24.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 4: Relation between the training data and the proof rate discussed in Section 5.3 and Figure 2(a)."
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"A5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Proof rate on PISA for different models discussed in Section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#S5.SS4\" title=\"5.4 Ablations \u2023 5 Experiments \u2023 Magnushammer: A Transformer-Based Approach to Premise Selection\"><span class=\"ltx_text ltx_ref_tag\">5.4</span></a> and Figure <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#S5.F2.sf2\" title=\"2(b) \u2023 Figure 3 \u2023 Dataset type \u2023 5.3 Impact of training data \u2023 5 Experiments \u2023 Magnushammer: A Transformer-Based Approach to Premise Selection\"><span class=\"ltx_text ltx_ref_tag\">2(b)</span></a>. We vary the number of layers and the embedding dimension of the Transformer model.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A5.T5.38\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A5.T5.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T5.5.1.1\">Transformer \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T5.5.1.2\">#Parameters</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T5.5.1.3\">Pre-trained</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T5.5.1.4\">Proof rate\u00a0(%)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A5.T5.8.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T5.6.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T5.7.3.2\">\nK</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T5.8.4.4\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T5.8.4.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.11.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.9.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.10.6.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.11.7.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.11.7.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.14.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.12.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.13.9.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.14.10.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.14.10.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.17.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.15.11.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.16.12.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.17.13.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.17.13.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.20.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.18.14.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.19.15.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.20.16.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.20.16.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.23.19\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.21.17.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.22.18.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.23.19.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.23.19.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.26.22\">\n<td class=\"ltx_td 
ltx_align_center\" id=\"A5.T5.24.20.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.25.21.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.26.22.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.26.22.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.29.25\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.27.23.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.28.24.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.29.25.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.29.25.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.32.28\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.30.26.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.31.27.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.32.28.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.32.28.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.35.31\">\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.33.29.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.34.30.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.35.31.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T5.35.31.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T5.38.34\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T5.36.32.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T5.37.33.2\">\nM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T5.38.34.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T5.38.34.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 5: Proof rate on PISA for different models discussed in Section 5.4 and Figure 2(b). We vary the number of layers and the embedding dimension of the Transformer model."
},
"6": {
"table_html": "<figure class=\"ltx_table\" id=\"A5.T6\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Effect of the number of premises used for generating tactic steps on the proof rate. We fix a set of tactics and accumulate problems solved as we increase the number of premises used to generate steps in Algorithm <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.04488v3#alg3\" title=\"Algorithm 3 \u2023 D.1 Computational budget \u2023 Appendix D Magnushammer evaluation \u2023 Magnushammer: A Transformer-Based Approach to Premise Selection\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>. Namely, for each , we count the number of problems solved using at most premises.\nThe \u201cTactic\u201d column indicates whether the model was given a tactic prompt.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A5.T6.14\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A5.T6.14.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A5.T6.14.10.11\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.14.10.12\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Tactic</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.14.10.13\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.5.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.6.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.7.3.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.8.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.9.5.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.10.6.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.11.7.7\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.12.8.8\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.13.9.9\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T6.14.10.10\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A5.T6.14.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A5.T6.14.11.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">BM25</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">N/A</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"A5.T6.14.11.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">9.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">13.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">15.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.7\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">16.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.8\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">18.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.9\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">20.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.10\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">23.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.11\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">25.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.12\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">28.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T6.14.11.1.13\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">30.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T6.14.12.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A5.T6.14.12.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MH-86M</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">HPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">9.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">19.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">22.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.7\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">28.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.8\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">34.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.9\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">39.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.10\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">44.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.11\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">47.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.12\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">51.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.12.2.13\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">52.95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T6.14.13.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A5.T6.14.13.3.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MH-86M</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">HPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">9.63</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"A5.T6.14.13.3.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">20.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">25.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.7\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">31.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.8\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">36.15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.9\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">40.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.10\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">44.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.11\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">48.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.12\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">51.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.13.3.13\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">54.22</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T6.14.14.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A5.T6.14.14.4.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MH-86M</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MAPL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">9.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">18.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">22.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.7\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">27.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.8\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">35.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.9\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">39.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.10\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">44.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.11\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">49.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.12\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">52.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T6.14.14.4.13\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">55.60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T6.14.15.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A5.T6.14.15.5.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MH-86M</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MAPL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">9.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">19.94</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb\" id=\"A5.T6.14.15.5.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">25.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.7\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">33.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.8\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">39.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.9\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">43.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.10\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">47.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.11\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">52.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.12\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">54.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T6.14.15.5.13\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">56.19</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 6: Effect of the number of premises used for generating tactic steps on the proof rate. We fix a set of tactics and accumulate problems solved as we increase the number of premises used to generate steps in Algorithm 3. Namely, for each , we count the number of problems solved using at most premises.\nThe \u201cTactic\u201d column indicates whether the model was given a tactic prompt."
}
},
"image_paths": {
"1": {
"figure_path": "2303.04488v3_figure_1.png",
"caption": "Figure 1: \nProof success rate for varying computational budget for Magnushammer, Sledgehammer, and BM25.\nMagnushammer shows remarkable scalability. See Sections 5.1 for the definition of computational budget and Section 5.2.1 for configurations depicted in this figure.",
"url": "http://arxiv.org/html/2303.04488v3/x1.png"
},
"2(a)": {
"figure_path": "2303.04488v3_figure_2(a).png",
"caption": "(a) A call to Sledgehammer triggers the following sequence of steps: First, available facts are filtered based on their similarity to the conjecture. Then, the conjecture together with the selected facts (usually a few hundred in number) are translated to simpler logic used by the external provers (E, SPASS, etc.). Then, such problems are fed into each ATP separately.\nFinally, the premises used in the successful ATP proofs are used\nto reconstruct a proof inside Isabelle using its native methods.\nFigure 2: \nOverview of Sledgehammer (a) and Magnushammer (b).",
"url": "http://arxiv.org/html/2303.04488v3/x2.png"
},
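The four-stage pipeline described in the caption above translates naturally into code. The following is a minimal Python sketch of that control flow only; filter_by_similarity, translate_to_fol, and run_atp are hypothetical toy stand-ins, not Isabelle's actual interface.

```python
# Schematic of the Sledgehammer pipeline from Figure 2(a). Every helper
# below is a trivial, hypothetical stand-in kept only to show the control
# flow: relevance filtering, translation, ATP calls, reconstruction.
def filter_by_similarity(conjecture, facts, n=300):
    # Keep the facts sharing the most symbols with the conjecture
    # (a crude proxy for Sledgehammer's relevance filter).
    syms = set(conjecture.split())
    return sorted(facts, key=lambda f: -len(syms & set(f.split())))[:n]

def translate_to_fol(conjecture, facts):
    # Stand-in for the encoding into the external provers' simpler logic.
    return {"goal": conjecture, "axioms": facts}

def run_atp(name, problem):
    # Stand-in ATP call: pretend it succeeds and reports used premises.
    return {"prover": name, "used_premises": problem["axioms"][:2]}

def sledgehammer(conjecture, facts, provers=("e", "spass", "vampire")):
    relevant = filter_by_similarity(conjecture, facts)
    problem = translate_to_fol(conjecture, relevant)
    for atp in provers:
        result = run_atp(atp, problem)
        if result is not None:
            # Reconstruct inside Isabelle using only the premises the
            # external prover actually used, e.g. via metis.
            return "by (metis %s)" % " ".join(result["used_premises"])
    return None

print(sledgehammer("sqrt 2 notin Rats", ["Rats_cases'", "less_irrefl"]))
```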
"2(b)": {
"figure_path": "2303.04488v3_figure_2(b).png",
"caption": "(b) Given a proof state, we first retrieve the most relevant premises according to the cosine similarity of their embeddings with the proof state embedding (Select). We then re-rank these with a model that encodes each proof state and premise pair, outputting a relevance score (Rerank). The bulk of the architecture is a shared transformer model, in orange.\nFigure 2: \nOverview of Sledgehammer (a) and Magnushammer (b).",
"url": "http://arxiv.org/html/2303.04488v3/x3.png"
},
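The Select/Rerank procedure in the caption above can be sketched compactly. Below is a minimal, hypothetical Python illustration: embed is a random stand-in for the shared transformer encoder, and a dot product stands in for the Rerank model's joint (proof state, premise) score.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(texts, dim=64):
    # Random stand-in for the shared transformer encoder that maps a
    # proof state or a premise to a fixed-size vector (hypothetical).
    return rng.normal(size=(len(texts), dim))

def select(state_vec, premise_vecs, k=3):
    # Select stage: rank premises by cosine similarity to the proof
    # state embedding and keep the top k.
    s = state_vec / np.linalg.norm(state_vec)
    p = premise_vecs / np.linalg.norm(premise_vecs, axis=1, keepdims=True)
    return list(np.argsort(-(p @ s))[:k])

premises = ["not0_implies_Suc", "Rats_cases'", "less_irrefl", "add_comm"]
P = embed(premises)
S = embed(["0 < m ==> EX k. m = Suc k"])

candidates = select(S[0], P)
# Rerank stage: Magnushammer jointly encodes each (proof state, premise)
# pair so state tokens attend to premise tokens; a dot product is used
# here purely as a placeholder for that contextual score.
ranked = sorted(candidates, key=lambda i: -float(P[i] @ S[0]))
print([premises[i] for i in ranked])
```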
"3(a)": {
"figure_path": "2303.04488v3_figure_3(a).png",
"caption": "(a) We randomly sample fractions of MAPL or HPL datasets and use them for training Magnushammer. Even 0.1%percent0.10.1\\%0.1 % of the MAPL dataset allows pre-trained Magnushammer to outperform the Sledgehammer and BM25 baselines. See Table 4 for numerical data.\nFigure 3: \nImpacts of the training data quantity and the model parameters on the proof rate. The vertical axis is the proof rate in percentage. In Subfigure 2(a), the horizontal axis is the fraction of training dataset used and in Subfigure 2(b) it is the number of parameters in the model.",
"url": "http://arxiv.org/html/2303.04488v3/x4.png"
},
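BM25 (Robertson and Zaragoza, 2009) is the sparse-retrieval baseline used throughout these comparisons. For reference, here is a compact BM25 scorer; the whitespace tokenization and toy premise corpus are illustrative assumptions only.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Tokenize by whitespace (illustrative assumption).
    toks = [d.split() for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    df = Counter(w for t in toks for w in set(t))  # document frequencies
    n_docs = len(docs)
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in set(query.split()):
            if w not in tf:
                continue
            idf = math.log(1 + (n_docs - df[w] + 0.5) / (df[w] + 0.5))
            norm = tf[w] + k1 * (1 - b + b * len(t) / avgdl)
            s += idf * tf[w] * (k1 + 1) / norm
        scores.append(s)
    return scores

premises = ["Suc m = m + 1", "sqrt 2 notin Rats", "m < Suc m"]
print(bm25_scores("Suc m", premises))  # shared tokens raise the score
```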
"3(b)": {
"figure_path": "2303.04488v3_figure_3(b).png",
"caption": "(b) We train Magnushammer of different sizes. Even with a one-layer transformer, Magnushammer outperforms Sledgehammer.\nWe observe consistent performance gains with increasing model sizes. Pre-trained models perform better. See Table 5 for numerical data.\nFigure 3: \nImpacts of the training data quantity and the model parameters on the proof rate. The vertical axis is the proof rate in percentage. In Subfigure 2(a), the horizontal axis is the fraction of training dataset used and in Subfigure 2(b) it is the number of parameters in the model.",
"url": "http://arxiv.org/html/2303.04488v3/x5.png"
},
"4": {
"figure_path": "2303.04488v3_figure_4.png",
"caption": "Figure A.1: An example theorem in Isabelle. The statement is highlighted in the orange frame and the body of the proof is in the green frame. In this proof, most of the lines contain two consecutive steps: the first formulates a new proposition, and the second proves it. See a detailed analysis of the line 8 of the proof in Figure A.2 below.",
"url": "http://arxiv.org/html/2303.04488v3/extracted/5478554/images/icml_1.png"
},
"5": {
"figure_path": "2303.04488v3_figure_5.png",
"caption": "Figure A.2: The line is broken down into two steps: the first one (green frame) includes the proposition (since m\ud835\udc5amitalic_m is natural and positive, it must have a predecessor k\ud835\udc58kitalic_k) and the second (blue frame) proves it using the tactic metis with premise not0_implies_Suc, that states that a nonnegative natural number is a successor of some other natural number. The used premise is a fact which is already defined in the lemma library. The proof state resulting from the first step is in the yellow frame. The full premise statement is highlighted in pink.",
"url": "http://arxiv.org/html/2303.04488v3/extracted/5478554/images/state_icml_450.png"
},
"6": {
"figure_path": "2303.04488v3_figure_6.png",
"caption": "Figure A.3: Example intermediate propositions highlighted in red. Note: not all propositions were highlighted.",
"url": "http://arxiv.org/html/2303.04488v3/extracted/5478554/images/icml_sh_3.png"
},
"7": {
"figure_path": "2303.04488v3_figure_7.png",
"caption": "Figure A.4: A proof of 2\u2209\u211a2\u211a\\sqrt{2}\\notin\\mathbb{Q}square-root start_ARG 2 end_ARG \u2209 blackboard_Q [Jiang et al., 2022a, Figure 1]. The steps containing metis, smt, fastforce, blast, auto, fastforce are examples of steps using premises. For instance, one such proof step is by (metis Rats_cases\u2019 less_irrefl). This step invokes metis and provides two premises as arguments, namely Rats_cases\u2019 and less_irrefl.",
"url": "http://arxiv.org/html/2303.04488v3/extracted/5478554/images/normal_proof.png"
},
"8": {
"figure_path": "2303.04488v3_figure_8.png",
"caption": "Figure A.5: We calculate accumulated proof rate in the following way: try 1 premise, count problems solved, then try 2 premises, count problems solved using 1 or 2 premises, then try 4 premises, count problems solved using 1, 2, or 4 premises etc. Following this, on the x-axis we have the number of premises used to generate steps in Algorithm 3. The y-axis presents the accumulative proof rate as we try more and more premises. The higher the proof rate for the smaller number of premises used the better. We observe that prompting the model with the tactic is not necessary to achieve the final high proof. However, it allows the model for a more accurate premise selection \u2013 all premises necessary to close the proof are ranked higher.",
"url": "http://arxiv.org/html/2303.04488v3/x6.png"
}
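The accumulation procedure described in the Figure A.5 caption (and used for Table 6) reduces to a small computation. A sketch, assuming we record for each problem the minimum number of premises with which any attempt succeeded:

```python
# Sketch of the accumulated proof rate from Figure A.5 / Table 6: a
# problem counts as solved at budget n if some attempt using at most n
# premises closed it. The data below is made up for illustration.
def accumulated_proof_rate(min_premises, budgets, total):
    # min_premises: problem -> smallest premise count that solved it,
    # or None if the problem was never solved.
    rates = []
    for n in budgets:
        solved = sum(1 for k in min_premises.values()
                     if k is not None and k <= n)
        rates.append(100.0 * solved / total)
    return rates

min_premises = {"p1": 1, "p2": 4, "p3": None, "p4": 2}  # hypothetical
print(accumulated_proof_rate(min_premises, [1, 2, 4, 8], total=4))
# -> [25.0, 50.0, 75.0, 75.0]
```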
},
"validation": true,
"references": [
{
"1": {
"title": "Premise selection for mathematics by corpus analysis and kernel\nmethods.",
"author": "Jesse Alama, Daniel K\u00fchlwein, Evgeni Tsivtsivadze, Josef Urban, and Tom\nHeskes.",
"venue": "CoRR, abs/1108.3446, 2011.",
"url": null
}
},
{
"2": {
"title": "Premise selection for mathematics by corpus analysis and kernel\nmethods.",
"author": "Jesse Alama, Tom Heskes, Daniel K\u00fchlwein, Evgeni Tsivtsivadze, and Josef\nUrban.",
"venue": "J. Autom. Reason., 52(2):191\u2013213, 2014.",
"url": null
}
},
{
"3": {
"title": "DeepMath \u2013 deep sequence models for premise selection.",
"author": "Alexander A. Alemi, Fran\u00e7ois Chollet, Geoffrey Irving, Christian Szegedy,\nand Josef Urban.",
"venue": "CoRR, abs/1606.04442, 2016.",
"url": null
}
},
{
"4": {
"title": "ProofNet: A benchmark for autoformalizing and formally proving\nundergraduate-level mathematics problems.",
"author": "Zhangir Azerbayev, Bartosz Piotrowski, and Jeremy Avigad.",
"venue": "In Advances in Neural Information Processing Systems 35, 2nd\nMATH-AI Workshop at NeurIPS\u201922, 2022.",
"url": null
}
},
{
"5": {
"title": "HOList: An environment for machine learning of higher order logic\ntheorem proving.",
"author": "Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart\nWilcox.",
"venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors,\nProceedings of the 36th International Conference on Machine Learning,\nICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of\nProceedings of Machine Learning Research, pages 454\u2013463. PMLR,\n2019.",
"url": null
}
},
{
"6": {
"title": "cvc5: A versatile and industrial-strength SMT solver.",
"author": "Haniel Barbosa, Clark W. Barrett, Martin Brain, Gereon Kremer, Hanna Lachnitt,\nMakai Mann, Abdalrhman Mohamed, Mudathir Mohamed, Aina Niemetz, Andres\nN\u00f6tzli, Alex Ozdemir, Mathias Preiner, Andrew Reynolds, Ying Sheng,\nCesare Tinelli, and Yoni Zohar.",
"venue": "In Dana Fisman and Grigore Rosu, editors, Tools and Algorithms\nfor the Construction and Analysis of Systems \u2013 28th International\nConference, TACAS 2022, Held as Part of the European Joint Conferences on\nTheory and Practice of Software, ETAPS 2022, Munich, Germany, April 2-7,\n2022, Proceedings, Part I, volume 13243 of Lecture Notes in Computer\nScience, pages 415\u2013442. Springer, 2022.",
"url": null
}
},
{
"7": {
"title": "A short presentation of coq.",
"author": "Yves Bertot.",
"venue": "In Otmane A\u00eft Mohamed, C\u00e9sar A. Mu\u00f1oz, and\nSofi\u00e8ne Tahar, editors, Theorem Proving in Higher Order Logics,\n21st International Conference, TPHOLs 2008, Montreal, Canada, August 18-21,\n2008. Proceedings, volume 5170 of Lecture Notes in Computer Science,\npages 12\u201316. Springer, 2008.",
"url": null
}
},
{
"8": {
"title": "Extending Sledgehammer with SMT solvers.",
"author": "Jasmin Christian Blanchette, Sascha B\u00f6hme, and Lawrence C. Paulson.",
"venue": "J. Autom. Reason., 51(1):109\u2013128, 2013.",
"url": null
}
},
{
"9": {
"title": "A learning-based fact selector for Isabelle/HOL.",
"author": "Jasmin Christian Blanchette, David Greenaway, Cezary Kaliszyk, Daniel\nK\u00fchlwein, and Josef Urban.",
"venue": "J. Autom. Reason., 57(3):219\u2013244, 2016.",
"url": null
}
},
{
"10": {
"title": "Sledgehammer: Judgement day.",
"author": "Sascha B\u00f6hme and Tobias Nipkow.",
"venue": "In J\u00fcrgen Giesl and Reiner H\u00e4hnle, editors, Automated\nReasoning, pages 107\u2013121, Berlin, Heidelberg, 2010. Springer Berlin\nHeidelberg.",
"url": null
}
},
{
"11": {
"title": "Improving language models by retrieving from trillions of tokens.",
"author": "Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza\nRutherford, Katie Millican, George van den Driessche, Jean-Baptiste\nLespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob\nMenick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones,\nAlbin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals,\nSimon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre.",
"venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba\nSzepesv\u00e1ri, Gang Niu, and Sivan Sabato, editors, International\nConference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore,\nMaryland, USA, volume 162 of Proceedings of Machine Learning\nResearch, pages 2206\u20132240. PMLR, 2022.",
"url": null
}
},
{
"12": {
"title": "Language models are few-shot learners.",
"author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan,\nPrafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda\nAskell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom\nHenighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens\nWinter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott\nGray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec\nRadford, Ilya Sutskever, and Dario Amodei.",
"venue": "CoRR, abs/2005.14165, 2020.",
"url": null
}
},
{
"13": {
"title": "A simple framework for contrastive learning of visual\nrepresentations.",
"author": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton.",
"venue": "In Proceedings of the 37th International Conference on Machine\nLearning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of\nProceedings of Machine Learning Research, pages 1597\u20131607. PMLR,\n2020.",
"url": null
}
},
{
"14": {
"title": "Cross-lingual language model pretraining.",
"author": "Alexis Conneau and Guillaume Lample.",
"venue": "In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural\nInformation Processing Systems, volume 32. Curran Associates, Inc., 2019.",
"url": null
}
},
{
"15": {
"title": "Hammer for Coq: Automation for dependent type theory.",
"author": "Lukasz Czajka and Cezary Kaliszyk.",
"venue": "J. Autom. Reason., 61(1-4):423\u2013453, 2018.",
"url": null
}
},
{
"16": {
"title": "The mathematical language AUTOMATH, its usage, and some of its\nextensions.",
"author": "Nicolaas Govert De Bruijn.",
"venue": "In Symposium on automatic demonstration, pages 29\u201361.\nSpringer, 1970.",
"url": null
}
},
{
"17": {
"title": "Z3: An efficient SMT solver.",
"author": "Leonardo de Moura and Nikolaj Bj\u00f8rner.",
"venue": "In C. R. Ramakrishnan and Jakob Rehof, editors, Tools and\nAlgorithms for the Construction and Analysis of Systems, pages 337\u2013340,\nBerlin, Heidelberg, 2008. Springer Berlin Heidelberg.",
"url": null
}
},
{
"18": {
"title": "The Lean theorem prover (system description).",
"author": "Leonardo Mendon\u00e7a de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn,\nand Jakob von Raumer.",
"venue": "In Amy P. Felty and Aart Middeldorp, editors, Automated\nDeduction \u2013 CADE-25 \u2013 25th International Conference on Automated\nDeduction, Berlin, Germany, August 1-7, 2015, Proceedings, volume 9195 of\nLecture Notes in Computer Science, pages 378\u2013388. Springer, 2015.",
"url": null
}
},
{
"19": {
"title": "Integration of general-purpose automated theorem provers in Lean,\n2020.",
"author": "Gabriel Ebner.",
"venue": "https://www.andrew.cmu.edu/user/avigad/meetings/fomm2020/slides/fomm_ebner.pdf.",
"url": null
}
},
{
"20": {
"title": "The Pile: An 800GB dataset of diverse text for language modeling.",
"author": "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles\nFoster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser,\nand Connor Leahy.",
"venue": "CoRR, abs/2101.00027, 2021.",
"url": null
}
},
{
"21": {
"title": "Premise selection and external provers for HOL4.",
"author": "Thibault Gauthier and Cezary Kaliszyk.",
"venue": "In Xavier Leroy and Alwen Tiu, editors, Proceedings of the 2015\nConference on Certified Programs and Proofs, CPP 2015, Mumbai, India,\nJanuary 15-17, 2015, pages 49\u201357. ACM, 2015.",
"url": null
}
},
{
"22": {
"title": "The Isabelle ENIGMA.",
"author": "Zarathustra Amadeus Goertzel, Jan Jakubuv, Cezary Kaliszyk, Miroslav\nOls\u00e1k, Jelle Piepenbrock, and Josef Urban.",
"venue": "In June Andronick and Leonardo de Moura, editors, 13th\nInternational Conference on Interactive Theorem Proving, ITP 2022, August\n7-10, 2022, Haifa, Israel, volume 237 of LIPIcs, pages 16:1\u201316:21.\nSchloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik, 2022.",
"url": null
}
},
{
"23": {
"title": "Mizar in a nutshell.",
"author": "Adam Grabowski, Artur Kornilowicz, and Adam Naumowicz.",
"venue": "J. Formaliz. Reason., 3(2):153\u2013245, 2010.",
"url": null
}
},
{
"24": {
"title": "Contrastive finetuning of generative language models for informal\npremise selection.",
"author": "Jesse Michael Han, Tao Xu, Stanislas Polu, Arvind Neelakantan, and Alec\nRadford.",
"venue": "6th Conference on Artificial Intelligence and Theorem Proving,\n2021.",
"url": null
}
},
{
"25": {
"title": "Proof artifact co-training for theorem proving with language models.",
"author": "Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas Polu.",
"venue": "In The Tenth International Conference on Learning\nRepresentations, ICLR 2022, Virtual Event, April 25-29, 2022.\nOpenReview.net, 2022.",
"url": null
}
},
{
"26": {
"title": "HOL light: A tutorial introduction.",
"author": "John Harrison.",
"venue": "In Mandayam K. Srivas and Albert John Camilleri, editors,\nFormal Methods in Computer-Aided Design, First International\nConference, FMCAD \u201996, Palo Alto, California, USA, November 6-8, 1996,\nProceedings, volume 1166 of Lecture Notes in Computer Science, pages\n265\u2013269. Springer, 1996.",
"url": null
}
},
{
"27": {
"title": "Training compute-optimal large language models.",
"author": "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor\nCai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes\nWelbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den\nDriessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich\nElsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre.",
"venue": "CoRR, abs/2203.15556, 2022.",
"url": null
}
},
{
"28": {
"title": "Universal language model fine-tuning for text classification.",
"author": "Jeremy Howard and Sebastian Ruder.",
"venue": "In Iryna Gurevych and Yusuke Miyao, editors, Proceedings of the\n56th Annual Meeting of the Association for Computational Linguistics, ACL\n2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages\n328\u2013339. Association for Computational Linguistics, 2018.",
"url": null
}
},
{
"29": {
"title": "Towards unsupervised dense information retrieval with contrastive\nlearning.",
"author": "Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr\nBojanowski, Armand Joulin, and Edouard Grave.",
"venue": "CoRR, abs/2112.09118, 2021.",
"url": null
}
},
{
"30": {
"title": "LISA: Language models of ISAbelle proofs.",
"author": "Albert Q. Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu.",
"venue": "6th Conference on Artificial Intelligence and Theorem Proving,\n2021.",
"url": null
}
},
{
"31": {
"title": "Thor: Wielding hammers to integrate language models and automated\ntheorem provers.",
"author": "Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz\nOdrzyg\u00f3\u017ad\u017a, Piotr Mi\u0142o\u015b, Yuhuai Wu, and Mateja Jamnik.",
"venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,\neditors, Advances in Neural Information Processing Systems,\n2022a.",
"url": null
}
},
{
"32": {
"title": "Draft, sketch, and prove: Guiding formal theorem provers with\ninformal proofs.",
"author": "Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja\nJamnik, Timoth\u00e9e Lacroix, Yuhuai Wu, and Guillaume Lample.",
"venue": "CoRR, abs/2210.12283, 2022b.",
"url": null
}
},
{
"33": {
"title": "HOL(y)Hammer: Online ATP service for HOL Light.",
"author": "Cezary Kaliszyk and Josef Urban.",
"venue": "Math. Comput. Sci., 9(1):5\u201322,\n2015a.",
"url": null
}
},
{
"34": {
"title": "MizAR 40 for Mizar 40.",
"author": "Cezary Kaliszyk and Josef Urban.",
"venue": "Journal of Automated Reasoning, 55(3):245\u2013256, 2015b.",
"url": null
}
},
{
"35": {
"title": "HolStep: A machine learning dataset for higher-order logic\ntheorem proving.",
"author": "Cezary Kaliszyk, Fran\u00e7ois Chollet, and Christian Szegedy.",
"venue": "CoRR, abs/1703.00426, 2017.",
"url": null
}
},
{
"36": {
"title": "Scaling laws for neural language models.",
"author": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon\nChild, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei.",
"venue": "CoRR, abs/2001.08361, 2020.",
"url": null
}
},
{
"37": {
"title": "First-order theorem proving and vampire.",
"author": "Laura Kov\u00e1cs and Andrei Voronkov.",
"venue": "In Natasha Sharygina and Helmut Veith, editors, Computer Aided\nVerification \u2013 25th International Conference, CAV 2013, Saint Petersburg,\nRussia, July 13-19, 2013. Proceedings, volume 8044 of Lecture Notes in\nComputer Science, pages 1\u201335. Springer, 2013.",
"url": null
}
},
{
"38": {
"title": "ImageNet classification with deep convolutional neural networks.",
"author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton.",
"venue": "In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C.\nBurges, L\u00e9on Bottou, and Kilian Q. Weinberger, editors, Advances\nin Neural Information Processing Systems 25: 26th Annual Conference on Neural\nInformation Processing Systems 2012. Proceedings of a meeting held December\n3-6, 2012, Lake Tahoe, Nevada, United States, pages 1106\u20131114, 2012.",
"url": null
}
},
{
"39": {
"title": "Overview and evaluation of premise selection techniques for large\ntheory mathematics.",
"author": "Daniel K\u00fchlwein, Twan van Laarhoven, Evgeni Tsivtsivadze, Josef Urban,\nand Tom Heskes.",
"venue": "In Bernhard Gramlich, Dale Miller, and Uli Sattler, editors,\nAutomated Reasoning \u2013 6th International Joint Conference, IJCAR\n2012, Manchester, UK, June 26-29, 2012. Proceedings, volume 7364 of\nLecture Notes in Computer Science, pages 378\u2013392. Springer, 2012.",
"url": null
}
},
{
"40": {
"title": "MaSh: Machine learning for Sledgehammer.",
"author": "Daniel K\u00fchlwein, Jasmin Christian Blanchette, Cezary Kaliszyk, and Josef\nUrban.",
"venue": "In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie,\neditors, Interactive Theorem Proving \u2013 4th International Conference,\nITP 2013, Rennes, France, July 22-26, 2013. Proceedings, volume 7998 of\nLecture Notes in Computer Science, pages 35\u201350. Springer, 2013.",
"url": null
}
},
{
"41": {
"title": "HyperTree proof search for neural theorem proving.",
"author": "Guillaume Lample, Timothee Lacroix, Marie Anne Lachaux, Aurelien Rodriguez,\nAmaury Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet.",
"venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,\neditors, Advances in Neural Information Processing Systems, 2022.",
"url": null
}
},
{
"42": {
"title": "Solving quantitative reasoning problems with language models.",
"author": "Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk\nMichalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol\nSchlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and\nVedant Misra.",
"venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,\neditors, Advances in Neural Information Processing Systems, 2022.",
"url": null
}
},
{
"43": {
"title": "IsarStep: A benchmark for high-level mathematical reasoning.",
"author": "Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson.",
"venue": "In International Conference on Learning Representations, 2021.",
"url": null
}
},
{
"44": {
"title": "Lightweight relevance filtering for machine-generated resolution\nproblems.",
"author": "Jia Meng and Lawrence C. Paulson.",
"venue": "J. Appl. Log., 7(1):41\u201357, 2009.",
"url": null
}
},
{
"45": {
"title": "PaMpeR: Proof Method Recommendation system for\nIsabelle/HOL, 2018.",
"author": "Yutaka Nagashima and Yilun He.",
"venue": "URL https://arxiv.org/abs/1806.07239.",
"url": null
}
},
{
"46": {
"title": "Text and code embeddings by contrastive pre-training, 2022.",
"author": "Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry\nTworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes\nHeidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry,\nGretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine\nThompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and\nLilian Weng.",
"venue": null,
"url": null
}
},
{
"47": {
"title": "Fun with functions.",
"author": "Tobias Nipkow.",
"venue": "Archive of Formal Proofs, August 2008.",
"url": null
}
},
{
"48": {
"title": "Passage re-ranking with BERT.",
"author": "Rodrigo Frassetto Nogueira and Kyunghyun Cho.",
"venue": "CoRR, abs/1901.04085, 2019.",
"url": null
}
},
{
"49": {
"title": "Graph representations for higher-order logic and theorem proving.",
"author": "Aditya Paliwal, Sarah M. Loos, Markus N. Rabe, Kshitij Bansal, and Christian\nSzegedy.",
"venue": "In The Thirty-Fourth AAAI Conference on Artificial\nIntelligence, AAAI 2020, The Thirty-Second Innovative Applications of\nArtificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium\non Educational Advances in Artificial Intelligence, EAAI 2020, New York,\nNY, USA, February 7-12, 2020, pages 2967\u20132974. AAAI Press, 2020.",
"url": null
}
},
{
"50": {
"title": "Isabelle: The next 700 theorem provers.",
"author": "Lawrence C. Paulson.",
"venue": "CoRR, cs.LO/9301106, 1993.",
"url": null
}
},
{
"51": {
"title": "Three years of experience with Sledgehammer, a practical link\nbetween automatic and interactive theorem provers.",
"author": "Lawrence Charles Paulson and Jasmin Christian Blanchette.",
"venue": "In IWIL@LPAR, 2012.",
"url": null
}
},
{
"52": {
"title": "ATPboost: Learning premise selection in binary setting with ATP\nfeedback.",
"author": "Bartosz Piotrowski and Josef Urban.",
"venue": "In Didier Galmiche, Stephan Schulz, and Roberto Sebastiani, editors,\nAutomated Reasoning \u2013 9th International Joint Conference, IJCAR\n2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK,\nJuly 14-17, 2018, Proceedings, volume 10900 of Lecture Notes in\nComputer Science, pages 566\u2013574. Springer, 2018.",
"url": null
}
},
{
"53": {
"title": "Stateful premise selection by recurrent neural networks.",
"author": "Bartosz Piotrowski and Josef Urban.",
"venue": "In Elvira Albert and Laura Kov\u00e1cs, editors, LPAR 2020:\n23rd International Conference on Logic for Programming, Artificial\nIntelligence and Reasoning, Alicante, Spain, May 22-27, 2020, volume 73 of\nEPiC Series in Computing, pages 409\u2013422. EasyChair, 2020.",
"url": null
}
},
{
"54": {
"title": "Machine-learned premise selection for Lean.",
"author": "Bartosz Piotrowski, Ramon Fern\u00e1ndez Mir, and Edward W. Ayers.",
"venue": "In Revantha Ramanayake and Josef Urban, editors, Automated\nReasoning with Analytic Tableaux and Related Methods - 32nd International\nConference, TABLEAUX 2023, Prague, Czech Republic, September 18-21, 2023,\nProceedings, volume 14278 of Lecture Notes in Computer Science, pages\n175\u2013186. Springer, 2023.",
"url": null
}
},
{
"55": {
"title": "Generative language modeling for automated theorem proving.",
"author": "Stanislas Polu and Ilya Sutskever.",
"venue": "CoRR, abs/2009.03393, 2020.",
"url": null
}
},
{
"56": {
"title": "Formal mathematics statement curriculum learning.",
"author": "Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor\nBabuschkin, and Ilya Sutskever.",
"venue": "In The Eleventh International Conference on Learning\nRepresentations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net,\n2023.",
"url": null
}
},
{
"57": {
"title": "Language models are unsupervised multitask learners.",
"author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya\nSutskever, et al.",
"venue": "OpenAI blog, 1(8):9, 2019.",
"url": null
}
},
{
"58": {
"title": "Learning transferable visual models from natural language\nsupervision.",
"author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,\nSandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,\nGretchen Krueger, and Ilya Sutskever.",
"venue": "CoRR, abs/2103.00020, 2021.",
"url": null
}
},
{
"59": {
"title": "The probabilistic relevance framework: BM25 and beyond.",
"author": "Stephen E. Robertson and Hugo Zaragoza.",
"venue": "Found. Trends Inf. Retr., 3(4):333\u2013389,\n2009.",
"url": null
}
},
{
"60": {
"title": "Contrastive learning with hard negative samples.",
"author": "Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka.",
"venue": "In 9th International Conference on Learning Representations,\nICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.",
"url": null
}
},
{
"61": {
"title": "System description: E 0.81.",
"author": "Stephan Schulz.",
"venue": "In David Basin and Micha\u00ebl Rusinowitch, editors, Automated\nReasoning, pages 223\u2013228, Berlin, Heidelberg, 2004. Springer Berlin\nHeidelberg.",
"url": null
}
},
{
"62": {
"title": "Roformer: Enhanced transformer with rotary position embedding.",
"author": "Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu.",
"venue": "CoRR, abs/2104.09864, 2021.",
"url": null
}
},
{
"63": {
"title": "The TPTP Problem Library and Associated Infrastructure. From CNF to\nTH0, TPTP v6.4.0.",
"author": "G. Sutcliffe.",
"venue": "Journal of Automated Reasoning, 59(4):483\u2013502, 2017.",
"url": null
}
},
{
"64": {
"title": "Solving mathematical problems: A personal perspective.",
"author": "Terence Tao.",
"venue": "Oxford University Press, 2010.",
"url": null
}
},
{
"65": {
"title": "BEIR: A heterogenous benchmark for zero-shot evaluation of\ninformation retrieval models.",
"author": "Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Abhishek Srivastava,\nand Iryna Gurevych.",
"venue": "CoRR, abs/2104.08663, 2021.",
"url": null
}
},
{
"66": {
"title": "Formal premise selection with language models.",
"author": "Szymon Tworkowski, Maciej Miku\u0142a, Tomasz Odrzyg\u00f3\u017ad\u017a, Konrad Czechowski,\nSzymon Antoniak, Albert Jiang, Christian Szegedy, \u0141ukasz Kuci\u0144ski, Piotr\nMi\u0142o\u015b, and Yuhuai Wu.",
"venue": "AITP 2022, 2022.",
"url": null
}
},
{
"67": {
"title": "First neural conjecturing datasets and experiments.",
"author": "Josef Urban and Jan Jakubuv.",
"venue": "In Christoph Benzm\u00fcller and Bruce R. Miller, editors,\nIntelligent Computer Mathematics - 13th International Conference,\nCICM 2020, Bertinoro, Italy, July 26-31, 2020, Proceedings, volume 12236\nof Lecture Notes in Computer Science, pages 315\u2013323. Springer, 2020.",
"url": null
}
},
{
"68": {
"title": "MaLARea SG1 \u2013 machine learner for automated reasoning with\nsemantic guidance.",
"author": "Josef Urban, Geoff Sutcliffe, Petr Pudl\u00e1k, and Ji\u0159\u00ed\nVysko\u010dil.",
"venue": "In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors,\nAutomated Reasoning, pages 441\u2013456, Berlin, Heidelberg, 2008.\nSpringer Berlin Heidelberg.",
"url": null
}
},
{
"69": {
"title": "Representation learning with contrastive predictive coding.",
"author": "A\u00e4ron van den Oord, Yazhe Li, and Oriol Vinyals.",
"venue": "CoRR, abs/1807.03748, 2018.",
"url": null
}
},
{
"70": {
"title": "Attention is all you need.",
"author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin.",
"venue": "CoRR, abs/1706.03762, 2017.",
"url": null
}
},
{
"71": {
"title": "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.",
"author": "Ben Wang and Aran Komatsuzaki.",
"venue": "https://github.com/kingoflolz/mesh-transformer-jax, May 2021.",
"url": null
}
},
{
"72": {
"title": "Premise selection for theorem proving by deep graph embedding.",
"author": "Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng.",
"venue": "CoRR, abs/1709.09994, 2017.",
"url": null
}
},
{
"73": {
"title": "Combining superposition, sorts and splitting.",
"author": "Christoph Weidenbach.",
"venue": "In John Alan Robinson and Andrei Voronkov, editors, Handbook of\nAutomated Reasoning, pages 1965\u20132013. Elsevier and MIT Press, 2001.",
"url": null
}
},
{
"74": {
"title": "Autoformalization with large language models.",
"author": "Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus Norman Rabe, Charles E Staats,\nMateja Jamnik, and Christian Szegedy.",
"venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,\neditors, Advances in Neural Information Processing Systems,\n2022a.",
"url": null
}
},
{
"75": {
"title": "Memorizing transformers.",
"author": "Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy.",
"venue": "In The Tenth International Conference on Learning\nRepresentations, ICLR 2022, Virtual Event, April 25-29, 2022.\nOpenReview.net, 2022b.",
"url": null
}
},
{
"76": {
"title": "Learning to prove theorems via interacting with proof assistants.",
"author": "Kaiyu Yang and Jia Deng.",
"venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors,\nProceedings of the 36th International Conference on Machine Learning,\nICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of\nProceedings of Machine Learning Research, pages 6984\u20136994. PMLR,\n2019.",
"url": null
}
},
{
"77": {
"title": "LeanDojo: Theorem proving with retrieval-augmented language models.",
"author": "Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu,\nSaad Godil, Ryan Prenger, and Anima Anandkumar.",
"venue": "CoRR, abs/2306.15626, 2023.",
"url": null
}
},
{
"78": {
"title": "miniF2F: A cross-system benchmark for formal olympiad-level\nmathematics.",
"author": "Kunhao Zheng, Jesse Michael Han, and Stanislas Polu.",
"venue": "In The Tenth International Conference on Learning\nRepresentations, ICLR 2022, Virtual Event, April 25-29, 2022.\nOpenReview.net, 2022.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2303.04488v3"
}