---
license: mit
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
tags:
- llama-2
- astronomy
- astrophysics
- arxiv
inference: false
base_model:
- meta-llama/Llama-2-7b-hf
---

# AstroLLaMA-2-7B-Base_Abstract

AstroLLaMA-2-7B-Abstract is a specialized base language model for astronomy, developed by fine-tuning Meta's LLaMA-2-7b architecture on astronomical literature. This model was originally developed by the AstroLLaMA team as part of the UniverseTBD initiative. It is designed for next token prediction tasks and is not an instruct/chat model.

**Note**: This model is provided for completeness in the series of AstroLLaMA models. The core AstroLLaMA team has since moved on to develop more advanced models under AstroMLab. For the original UniverseTBD version, please visit [their repository](https://huggingface.co/universeTBD/astrollama).

## Model Details

- **Base Architecture**: LLaMA-2-7b
- **Training Data**: Abstracts from 326,238 astronomy papers from arXiv's astro-ph category (April 1992 to July 2023)
- **Fine-tuning Method**: Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA)
- **Primary Use**: Next token prediction for astronomy-related text generation and analysis
- **Reference**: [Nguyen et al. 2023](https://arxiv.org/abs/2309.06126)
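
Since perplexity is the metric listed for this model and its primary use is next-token prediction, a minimal scoring sketch is shown below. This is our own illustration, not part of the model's tooling: the helper names (`nll_to_ppl`, `perplexity`) are hypothetical, and running the full function downloads the ~13 GB of 7B weights.

```python
import math

def nll_to_ppl(mean_nll: float) -> float:
    # Perplexity is the exponential of the mean per-token
    # negative log-likelihood
    return math.exp(mean_nll)

def perplexity(model_id: str, text: str) -> float:
    # Heavy imports kept local so the pure-math helper above stays standalone
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # of its shifted next-token predictions
        out = model(**enc, labels=enc["input_ids"])
    return nll_to_ppl(out.loss.item())

# Example usage (requires a GPU or ample RAM):
# ppl = perplexity("AstroMLab/astrollama-2-7b-base_abstract",
#                  "We present a spectroscopic survey of ...")
```

Lower perplexity on held-out astronomy abstracts indicates a better fit to the domain; comparisons are only meaningful between models sharing the same tokenizer.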


## Generating text from a prompt

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-2-7b-base_abstract")
model = AutoModelForCausalLM.from_pretrained("AstroMLab/astrollama-2-7b-base_abstract", device_map="auto")

# Create the pipeline with explicit truncation; the model is already
# placed on devices by device_map="auto" above
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
    max_length=512
)

# Example prompt from an astronomy paper
prompt = "In this letter, we report the discovery of the highest redshift, " \
    "heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, " \
    "mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. "

# Set seed for reproducibility
torch.manual_seed(42)

# Generate text
generated_text = generator(prompt, do_sample=True)
print(generated_text[0]['generated_text'])
```


## Model Limitations and Biases

This model is specifically trained on astronomy abstracts and may not generalize well to other domains. Users should be aware of potential biases in the training data, which may reflect historical trends and biases in astronomical research publications.

Importantly, this model has been superseded by more advanced versions. Here is a performance comparison based on the astronomy benchmarking Q&A described in [Ting et al. 2024](https://arxiv.org/abs/2407.11194).

| Model | Score (%) |
|-------|-----------|
| **AstroSage-LLaMA-3.1-8B (AstroMLab)** | **80.9** |
| **AstroLLaMA-2-70B (AstroMLab)** | **76.0** |
| LLaMA-3.1-8B | 73.7 |
| Gemma-2-9B | 71.5 |
| Qwen-2.5-7B | 70.4 |
| Yi-1.5-9B | 68.4 |
| InternLM-2.5-7B | 64.5 |
| Mistral-7B-v0.3 | 63.9 |
| ChatGLM3-6B | 50.4 |
| AstroLLaMA-2-7B-AIC | 44.3 |
| <span style="color:red">AstroLLaMA-2-7B-Abstract</span> | <span style="color:red">43.5</span> |

As shown, the AstroLLaMA-2-7B series is outperformed by newer models. For state-of-the-art performance, we recommend using the latest models.


## Ethical Considerations

While this model is designed for scientific use, users should be mindful of potential misuse, such as generating misleading scientific content. Always verify model outputs against peer-reviewed sources for critical applications.


## Citation

If you use this model in your research, please cite:

```
@ARTICLE{2023arXiv230906126D,
       author = {{Dung Nguyen}, Tuan and {Ting}, Yuan-Sen and {Ciuc{\u{a}}}, Ioana and {O'Neill}, Charlie and {Sun}, Ze-Chang and {Jab{\l}o{\'n}ska}, Maja and {Kruk}, Sandor and {Perkowski}, Ernest and {Miller}, Jack and {Li}, Jason and {Peek}, Josh and {Iyer}, Kartheik and {R{\'o}{\.z}a{\'n}ski}, Tomasz and {Khetarpal}, Pranav and {Zaman}, Sharaf and {Brodrick}, David and {Rodr{\'\i}guez M{\'e}ndez}, Sergio J. and {Bui}, Thang and {Goodman}, Alyssa and {Accomazzi}, Alberto and {Naiman}, Jill and {Cranney}, Jesse and {Schawinski}, Kevin and {UniverseTBD}},
        title = "{AstroLLaMA: Towards Specialized Foundation Models in Astronomy}",
      journal = {arXiv e-prints},
     keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - High Energy Astrophysical Phenomena, Computer Science - Computation and Language, Computer Science - Machine Learning},
         year = 2023,
        month = sep,
          eid = {arXiv:2309.06126},
        pages = {arXiv:2309.06126},
          doi = {10.48550/arXiv.2309.06126},
archivePrefix = {arXiv},
       eprint = {2309.06126},
 primaryClass = {astro-ph.IM},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv230906126D},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```