---
license: apache-2.0
datasets:
- arxiv_abstracts
language:
- en
pipeline_tag: text-generation
tags:
- tiny
- pico
- scratch
- llama-2
- academic
---

# AbstractsLlama-8M

AbstractsLlama-8M is an ultra-compact, "pico-sized" language model **trained from scratch** by **Pico-Kittens**. It uses the **Llama 2 architecture** and is optimized specifically for generating scientific and academic text.

## Model Details

- **Developed by:** Pico-Kittens
- **Model type:** Llama 2-based causal language model
- **Training status:** Trained from scratch (not a fine-tune)
- **Parameters:** ~8 million
- **Language(s):** English
- **License:** apache-2.0
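
The parameter count is straightforward to sanity-check once the weights are downloaded; a minimal sketch (the exact figure depends on the released checkpoint):

```python
from transformers import AutoModelForCausalLM

# Load the checkpoint and count its parameters; ~8M is expected.
model = AutoModelForCausalLM.from_pretrained("PicoKittens/AbstractsLlama-8M")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```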

## Training Data

The model was trained on a large-scale collection of **arXiv abstracts**. The training objective was to compress the structural patterns, technical nomenclature, and "academic tone" of scientific research into a minimal parameter budget.
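
Concretely, "compressing" these patterns means ordinary next-token prediction over abstract text. Below is a minimal sketch of a single training-style forward pass, assuming the repository hosts its tokenizer (as the pipeline example further down requires); it is illustrative, not the published training pipeline:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative causal-LM loss on one abstract-like string; the real
# corpus and training loop are not published in this card.
tok = AutoTokenizer.from_pretrained("PicoKittens/AbstractsLlama-8M")
model = AutoModelForCausalLM.from_pretrained("PicoKittens/AbstractsLlama-8M")

batch = tok("We study the sample complexity of policy gradient methods.", return_tensors="pt")
# Using input_ids as labels gives the standard next-token objective;
# the one-position shift happens inside the model.
out = model(**batch, labels=batch["input_ids"])
print(float(out.loss))
```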

## Capabilities & Limitations

AbstractsLlama-8M is an experimental model. While it effectively mimics the syntax of research papers, users should be aware of the following:

* **Scientific Syntax:** Highly competent; it excels at producing the "feel" of a formal research proposal or abstract.
* **Architecture:** Implements the Llama 2 transformer block structure at a micro scale (see the sketch after this list).
* **Hallucinations:** Extremely high. The model will invent methodologies, chemical structures, and mathematical frameworks that do not exist.
* **Context:** Limited. It is best suited for short-form generation (under 128 tokens).
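
The "micro scale" claim is easy to check concretely, since the checkpoint's configuration lists the Llama-style dimensions. A minimal sketch; the exact values are whatever the released config contains and are not documented in this card:

```python
from transformers import AutoConfig

# Print the Llama-style dimensions of the released checkpoint.
config = AutoConfig.from_pretrained("PicoKittens/AbstractsLlama-8M")
print(config.model_type)  # expected: "llama"
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
```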

---

## Generation Sample

**User:** *We propose*

**AbstractsLlama-8M:**
> We propose a unified framework for modeling large-scale non-linearity of Cancer (NCI) problems with a variable-scale dataset for the linearized dynamics of polynomial conjugal structure. Our key idea of a multi-objective-centile-based model with a fixed, non-preferred variational autoencoder (NMAE) for feature extraction, which includes ax-aware, non-convex optimization formulation for both a single

---

## How to Get Started

```python
import torch
from transformers import pipeline

# Run on the first GPU if one is available, otherwise on CPU
device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("text-generation", model="PicoKittens/AbstractsLlama-8M", device=device)

# Sample a short continuation in the model's academic register
output = pipe("We propose", max_new_tokens=100, do_sample=True)
print(output[0]['generated_text'])
```
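
With a model this small, sampling (`do_sample=True`) generally gives more usable output than greedy decoding, which tends to fall into repetition loops; keeping `max_new_tokens` at or below roughly 128 matches the short-form limit noted above.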