---
language: en
license: apache-2.0
tags:
- llm
- gpt
- create-llm
- pytorch
base_model:
- BabyLM-community/babylm-baseline-100m-gpt-bert-causal-focus
---

# nova

This model was trained using [create-llm](https://github.com/theaniketgiri/create-llm).

## Model Description

A language model trained with the create-llm framework.

## Usage

```python
import torch
from transformers import AutoTokenizer

# Load the checkpoint (may be a full module or a state_dict,
# depending on how create-llm saved it)
model = torch.load('pytorch_model.bin', map_location='cpu')
model.eval()

# Load tokenizer (if available)
try:
    tokenizer = AutoTokenizer.from_pretrained("cthyatt/nova")
except Exception:
    print("Tokenizer not available")

# Generate text
# Add your generation code here
```
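The generation step is left as a placeholder above. It typically amounts to a decoding loop: feed the tokens seen so far through the model, pick the next token from the final logits, and repeat. A minimal greedy-decoding sketch is below; since the nova checkpoint's exact forward signature is not specified here, it uses a hypothetical stand-in model (any callable returning `(batch, seq, vocab)` logits):

```python
import torch

def greedy_generate(model, input_ids, max_new_tokens=20, eos_id=None):
    """Greedy decoding: repeatedly append the highest-probability next token."""
    ids = input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids)  # expected shape: (batch, seq, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=1)
        if eos_id is not None and next_id.item() == eos_id:
            break
    return ids

# Stand-in "model" for illustration: always predicts (last_token + 1) % vocab.
vocab_size = 10
def toy_model(ids):
    logits = torch.zeros(ids.shape[0], ids.shape[1], vocab_size)
    nxt = (ids[:, -1] + 1) % vocab_size
    logits[torch.arange(ids.shape[0]), -1, nxt] = 1.0
    return logits

out = greedy_generate(toy_model, torch.tensor([[3]]), max_new_tokens=4)
print(out.tolist())  # [[3, 4, 5, 6, 7]]
```

With the real model, replace `toy_model` with the loaded checkpoint and use the tokenizer to encode the prompt and decode the returned ids.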

## Training Details

- **Framework:** PyTorch
- **Tool:** create-llm
- **Deployment:** Hugging Face Hub

## Citation

```bibtex
@misc{cthyatt-nova,
  author = {Sir. Christopher Thomas Hyatt},
  title = {nova},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/cthyatt/nova}}
}
```