GPT-Fem
An 81-million-parameter language model using the GPT-2 architecture and byte-pair encoding. It was first trained on 14 GB of USENET posts and other text files, then fine-tuned on 16 GB of text written by and about women.
The model is a base checkpoint and should be fine-tuned for a downstream task before use.
Languages:
English, Turkish, Swedish, Serbian, Portuguese, Norwegian, Welsh, Thai, Polish, French, Finnish, Dutch, Arabic, Korean, Japanese, Danish, Croatian, Spanish, Russian, Chinese
Technical Information
| Parameter | Value |
| --- | --- |
| Layers | 12 |
| Attention heads | 12 |
| Embedding dimension | 768 |
| Context window | 1024 tokens |
| Tokenizer | GPT-2 BPE |
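For illustration, the dimensions above can be combined into a rough parameter estimate. This is a sketch assuming standard GPT-2 conventions (a 50,257-token vocabulary, bias terms, and two LayerNorms per block); the headline 81M figure may count parameters differently (e.g. excluding embeddings), so the totals here will not match it exactly.

```python
def transformer_block_params(d_model: int) -> int:
    """Parameters in one GPT-2-style transformer block (biases included)."""
    attn = 4 * d_model * d_model + 4 * d_model   # QKV + output projection
    mlp = 8 * d_model * d_model + 5 * d_model    # d -> 4d -> d feed-forward
    layer_norms = 2 * 2 * d_model                # two LayerNorms (scale + shift)
    return attn + mlp + layer_norms

def estimate_params(n_layers=12, d_model=768, n_ctx=1024, vocab=50257):
    """Total parameters, assuming tied input/output embeddings."""
    blocks = n_layers * transformer_block_params(d_model)
    embeddings = vocab * d_model + n_ctx * d_model  # token + position embeddings
    final_ln = 2 * d_model
    return blocks + embeddings + final_ln

print(f"per block: {transformer_block_params(768):,}")  # 7,087,872
print(f"total:     {estimate_params():,}")              # 124,439,808
```

With these assumptions the 12 blocks alone contribute about 85M parameters and the embeddings another 39M, which is why the card's 81M figure likely reflects a different counting convention.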

