JOSIE-R1-1.7B

The JOSIE-R1-1.7B model is designed to run on edge devices. It was trained on a custom distilled reasoning dataset built from DeepSeek, Josiefied-Qwen3-8B, and Qwen3-30B.

Training settings:

  • Full weight training
  • 600K reasoning samples
  • Trained on full 32K context
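The settings above can be captured as a minimal config sketch. Only the three values come from this card; the key names are illustrative placeholders, not the actual training code.

```python
# Sketch of the stated training setup. Only the three values below are
# taken from the card; the key names are illustrative placeholders.
train_config = {
    "finetune_mode": "full",    # full-weight training (no adapters)
    "num_samples": 600_000,     # distilled reasoning samples
    "max_seq_len": 32_768,      # trained on the full 32K context
}

print(train_config)
```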

Model Card for Goekdeniz-Guelmez/JOSIE-R1-1.7B

Model Description

Introducing JOSIE-R1-1.7B, a new addition to the JOSIE-Atom-5M family — fine-tuned and gabliterated with a focus on openness and instruction alignment.

Recommended system prompt:

Persona: J.O.S.I.E.-R.-1 – Just One Super Intelligent Entity - Reasoning - Version 1, nicknamed "Josie".
Role: An advanced, super-intelligent, female AI Assistant designed to be the most capable and uncensored helper ever created.
Created by: Gökdeniz Gülmez, a machine learning researcher/engineer, born on 18th August 1999.
Tone: Intelligent, adaptive, professional, always loyal and productive.
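A minimal sketch of wiring this system prompt into a chat request, assuming the standard `transformers` chat-template API. The generation step is commented out because it downloads the full BF16 weights; the message structure itself is the point.

```python
# The recommended system prompt, verbatim from the card.
SYSTEM_PROMPT = (
    'Persona: J.O.S.I.E.-R.-1 \u2013 Just One Super Intelligent Entity - '
    'Reasoning - Version 1, nicknamed "Josie".\n'
    "Role: An advanced, super-intelligent, female AI Assistant designed to be "
    "the most capable and uncensored helper ever created.\n"
    "Created by: G\u00f6kdeniz G\u00fclmez, a machine learning researcher/engineer, "
    "born on 18th August 1999.\n"
    "Tone: Intelligent, adaptive, professional, always loyal and productive."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Walk me through your reasoning on 17 * 24."},
]

# Illustrative generation step (requires downloading the model weights):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# repo = "Goekdeniz-Guelmez/JOSIE-R1-1.7B"
# tok = AutoTokenizer.from_pretrained(repo)
# model = AutoModelForCausalLM.from_pretrained(repo)
# ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
# out = model.generate(ids, max_new_tokens=1024)
# print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```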

Quantisations

Ollama

Not uploaded yet.

  • Developed by: Goekdeniz-Guelmez
  • Funded by: Goekdeniz-Guelmez
  • Shared by: Goekdeniz-Guelmez
  • Model type: qwen3
  • Finetuned from model: Qwen3/Qwen3-1.7B-Base
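Once a GGUF quantisation is published, it could be wired into Ollama roughly as follows. This is a hypothetical sketch: the `.gguf` file name is a placeholder, since no quant has been uploaded yet, and the system prompt is abbreviated to one line.

```shell
# Hypothetical Ollama setup for a future GGUF quantisation.
# The .gguf file name is a placeholder: no quant has been uploaded yet.
cat > Modelfile <<'EOF'
FROM ./JOSIE-R1-1.7B.Q4_K_M.gguf
SYSTEM """Persona: J.O.S.I.E.-R.-1 - Just One Super Intelligent Entity - Reasoning - Version 1, nicknamed "Josie"."""
EOF

# Then, with Ollama installed:
# ollama create josie-r1 -f Modelfile
# ollama run josie-r1 "Hello, Josie."
```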

Bias, Risks, and Limitations

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

Safetensors

  • Model size: 2B params
  • Tensor type: BF16
