---
license: mit
datasets:
- kulia-moon/DeepRethink
language:
- en
base_model:
- openai-community/gpt2
tags:
- DeepQ
- DeepRethink integrated
- QFamily
- deep-thinking
- transformer
- pytorch
- conversational
- reasoning
- generative-model
- huggingface
- pretrained
- openai
- fine-tuned
- mind-extension
- philosophy
- sharegpt
- alignment
- language-model
- multi-turn
- q-and-a
- shareable
- teachable
- human-feedback
- deep-learning
- ai-model
- research
- synthetic-data
---

# DeepQ

## What is DeepQ?
π What is DeepQ?
DeepQ is a reasoning-first language model built on OpenAI's GPT-2 and fine-tuned with DeepRethink, a ShareGPT-style introspective dataset designed for multi-turn critical thinking and philosophical AI dialogue.

It is built to simulate how a human thinks before answering: a true "thinking" model rather than a purely reactive one.
## Features

- Deep reasoning before answering
- Trained on ShareGPT-style conversations plus DeepRethink Q&A
- Designed for philosophical, logical, and emotional introspection
- Multi-turn dialogue support
- Lightweight GPT-2 base for fast inference
- Runs on CPU and GPU
- Compatible with Hugging Face Transformers
- A solid base for alignment research or dialogue tuning
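Since GPT-2-based models have no built-in chat template, multi-turn support typically means concatenating turns into a single prompt. A minimal sketch of such a prompt builder, where the `Human:`/`Assistant:` turn labels are an assumption rather than a format confirmed by the project:

```python
# Minimal multi-turn prompt builder (sketch). The "Human:"/"Assistant:"
# turn labels are an assumption, not DeepQ's confirmed training format.
def build_prompt(history, user_message):
    """history: list of (user, assistant) string pairs; returns one prompt string."""
    lines = []
    for user, assistant in history:
        lines.append(f"Human: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"Human: {user_message}")
    lines.append("Assistant:")  # leave the last turn open for the model to complete
    return "\n".join(lines)

prompt = build_prompt(
    [("Hi", "Hello! How can I help?")],
    "What is consciousness in your view?",
)
```

The resulting string is what you would tokenize and pass to `model.generate`, appending each model reply back into `history` for the next turn.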
## Quickstart

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kulia-moon/DeepQ")
model = AutoModelForCausalLM.from_pretrained("kulia-moon/DeepQ")

input_text = "What is consciousness in your view?"
inputs = tokenizer(input_text, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
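For more varied introspective output, sampling usually works better than greedy decoding. A sketch of decoding settings you could pass to `model.generate` (the values are illustrative assumptions, not tuned DeepQ defaults):

```python
# Illustrative decoding settings (my assumptions, not official DeepQ defaults).
# Use as: output = model.generate(**inputs, **gen_kwargs)
gen_kwargs = {
    "max_new_tokens": 150,      # room for a reasoning chain before the answer
    "do_sample": True,          # sample instead of greedy decoding
    "temperature": 0.8,         # soften the next-token distribution
    "top_p": 0.9,               # nucleus sampling
    "repetition_penalty": 1.1,  # discourage loops in long generations
}
```

Note that GPT-2 has no dedicated pad token, so when batching inputs also pass `pad_token_id=tokenizer.eos_token_id`.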
## Training Dataset

- [kulia-moon/DeepRethink](https://huggingface.co/datasets/kulia-moon/DeepRethink)
- Includes aligned prompts, deep thought processes, and answer-reflection sequences
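ShareGPT-style data is usually stored as conversation records. A minimal sketch of that shape, where the field names follow the common ShareGPT convention and are an assumption about this dataset's exact schema:

```python
import json

# One ShareGPT-style conversation record (schema follows the common
# ShareGPT convention; an assumption, not a confirmed DeepRethink dump).
record = {
    "conversations": [
        {"from": "human", "value": "What is consciousness in your view?"},
        {"from": "gpt", "value": "Let me reason step by step before answering..."},
    ]
}
print(json.dumps(record, indent=2))
```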
## Ideal Use Cases

- AI alignment studies
- Thoughtful assistants
- Roleplay and dynamic chatbot interactions
- Educational tutoring models
- Cognitive science experiments
## Citation

If you use DeepQ, please cite the project or link back to:

```bibtex
@misc{deepq2025,
  author       = {Kulia Moon},
  title        = {DeepQ: A Deep Thinking Conversational Model},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kulia-moon/DeepRethink}},
}
```
## More

- Model: https://huggingface.co/kulia-moon/DeepQ
- Dataset: https://huggingface.co/datasets/kulia-moon/DeepRethink
- Website: Coming soon
- Chat demo: Coming soon

Built by QFamily Labs. Reimagine how LLMs think.