---
license: mit
datasets:
  - kulia-moon/DeepRethink
language:
  - en
base_model:
  - openai-community/gpt2
tags:
  - DeepQ
  - DeepRethink integrated
  - QFamily
  - deep-thinking
  - transformer
  - pytorch
  - conversational
  - reasoning
  - generative-model
  - huggingface
  - pretrained
  - openai
  - fine-tuned
  - mind-extension
  - philosophy
  - sharegpt
  - alignment
  - language-model
  - multi-turn
  - q-and-a
  - shareable
  - teachable
  - human-feedback
  - deep-learning
  - ai-model
  - research
  - synthetic-data
---

# 🧠 DeepQ



## 🔍 What is DeepQ?

DeepQ is an advanced, reasoning-first language model built on the foundation of OpenAI's GPT-2, enhanced with DeepRethink, a ShareGPT-style introspective dataset designed for multi-turn critical thinking and philosophical AI dialogue.

It's built to simulate how a human thinks before answering: a true "thinking" model rather than a reactive one.


## ✨ Features

- 🤔 Deep reasoning before answering
- 🧩 Trained on ShareGPT-style conversations plus DeepRethink Q&A
- 🧠 Designed for philosophical, logical, and emotional introspection
- 🔄 Multi-turn dialogue support
- ⚡️ Lightweight GPT-2 base for fast inference
- 🧪 Runs on CPU and GPU
- 🔌 Compatible with Hugging Face Transformers
- 🧬 A solid base for alignment research or dialogue tuning

## 🚀 Quickstart

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kulia-moon/DeepQ")
model = AutoModelForCausalLM.from_pretrained("kulia-moon/DeepQ")

input_text = "What is consciousness in your view?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
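Greedy decoding (the `generate` default) tends to loop or repeat on small GPT-2-class models. Sampling settings help; the values below are generic starting points, not tuned for DeepQ:

```python
# Illustrative sampling settings for model.generate(**inputs, **gen_kwargs).
# These values are generic starting points, not tuned for DeepQ.
gen_kwargs = {
    "max_new_tokens": 100,
    "do_sample": True,          # sample instead of greedy decoding
    "temperature": 0.8,         # soften the next-token distribution
    "top_p": 0.95,              # nucleus sampling: keep the top 95% probability mass
    "repetition_penalty": 1.2,  # discourage verbatim loops
}
```

Pass them as `model.generate(**inputs, **gen_kwargs)`; lower the temperature for more deterministic answers.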

## 📚 Training Dataset

- 🔗 [kulia-moon/DeepRethink](https://huggingface.co/datasets/kulia-moon/DeepRethink)
- Includes aligned prompts, deep thought processes, and answer-reflection sequences
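The dataset can be fetched with the 🤗 `datasets` library. The helper below is a sketch for inspecting records; the `conversations`/`from`/`value` keys are assumptions based on the ShareGPT convention, not confirmed by the dataset card:

```python
# Load with the Hugging Face `datasets` library (pip install datasets):
#   from datasets import load_dataset
#   ds = load_dataset("kulia-moon/DeepRethink", split="train")
#
# Flatten one ShareGPT-style record into plain text for inspection.
# The "conversations"/"from"/"value" keys are assumptions based on the
# ShareGPT convention, not confirmed by the dataset card.
def flatten_record(record):
    turns = record.get("conversations", [])
    return "\n".join(f'{t.get("from", "?")}: {t.get("value", "")}' for t in turns)

example = {
    "conversations": [
        {"from": "human", "value": "What is time?"},
        {"from": "gpt", "value": "Let me reason through this carefully..."},
    ]
}
print(flatten_record(example))
```

Check a real record first (`ds[0]`) and adjust the key names to whatever schema the dataset actually uses.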

## 🧪 Ideal Use Cases

- 🤖 AI alignment studies
- 🧠 Thoughtful assistants
- 💬 Roleplay and dynamic chatbot interactions
- 📚 Educational tutoring models
- 🧬 Cognitive science experiments
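GPT-2 has no built-in chat template, so multi-turn prompts for the chatbot use cases above have to be assembled manually. A minimal sketch, assuming a plain `User:`/`Assistant:` labeling convention (the actual format used in training is not documented here):

```python
def build_prompt(turns, user_label="User", assistant_label="Assistant"):
    """Join (role, text) turns into one prompt and cue the model to answer.

    The label convention is hypothetical; match whatever format the
    DeepRethink training data actually uses.
    """
    lines = [
        f'{user_label if role == "user" else assistant_label}: {text}'
        for role, text in turns
    ]
    lines.append(f"{assistant_label}:")  # trailing cue for the next reply
    return "\n".join(lines)

prompt = build_prompt([
    ("user", "What is consciousness in your view?"),
    ("assistant", "Let me think about what 'consciousness' presupposes..."),
    ("user", "Can you go deeper?"),
])
```

Feed `prompt` to the tokenizer exactly as in the quickstart, and truncate generation at the next `User:` marker if the model starts writing both sides of the conversation.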

## 📒 Citation

If you use DeepQ, please cite the project or link back to:

```bibtex
@misc{deepq2025,
  author       = {Kulia Moon},
  title        = {DeepQ: A Deep Thinking Conversational Model},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kulia-moon/DeepRethink}},
}
```

## 🔗 More


🧠 Built by QFamily Labs. Reimagine how LLMs think.