Instructions to use LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_")
model = AutoModelForCausalLM.from_pretrained("LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_
```
- SGLang
How to use LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Unsloth Studio
How to use LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_ with Docker Model Runner:
```shell
docker model run hf.co/LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_
```
Quote for Motivation:
"Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"
"To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"
— Leroy Dyer (1972–Present)
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/_Spydaz_Web_AGI_DeepThink_Prime_R1
🧪 Training Methodology

For this model, the focus has been mainly on methodology:

- Chain of Thought
- Step-by-step planning
- Tree of Thoughts
- Forest of Thoughts
- Graph of Thoughts
Domains of Focus

The model was trained with cross-domain expertise in:
✅ Coding and Software Engineering
✅ Medical Diagnostics and Advisory
✅ Financial Analysis and Logic
✅ General Problem Solving
✅ Daily Business Operations and Automation
🧠 Training Philosophy
Our training approach encourages cognitive emulation, blending multiple reasoning modes into a single thought engine. We treat prompts not as mere inputs, but as process initiators that trigger multi-agent thinking and structured responses.
The model has been instructed through role-based self-dialogue, encouraging:
- Expert role-playing
- Internal agent debates
- Methodology selection
- Emotional tone modulation
- Structured narrative output
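As an illustration only, a role-based self-dialogue prompt of this kind could be assembled as below. The role names, bracket format, and helper function are hypothetical, not the exact training format used for this model:

```python
# Hypothetical sketch of building a role-based self-dialogue prompt.
# Role names and the bracketed format are illustrative assumptions,
# not the actual template this model was trained on.
ROLES = ["Planner", "Critic", "Domain Expert"]

def build_self_dialogue_prompt(task: str) -> str:
    lines = [f"Task: {task}", "", "Think through this as an internal debate:"]
    for role in ROLES:
        # Each role contributes one turn of the internal discussion.
        lines.append(f"[{role}]: state your view on the task.")
    # A final synthesis turn merges the debate into one structured answer.
    lines.append("[Synthesis]: combine the views into a final structured answer.")
    return "\n".join(lines)

print(build_self_dialogue_prompt("Refactor a slow SQL query"))
```

Such a prompt would then be passed to the model as the user message (or folded into a system prompt) to trigger the multi-agent style of response described above.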
🧬 Method Implantation

The model is trained to:
- Emulate and follow graph-based reasoning paths
- Choose methodologies during task execution
- Maintain internal consistency through thinking traces
- Output structured answers that include planning, reflection, emotion, and critique
Specialized Tasks in Next Iterations:
- Context-aware code repair
- Context-based story generation
- Emotive entity recognition
- Emotion-aware technical responses
These features are being refined in sub-layers before integration into role-based or domain-specific agents. Our processes focus on reasoning as well as expert knowledge, which is extracted using generated agents and experts that serve as consultants for the task, or even produce components for the resulting output. In this respect, the process has become a thought process of its own making: by applying the prompt across domains of knowledge, we find that the varied thought traces are interchangeable between tasks, as are agent conversations, role-playing, and emotional speech for speech-processing apps.
To enable structured outputs, we hard-trained the model using various tags, so that our mixed datasets of reasoning, planning, and critique techniques can be combined into this training layer. We have now taken the concept of training layers further and understand that fine-tuning can truly train a collection of responses as well as create a variety of response types: if a task is too generalized, the model may not respond well to a complex prompt; if the task is too complex, a simple prompt will not allow the answer to be extracted.
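One practical consequence of tag-based training is that downstream code can recover the structured sections from a raw response. A minimal sketch, assuming XML-style paired tags; the specific tag names (`plan`, `critique`) are illustrative, since the exact tags used in training are not documented here:

```python
import re

# Recover tag-delimited sections from a model response.
# Assumes XML-style paired tags such as <plan>...</plan>;
# the tag names shown are hypothetical examples.
def extract_sections(text: str) -> dict:
    sections = {}
    # \1 backreference matches the closing tag to its opening tag.
    for tag, body in re.findall(r"<(\w+)>(.*?)</\1>", text, re.DOTALL):
        sections[tag] = body.strip()
    return sections

response = "<plan>1. Parse input 2. Sort</plan><critique>Sorting may be O(n log n).</critique>"
print(extract_sections(response))
# {'plan': '1. Parse input 2. Sort', 'critique': 'Sorting may be O(n log n).'}
```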
🔍 Knowledge Base and Evaluation Strategy
We emphasize knowledge diversity over conventional multiple-choice datasets. Our philosophy:
“Don’t teach answers through elimination. Teach the mind to reach correct conclusions through reasoning.”
It is important to deploy a varied collection of knowledge-based and evaluation datasets. Multiple-choice datasets are not great for training: in truth we do not use multiple choice in our conversations, and you could even be planting wrong options that the model may investigate later, when what we really want is a collection of right answers with varied methods for reaching the correct outputs. GRPO reward training can be useful for understanding the methods and routes your model may take: if you find your training reasoning processes are not reaching answers the model has not been trained on, we suggest using such methods to discover what the model is thinking and lock in the correct routes, providing 3-4 potential explanations for the model to select from its pool. Later you can retrain the same data with precise, or preferred, routes.
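The reward side of this idea can be sketched very simply: given several candidate explanations for the same prompt, reward the ones that reach the known-correct answer. This is only an illustration of the scoring step; actual GRPO training would be run through a framework such as TRL, and a real reward would usually also score the reasoning route, not just the final answer:

```python
# Minimal sketch of an answer-matching reward over a pool of candidate
# completions. Illustrative only; a production GRPO setup (e.g. via TRL)
# would combine this with route/format rewards and group normalization.
def answer_reward(completions: list[str], correct_answer: str) -> list[float]:
    # 1.0 if the candidate contains the correct answer, else 0.0.
    return [1.0 if correct_answer in c else 0.0 for c in completions]

candidates = [
    "Step 1: break the problem down... therefore the answer is 42.",
    "I think the answer is 41.",
    "Reasoning path B leads to the same place: the answer is 42.",
]
print(answer_reward(candidates, "42"))  # [1.0, 0.0, 1.0]
```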
Implanting Methodologies

In our prompts we now also deploy graphs and examples of methodologies the model should follow, as well as roles it has been trained in and behaviours we expect it to display. This can range from reasoning to emotive responses, to charting a thinking or problem-solving process similar to LangChain (but internal).
🧭 Roadmap Future releases will expand:
- Agent simulation (multi-role task orchestration)
- Inner monologue tracing
- Graph-of-Thought → Action planning pipelines
- Dialogue with expert personas
- Visual output (Mermaid, Graphviz) integrated with reasoning
This model is part of the Spydaz Web AGI Project, a long-term initiative to build autonomous, multimodal, emotionally-aware AGI systems with fully internalized cognitive frameworks.
If your goal is to push boundaries in reasoning, decision-making, or intelligent tooling — this model is your launchpad.