Instructions to use Raffleraffle/manifoldgl with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use Raffleraffle/manifoldgl with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")
model = PeftModel.from_pretrained(base_model, "Raffleraffle/manifoldgl")
```
- llama-cpp-python
How to use Raffleraffle/manifoldgl with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Raffleraffle/manifoldgl",
    filename="igbundle_qwen7b_riemannian.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Raffleraffle/manifoldgl with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Raffleraffle/manifoldgl

# Run inference directly in the terminal:
llama-cli -hf Raffleraffle/manifoldgl
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Raffleraffle/manifoldgl

# Run inference directly in the terminal:
llama-cli -hf Raffleraffle/manifoldgl
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Raffleraffle/manifoldgl

# Run inference directly in the terminal:
./llama-cli -hf Raffleraffle/manifoldgl
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Raffleraffle/manifoldgl

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Raffleraffle/manifoldgl
```
Use Docker
```sh
docker model run hf.co/Raffleraffle/manifoldgl
```
- LM Studio
- Jan
- vLLM
How to use Raffleraffle/manifoldgl with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Raffleraffle/manifoldgl"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Raffleraffle/manifoldgl",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```sh
docker model run hf.co/Raffleraffle/manifoldgl
```
- Ollama
How to use Raffleraffle/manifoldgl with Ollama:
```sh
ollama run hf.co/Raffleraffle/manifoldgl
```
- Unsloth Studio
How to use Raffleraffle/manifoldgl with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Raffleraffle/manifoldgl to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Raffleraffle/manifoldgl to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Raffleraffle/manifoldgl to start chatting
```
- Pi
How to use Raffleraffle/manifoldgl with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Raffleraffle/manifoldgl
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Raffleraffle/manifoldgl" }
      ]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Raffleraffle/manifoldgl with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Raffleraffle/manifoldgl
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Raffleraffle/manifoldgl
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use Raffleraffle/manifoldgl with Docker Model Runner:
```sh
docker model run hf.co/Raffleraffle/manifoldgl
```
- Lemonade
How to use Raffleraffle/manifoldgl with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Raffleraffle/manifoldgl
```
Run and chat with the model
```sh
lemonade run user.manifoldgl-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
ManifoldGL – Information‑Geometric Adapter for LLMs
ManifoldGL is a parameter‑efficient adapter that enforces hyperbolic geometry on the latent space of large language models. It treats the meaning of a token as a fiber over a hyperbolic base manifold (a Poincaré ball), rather than as a single vector in flat Euclidean space. Latent states are projected onto the ball, and attention is computed using geodesic distance. A sheaf‑theoretic consistency loss and natural‑gradient optimization maintain semantic structure during training.
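The adapter's internals are not included in this card; as an illustration of the geometry it describes, here is a minimal pure-Python sketch of the standard Poincaré-ball projection and geodesic distance at curvature κ = −1. The function names are hypothetical, not part of the released adapter code.

```python
import math

def project_to_ball(x, eps=1e-5):
    """Project a vector onto the open unit (Poincare) ball.

    Vectors with norm >= 1 are rescaled to lie just inside the
    boundary; points already inside the ball are left unchanged.
    """
    norm = math.sqrt(sum(c * c for c in x))
    max_norm = 1.0 - eps
    if norm >= max_norm:
        return [c * max_norm / norm for c in x]
    return list(x)

def poincare_distance(u, v):
    """Geodesic distance on the Poincare ball (curvature -1):

    d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))
    """
    diff_sq = sum((a - b) ** 2 for a, b in zip(u, v))
    u_sq = sum(c * c for c in u)
    v_sq = sum(c * c for c in v)
    return math.acosh(1.0 + 2.0 * diff_sq / ((1.0 - u_sq) * (1.0 - v_sq)))

# The same Euclidean gap is geodesically much longer near the boundary,
# which is what gives hyperbolic space its tree-like capacity:
center_pair = poincare_distance([0.0, 0.0], [0.1, 0.0])
edge_pair = poincare_distance([0.8, 0.0], [0.9, 0.0])
print(center_pair < edge_pair)  # True
```

Replacing dot-product attention scores with such geodesic distances is the mechanism the paragraph above refers to.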
Motivation and theoretical background
Modern LLMs embed tokens in a Euclidean vector space. While convenient, Euclidean geometry has limited capacity to represent hierarchical structure: the volume of flat space grows polynomially with radius, whereas hierarchical trees expand exponentially with depth. By contrast, hyperbolic space grows exponentially and preserves both local and global relationships in a hierarchy. Hyperbolic embeddings have been shown to outperform Euclidean ones on lexical entailment, similarity, and analogy tasks. ManifoldGL leverages these properties by modelling the latent space as a fiber bundle over a hyperbolic base: each point in the Poincaré ball encodes a context, and its fiber contains a distribution of semantic components.
Results on ARC‑AGI benchmark
ManifoldGL fine‑tuned on Qwen2.5‑7B improves task accuracy on the ARC‑AGI benchmark from 12.4 % to 28.7 %, a 131.5 % relative improvement. The model also achieves a Manifold Faithfulness Rate (MFR) of 94.2 %, indicating high adherence to the hyperbolic constraints, and maintains a curvature close to the target κ = ‑1 (mean ‑0.98 ± 0.04). Ablation studies show that removing curvature regularization, natural gradients, sheaf consistency or the hyperbolic target significantly reduces accuracy; the Euclidean target ablation causes the largest drop (–10.9 %), highlighting the importance of hyperbolic geometry.
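The relative-improvement figure follows directly from the two reported accuracies:

```python
# Quick arithmetic check of the reported relative improvement
baseline = 12.4      # ARC-AGI accuracy of the base model, in percent
manifoldgl = 28.7    # ARC-AGI accuracy with the adapter, in percent
relative_gain = (manifoldgl - baseline) / baseline * 100
print(round(relative_gain, 1))  # 131.5
```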
Files in this repository
This model card accompanies adapter weights trained with ManifoldGL. The files follow the structure of the original repository:
- adapter_config.json – configuration for PEFT/LoRA loading
- pytorch_adapter.bin – adapter weights
- README.md – this model card
Quick start
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model (Qwen2.5-7B)
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")

# Load the ManifoldGL adapter
model = PeftModel.from_pretrained(base_model, "Raffleraffle/manifoldgl")

# Now use model.generate(...) to generate text with hyperbolic adapters
```
Usage
This adapter can be loaded with PEFT on top of any compatible Qwen2.5‑7B model. During generation, latent states are projected into hyperbolic space and meaning is represented as fibers. We recommend using FP32 precision for maximum stability.
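A generation sketch along these lines, loading the base model in FP32 as recommended (downloading the 7B weights requires substantial disk and memory; the prompt is only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    torch_dtype=torch.float32,  # FP32, as recommended for stability
)
model = PeftModel.from_pretrained(base_model, "Raffleraffle/manifoldgl")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```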
Citation
If you use ManifoldGL in your work, please cite the accompanying thesis and repository.