
πŸ€– REY 0.1

REY 0.1 is an experimental conversational AI model developed as part of the Visvam AI project.

Creator: Akshaj Chiguru

REY 0.1 is designed to provide helpful conversational responses and to serve as the first prototype of the Visvam AI assistant system.


🧠 Base Model

REY 0.1 is fine-tuned from the base model:

Qwen/Qwen2.5-7B-Instruct

The model architecture comes from the Qwen family of large language models developed by Alibaba Cloud.


βš™οΈ Training

Training Platforms:

  • Google Colab
  • Kaggle

Fine-tuning Method:

  • Instruction tuning on a custom conversational dataset

Training Goals:

  • Improve conversational responses
  • Provide assistant-style replies
  • Build the foundation for the Visvam AI assistant

πŸ“¦ Model Format

The model is provided in GGUF format for local inference.

Compatible with:

  • llama.cpp
  • llama-cpp-python
  • other local applications with GGUF support
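Loading the GGUF file with llama-cpp-python can be sketched as below. The model filename, context size, and system prompt are illustrative assumptions, not values shipped with this card; substitute the actual GGUF file from the repository.

```python
import os

MODEL_PATH = "rey-0.1-q5_k_m.gguf"  # assumed filename; use the file from this repo


def build_messages(user_text):
    """Build an OpenAI-style chat message list with a guiding system prompt."""
    return [
        {"role": "system", "content": "You are REY, a helpful assistant."},
        {"role": "user", "content": user_text},
    ]


# Only attempt inference if the model file is actually present.
if os.path.exists(MODEL_PATH):
    from llama_cpp import Llama

    llm = Llama(model_path=MODEL_PATH, n_ctx=4096)
    out = llm.create_chat_completion(
        messages=build_messages("Hello!"),
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
```

The same GGUF file works unchanged with the llama.cpp command-line tools.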

πŸš€ Features

  • Conversational AI assistant
  • Instruction-following responses
  • Compatible with local inference engines
  • Designed to integrate with web interfaces
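For web-interface integration, one option is llama-cpp-python's OpenAI-compatible server (`python -m llama_cpp.server --model <gguf-file>`). The sketch below queries such a local endpoint; the URL and port are assumptions based on that server's defaults:

```python
import json
import urllib.request

# Assumed local endpoint; adjust host/port to your server configuration.
ENDPOINT = "http://localhost:8000/v1/chat/completions"


def make_request_body(user_text):
    """Build an OpenAI-style chat completion payload as JSON bytes."""
    return json.dumps({
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": 128,
    }).encode("utf-8")


try:
    req = urllib.request.Request(
        ENDPOINT,
        data=make_request_body("Hello!"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
except OSError:
    print("local server not running")
```

Any front end that speaks the OpenAI chat-completions format can reuse the same endpoint.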

πŸ— Project

This model is part of the Visvam AI project.

Visvam AI focuses on building customizable local AI assistants that can run on personal devices.


πŸ‘¨β€πŸ’» Creator

Akshaj Chiguru

Founder of Visvam AI


πŸ“Š Version

Current Version:

REY 0.1

Status: Experimental Prototype

Future planned versions:

  • REY 0.2 – improved dataset and identity training
  • REY 0.3 – faster responses and optimization
  • REY 1.0 – stable release

⚠️ Limitations

This is an early experimental model and may:

  • produce incorrect responses
  • occasionally hallucinate
  • require prompt guidance

πŸ“œ License

This model is released under the license of its base model, Qwen/Qwen2.5-7B-Instruct.

Please review the base model's license before any commercial use.

πŸ“ Model Details

  • Format: GGUF (5-bit quantization)
  • Model size: 8B params
  • Architecture: qwen2