---
title: πŸ€– Agentic Browser
emoji: πŸ€–
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 8501
pinned: false
license: mit
---

# πŸ€– Agentic Browser

A privacy-focused AI assistant that runs entirely on your own machine. Chat with open-source language models directly in your browser, choosing between lightweight and more powerful options depending on your hardware.

## 🌟 Features

- **Local AI Models**: Run models entirely on your own hardware
- **Multiple Models**: Choose between different models based on your needs
- **Privacy-Focused**: Your data never leaves your machine
- **Easy to Use**: Simple, intuitive chat interface
- **Customizable**: Adjust parameters such as temperature for different response styles

## πŸš€ Getting Started

### Prerequisites

- Python 3.10 or higher
- Git
- At least 8 GB of RAM (16 GB+ recommended for larger models)
- At least 10 GB of free disk space for models

### Installation

1. Clone the repository:

   ```bash
   git clone https://huggingface.co/spaces/anu151105/agentic-browser
   cd agentic-browser
   ```

2. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Download the models (this may take a while):

   ```bash
   python scripts/download_models.py --model tiny-llama
   ```

   For the full experience with all models:

   ```bash
   python scripts/download_models.py --model all
   ```
    
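The internals of `scripts/download_models.py` are not shown in this README. As a rough sketch of what such a download step typically does, the snippet below maps a CLI alias to a Hugging Face Hub repository and fetches it with `huggingface_hub.snapshot_download`; the alias table and repo IDs are illustrative assumptions, not the project's actual configuration:

```python
# Illustrative sketch only -- the real scripts/download_models.py, its aliases,
# and its repo IDs may differ from what is assumed here.

# Assumed alias -> Hugging Face Hub repo mapping (not the project's real table).
MODEL_REPOS = {
    "tiny-llama": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "mistral-7b": "mistralai/Mistral-7B-Instruct-v0.2",
}

def resolve_repo(alias: str) -> str:
    """Map a CLI alias such as 'tiny-llama' to a Hub repo id."""
    try:
        return MODEL_REPOS[alias]
    except KeyError:
        raise ValueError(f"Unknown model alias: {alias!r}") from None

def download(alias: str, cache_dir: str = "./models") -> str:
    """Fetch every file of the model into the local cache directory."""
    from huggingface_hub import snapshot_download  # deferred import; needs network when called
    return snapshot_download(repo_id=resolve_repo(alias), cache_dir=cache_dir)
```

An `--model all` option would then just loop over every key in the alias table.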

### Running the Application

1. Start the Streamlit app:

   ```bash
   streamlit run src/streamlit_app.py
   ```

2. Open your browser to the URL shown in the terminal (usually http://localhost:8501).

3. Select a model from the sidebar and click "Load Model".

4. Start chatting!

## πŸ€– Available Models

- **TinyLlama (1.1B)**: fast and lightweight; runs well on most hardware
  - Great for quick responses
  - Lower resource requirements
  - Good for testing and development
- **Mistral-7B (7B)**: more capable, but requires more resources
  - Better understanding and generation
  - Requires more RAM and VRAM
  - Slower response times

βš™οΈ Configuration

You can customize the behavior of the application by setting environment variables in a .env file:

# Model settings
MODEL_DEVICE=auto  # 'auto', 'cuda', 'cpu'
MODEL_CACHE_DIR=./models

# Text generation settings
DEFAULT_TEMPERATURE=0.7
MAX_TOKENS=1024

# UI settings
THEME=light  # 'light' or 'dark'
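The README doesn't show how these variables are consumed by the app. A minimal sketch of a settings reader that falls back to the defaults listed above (the function and key names are illustrative; a `.env` file could be loaded into the environment first with python-dotenv's `load_dotenv()`):

```python
# Illustrative settings reader; the app's real config module is not shown,
# so the function and key names here are assumptions.
import os

def get_settings() -> dict:
    """Read settings from the environment, falling back to the README defaults."""
    return {
        "device": os.environ.get("MODEL_DEVICE", "auto"),           # 'auto', 'cuda', or 'cpu'
        "cache_dir": os.environ.get("MODEL_CACHE_DIR", "./models"),
        "temperature": float(os.environ.get("DEFAULT_TEMPERATURE", "0.7")),
        "max_tokens": int(os.environ.get("MAX_TOKENS", "1024")),
        "theme": os.environ.get("THEME", "light"),                  # 'light' or 'dark'
    }
```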

πŸ› οΈ Development

Adding New Models

  1. Edit config/model_config.py to add your model configuration
  2. Update the DEFAULT_MODELS dictionary with your model details
  3. The model will be available in the UI after restarting the app
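The shape of `DEFAULT_MODELS` isn't documented here, so the fields below are an assumption about what such an entry might look like; registering a new model would then be a single additional dictionary entry:

```python
# Assumed shape of DEFAULT_MODELS in config/model_config.py; the project's
# real fields may differ.
DEFAULT_MODELS = {
    "tiny-llama": {
        "display_name": "TinyLlama (1.1B)",
        "repo_id": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed repo id
        "min_ram_gb": 8,
    },
}

# Adding a new model is then one more entry (names here are hypothetical):
DEFAULT_MODELS["my-model"] = {
    "display_name": "My Model (3B)",
    "repo_id": "example-org/my-model-3b",
    "min_ram_gb": 12,
}
```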

### Running Tests

```bash
pytest tests/
```
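The project's test suite isn't shown here. As an illustration of the pytest style used under `tests/`, a minimal test for a hypothetical chat-history helper might look like this (both the helper and the tests are stand-ins, not the project's real code):

```python
# Hypothetical helper plus pytest-style tests; not the project's actual code.
def truncate_history(messages, max_messages):
    """Keep only the most recent max_messages chat turns."""
    return messages[-max_messages:] if max_messages > 0 else []

def test_truncate_keeps_most_recent():
    assert truncate_history(["m1", "m2", "m3", "m4"], 2) == ["m3", "m4"]

def test_truncate_zero_returns_empty():
    assert truncate_history(["m1"], 0) == []
```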

## πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Hugging Face for the amazing Transformers library
  • Streamlit for the awesome web framework
  • The open-source AI community for making these models available emoji: πŸš€ colorFrom: red colorTo: red sdk: docker app_port: 8501 tags:
  • streamlit pinned: false short_description: An autonomus browser agent for browser based tasks license: mit
