| text | metadata |
|---|---|
🎉 cursor-tools 0.6.0-alpha is here! Your AI teammates just got even smarter. Let me show you what's cooking... 🧵
---
🕵️‍♂️ Your browser commands are now like having DevTools on autopilot! Console logs and network activity captured by default:
```bash
cursor-tools browser act "Add pizza to cart" --url=pizzaplace.c... | {
"source": "eastlondoner/cursor-tools",
"title": "tweets.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/tweets.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 1738
} |
Querying perplexity using sonar-pro for: Angular vs React in 2024 with maxTokens: 8000
In 2024, both Angular and React remain popular choices for front-end development, each with its own strengths and use cases. Here's a comprehensive comparison of Angular vs React in 2024:
## Architecture and Purpose
**Angular:**
- ... | {
"source": "eastlondoner/cursor-tools",
"title": "web-compare.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/web-compare.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 3422
} |
Querying perplexity using sonar-pro for: cursor-tools installation guide with maxTokens: 8000
Here's a comprehensive guide on how to install and set up cursor-tools:
## Installation
To install cursor-tools, follow these steps:
1. Ensure you have Node.js 18 or later installed on your system[4].
2. Run the interactiv... | {
"source": "eastlondoner/cursor-tools",
"title": "web-results.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/web-results.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 2698
} |
Querying perplexity using sonar-pro for: go to https://ui.shadcn.com/docs and extract detailed setup instructions for a new project, including installation steps, configuration, and initial setup. Focus on the getting started and installation sections. with maxTokens: 8000 | {
"source": "eastlondoner/cursor-tools",
"title": "local-research/shadcn-ui-setup.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/local-research/shadcn-ui-setup.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_s... |
# Contribution to Ryujinx
You can contribute to Ryujinx with PRs, testing of PRs and issues. Contributing code and other implementations is greatly appreciated alongside simply filing issues for problems you encounter.
Please read the entire document before continuing as it can potentially save everyone involved a sig... | {
"source": "ryujinx-mirror/ryujinx",
"title": "CONTRIBUTING.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/CONTRIBUTING.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 10011
} |
[links/discord]: https://discord.gg/xmHPGDfVCa
[badges/discord]: https://img.shields.io/discord/1291765437100720243?label=ryujinx-mirror&logo=discord&logoColor=FFFFFF&color=5865F3
As of now, the [ryujinx-mirror/ryujinx](https://github.com/ryujinx-mirror/ryujinx) repository serves as a downstream hard fork of the origi... | {
"source": "ryujinx-mirror/ryujinx",
"title": "README.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/README.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 9198
} |
# Documents Index
This repo includes several documents that explain both high-level and low-level concepts about Ryujinx and its functions. These are very useful for contributors, to get context that can be very difficult to acquire from just reading code.
Intro to Ryujinx
==================
Ryujinx is an open-sourc... | {
"source": "ryujinx-mirror/ryujinx",
"title": "docs/README.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/docs/README.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 1531
} |
# ffmpeg (LGPLv3)
<details>
<summary>See License</summary>
```
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this... | {
"source": "ryujinx-mirror/ryujinx",
"title": "distribution/legal/THIRDPARTY.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/distribution/legal/THIRDPARTY.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 36222
} |
# C# Coding Style
The general rule we follow is "use Visual Studio defaults".
Using an IDE that supports the `.editorconfig` standard will make this much simpler.
1. We use [Allman style](http://en.wikipedia.org/wiki/Indent_style#Allman_style) braces, where each brace begins on a new line. A single line statement bl... | {
"source": "ryujinx-mirror/ryujinx",
"title": "docs/coding-guidelines/coding-style.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/docs/coding-guidelines/coding-style.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size... |
# Pull Request Guide
## Contributing Rules
All contributions to the Ryujinx/Ryujinx repository are made via pull requests (PRs) rather than through direct commits. The pull requests are reviewed and merged by the maintainers after a review and at least two approvals from the core development team.
To merge pull requests... | {
"source": "ryujinx-mirror/ryujinx",
"title": "docs/workflow/pr-guide.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/docs/workflow/pr-guide.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 5010
} |
# Retrieval-Augmented Generation (RAG) Project
**_Think it. Build it. bRAG it._ 🚀 bRAGAI's coming soon (🤫)**
**[Join the waitlist](https://bragai.dev/)** for exclusive early access, be among the first to try your AI-powered full-stack development assistant, and transform ideas into production-ready web apps in minu... | {
"source": "bRAGAI/bRAG-langchain",
"title": "README.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/README.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 10051
} |
### Here are all the sources used to write up the `[1]_rag_setup_overview.ipynb` file:
1. LangSmith Documentation: https://docs.smith.langchain.com/
2. RAG Quickstart: https://python.langchain.com/docs/tutorials/rag/
3. Count Tokens: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with... | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[1]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[1]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 946
} |
### Here are all the sources used to write up the `[2]_rag_with_multi_query.ipynb` file:
1. LangSmith Documentation: https://docs.smith.langchain.com/
2. Multi Query Retriever Documentation: https://python.langchain.com/docs/how_to/MultiQueryRetriever/
3. RAG Fusion Documentation: https://github.com/langchain-ai/langc... | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[2]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[2]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 1090
} |
### Here are all the sources used to write up the `[3]_rag_routing_and_query_construction.ipynb` file:
1. https://docs.smith.langchain.com/ (LangSmith documentation)
2. https://python.langchain.com/docs/how_to/routing/ (Expression language & Embedding Router cookbook)
3. https://smith.langchain.com/public/c2ca61b4-381... | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[3]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[3]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 986
} |
### Here are all the sources used to write up the `[4]_rag_indexing_and_advanced_retrieval.ipynb` file:
1. https://www.youtube.com/watch?v=8OJC21T2SL4 (Greg Kamradt's video on document chunking)
2. https://docs.smith.langchain.com/ (LangSmith documentation)
3. https://blog.langchain.dev/semi-structured-multi-modal-rag... | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[4]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[4]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 989
} |
### Here are all the sources used to write up the `[5]_rag_retrieval_and_reranking.ipynb` file:
1. https://docs.smith.langchain.com/ (LangSmith documentation)
2. https://python.langchain.com/docs/integrations/retrievers/cohere-reranker#doing-reranking-with-coherererank (Cohere Re-Rank)
3. https://txt.cohere.com/rerank... | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[5]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[5]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 1022
} |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior... | {
"source": "bRAGAI/bRAG-langchain",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application"... |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and con... | {
"source": "bRAGAI/bRAG-langchain",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG ap... |
<p align="center">
<img src="assets/logo.png" height=100>
</p>
<div align="center">
<a href="https://yuewen.cn/videos"><img src="https://img.shields.io/static/v1?label=Step-Video&message=Web&color=green"></a>  
<a href="https://arxiv.org/abs/2502.10248"><img src="https://img.shields.io/static/v1?label=Tech ... | {
"source": "stepfun-ai/Step-Video-T2V",
"title": "README.md",
"url": "https://github.com/stepfun-ai/Step-Video-T2V/blob/main/README.md",
"date": "2025-02-08T08:46:51",
"stars": 2324,
"description": null,
"file_size": 13151
} |
# Contributing to openpi
We welcome contributions, improvements, and modifications. Everyone is welcome to use openpi in accordance to the [license](LICENSE). Contributors are also welcome to submit bug reports, feature requests, and pull requests. We can't promise to approve every pull request, and we are a small tea... | {
"source": "Physical-Intelligence/openpi",
"title": "CONTRIBUTING.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/CONTRIBUTING.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 3363
} |
# openpi
openpi holds open-source models and packages for robotics, published by the [Physical Intelligence team](https://www.physicalintelligence.company/).
Currently, this repo contains two types of models:
- the [π₀ model](https://www.physicalintelligence.company/blog/pi0), a flow-based diffusion vision-language-a... | {
"source": "Physical-Intelligence/openpi",
"title": "README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 14641
} |
### Docker Setup
All of the examples in this repo provide instructions for being run normally, and also using Docker. Although not required, the Docker option is recommended as this will simplify software installation, produce a more stable environment, and also allow you to avoid installing ROS and cluttering your ma... | {
"source": "Physical-Intelligence/openpi",
"title": "docs/docker.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/docs/docker.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 1615
} |
# Running openpi models remotely
We provide utilities for running openpi models remotely. This is useful for running inference on more powerful GPUs off-robot, and also helps keep the robot and policy environments separate (and e.g. avoid dependency hell with robot software).
## Starting a remote policy server
To st... | {
"source": "Physical-Intelligence/openpi",
"title": "docs/remote_inference.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/docs/remote_inference.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 2216
} |
# Run Aloha (Real Robot)
This example demonstrates how to run with a real robot using an [ALOHA setup](https://github.com/tonyzhaozh/aloha). See [here](../../docs/remote_inference.md) for instructions on how to load checkpoints and run inference. We list the relevant checkpoint paths for each provided fine-tuned model... | {
"source": "Physical-Intelligence/openpi",
"title": "examples/aloha_real/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/aloha_real/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 6501
} |
# Run Aloha Sim
## With Docker
```bash
export SERVER_ARGS="--env ALOHA_SIM"
docker compose -f examples/aloha_sim/compose.yml up --build
```
## Without Docker
Terminal window 1:
```bash
# Create virtual environment
uv venv --python 3.10 examples/aloha_sim/.venv
source examples/aloha_sim/.venv/bin/activate
uv pip sy... | {
"source": "Physical-Intelligence/openpi",
"title": "examples/aloha_sim/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/aloha_sim/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 731
} |
# Run DROID
This example shows how to run the fine-tuned $\pi_0$-FAST-DROID model on the [DROID robot platform](https://github.com/droid-dataset/droid). We also offer a $\pi_0$-DROID model that is fine-tuned from $\pi_0$ and uses flow action decoding. You can use it by replacing `pi0_fast_droid` with `pi0_droid` in th... | {
"source": "Physical-Intelligence/openpi",
"title": "examples/droid/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/droid/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 4841
} |
# LIBERO Benchmark
This example runs the LIBERO benchmark: https://github.com/Lifelong-Robot-Learning/LIBERO
Note: When updating requirements.txt in this directory, there is an additional flag `--extra-index-url https://download.pytorch.org/whl/cu113` that must be added to the `uv pip compile` command.
This example ... | {
"source": "Physical-Intelligence/openpi",
"title": "examples/libero/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/libero/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 1985
} |
# Simple Client
A minimal client that sends observations to the server and prints the inference rate.
You can specify which runtime environment to use with the `--env` flag. You can see the available options by running:
```bash
uv run examples/simple_client/main.py --help
```
## With Docker
```bash
export SERVER_... | {
"source": "Physical-Intelligence/openpi",
"title": "examples/simple_client/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/simple_client/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 588
} |
<div align="center">
<img src="https://github.com/user-attachments/assets/a4ccbc60-5248-4dca-8cec-09a6385c6d0f" width="768" height="192">
</div>
<strong>ClearerVoice-Studio</strong> is an open-source, AI-powered speech processing toolkit designed for researchers, developers, and end-users. It provides capabilities of ... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech ... |
# ClearVoice
## 👉🏻[HuggingFace Space Demo](https://huggingface.co/spaces/alibabasglab/ClearVoice)👈🏻 | 👉🏻[ModelScope Space Demo](https://modelscope.cn/studios/iic/ClearerVoice-Studio)👈🏻
## Table of Contents
- [1. Introduction](#1-introduction)
- [2. Usage](#2-usage)
## 1. Introduction
ClearVoice offers a ... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "clearvoice/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/clearvoice/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Mode... |
# SpeechScore
## 👉🏻[HuggingFace Space Demo](https://huggingface.co/spaces/alibabasglab/SpeechScore)👈🏻
## Table of Contents
- [1. Introduction](#1-introduction)
- [2. Usage](#2-usage)
- [3. Acknowledgements](#3-acknowledgements)
## 1. Introduction
SpeechScore is a wrapper designed for assessing speech quality. ... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "speechscore/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/speechscore/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Mo... |
# ClearerVoice-Studio: Train Speech Enhancement Models
## 1. Introduction
This repository provides training scripts for speech enhancement models. Currently, it supports training from scratch or finetuning for the following models:
|model name| sampling rate | Paper Link|
|----------|---------------|------------|
|FRCRN_SE_16K|... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/speech_enhancement/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/speech_enhancement/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open... |
# ClearerVoice-Studio: Train Speech Separation Models
## 1. Introduction
This repository provides flexible training and finetuning scripts for speech separation models. Currently, it supports both 8kHz and 16kHz sampling rates:
|model name| sampling rate | Paper Link|
|----------|---------------|------------|
|MossFo... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/speech_separation/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/speech_separation/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open S... |
# ClearerVoice-Studio: Train Speech Super-Resolution Models
## 1. Introduction
This repository provides flexible training and finetuning scripts for optimizing speech super-resolution models. One or more models can be trained to upscale lower sampling rates (>= 8kHz) to a 48kHz sampling rate | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/speech_super_resolution/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/speech_super_resolution/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolki... |
# ClearerVoice-Studio: Target Speaker Extraction Algorithms
## Table of Contents
- [1. Introduction](#1-introduction)
- [2. Usage](#2-usage)
- [3. Task: Audio-only Speaker Extraction Conditioned on a Reference Speech](#3-audio-only-speaker-extraction-conditioned-on-a-reference-speech)
- [4. Task: Audio-visual Speake... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/target_speaker_extraction/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/target_speaker_extraction/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing To... |
The SRMRpy toolbox is licensed under the MIT license.
> Copyright (c) 2014 João F. Santos, Tiago H. Falk
>
> Permission is hereby granted, free of charge, to any person obtaining a copy
> of this software and associated documentation files (the "Software"), to deal
> in the Software without restriction, including wi... | {
"source": "modelscope/ClearerVoice-Studio",
"title": "speechscore/scores/srmr/LICENSE.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/speechscore/scores/srmr/LICENSE.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open... |
# Face detector
This face detector is adapted from `https://github.com/cs-giung/face-detection-pytorch`. | {
"source": "modelscope/ClearerVoice-Studio",
"title": "clearvoice/models/av_mossformer2_tse/faceDetector/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/clearvoice/models/av_mossformer2_tse/faceDetector/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description":... |
# OpenDeepResearcher
This notebook implements an **AI researcher** that continuously searches for information based on a user query until the system is confident that it has gathered all the necessary details. It makes use of several services to do so:
- **SERPAPI**: To perform Google searches.
- **Jina**: To fetch a... | {
"source": "mshumer/OpenDeepResearcher",
"title": "README.md",
"url": "https://github.com/mshumer/OpenDeepResearcher/blob/main/README.md",
"date": "2025-02-03T23:08:25",
"stars": 2275,
"description": null,
"file_size": 4269
} |
<div align="center">
<picture>
<source srcset="figures/MiniMaxLogo-Dark.png" media="(prefers-color-scheme: dark)">
<img src="figures/MiniMaxLogo-Light.png" width="60%" alt="MiniMax-Text-01">
</source>
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimaxi.... | {
"source": "MiniMax-AI/MiniMax-01",
"title": "MiniMax-Text-01-Model-Card.md",
"url": "https://github.com/MiniMax-AI/MiniMax-01/blob/main/MiniMax-Text-01-Model-Card.md",
"date": "2025-01-14T15:43:28",
"stars": 2231,
"description": null,
"file_size": 18721
} |
<div align="center">
<picture>
<source srcset="figures/MiniMaxLogo-Dark.png" media="(prefers-color-scheme: dark)">
<img src="figures/MiniMaxLogo-Light.png" width="60%" alt="MiniMax-VL-01">
</source>
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimaxi.co... | {
"source": "MiniMax-AI/MiniMax-01",
"title": "MiniMax-VL-01-Model-Card.md",
"url": "https://github.com/MiniMax-AI/MiniMax-01/blob/main/MiniMax-VL-01-Model-Card.md",
"date": "2025-01-14T15:43:28",
"stars": 2231,
"description": null,
"file_size": 13386
} |
<div align="center">
<picture>
<source srcset="figures/MiniMaxLogo-Dark.png" media="(prefers-color-scheme: dark)">
<img src="figures/MiniMaxLogo-Light.png" width="60%" alt="MiniMax">
</source>
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimaxi.com/en" ... | {
"source": "MiniMax-AI/MiniMax-01",
"title": "README.md",
"url": "https://github.com/MiniMax-AI/MiniMax-01/blob/main/README.md",
"date": "2025-01-14T15:43:28",
"stars": 2231,
"description": null,
"file_size": 24872
} |
# Awesome AI/ML Resources
This repository contains free resources and a roadmap to learn Machine Learning and Artificial Intelligence in 2025.
## 📌 AI/ML Key Concepts
- [Supervised Learning](https://medium.com/@kodeinkgp/supervised-learning-a-comprehensive-guide-7032b34d5097)
- [Unsupervised Learning](https://cloud.... | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "README.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/README.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 12385... |
## Awesome AI/ML books to read
- [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
- [AI Engineering: Building Applications with Foundational Models](https://www.oreilly.com/library/view/ai-engineering/9781098166298/)
- [P... | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "books.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/books.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 1544
} |
## Awesome AI/ML Interview Prep
- [Introduction to Machine Learning Interviews](https://huyenchip.com/ml-interviews-book/)
- [Acing AI Interviews](https://medium.com/acing-ai/acing-ai-interviews/home) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "interviews.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/interviews.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size... |
## Awesome AI/ML Newsletters
- [AI Weekly](https://aiweekly.co/)
- [Rundown AI](https://www.therundown.ai/)
- [The AI/ML Engineer](https://www.aimlengineer.io/)
- [Artificial Intelligence Made Simple](https://artificialintelligencemadesimple.substack.com/?utm_source=recommendations_page&utm_campaign=1744179) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "newsletters.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/newsletters.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_si... |
## Awesome AI/ML Projects
### Beginner Projects
- [Titanic Survival Prediction](https://www.kaggle.com/c/titanic)
- [Iris Flower Classification](https://www.kaggle.com/datasets/uciml/iris)
- [House Price Prediction](https://www.kaggle.com/c/house-prices-advanced-regression-techniques)
- [Movie Recommendation System](h... | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "projects.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/projects.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 1... |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of... | {
"source": "clusterzx/paperless-ai",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API com... |
## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request | {
"source": "clusterzx/paperless-ai",
"title": "CONTRIBUTING.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/CONTRIBUTING.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatibl... |
# Privacy Policy for Paperless-AI Chat Extension
Last updated: 16.01.2025
## 1. General Information
The Paperless-AI Chat Extension ("the Extension") is a browser extension designed to enhance document interaction in Paperless-ngx through AI-powered chat functionality. We are committed to protecting your privacy and... | {
"source": "clusterzx/paperless-ai",
"title": "PRIVACY_POLICY.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/PRIVACY_POLICY.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compa... |
   ](https://stand-with-ukraine.pp.ua)
A (somewhat opinionated) list of SQL tips and tricks that I've picked up over the years.
There's so much you can do with SQL but I've fo... | {
"source": "ben-nour/SQL-tips-and-tricks",
"title": "README.md",
"url": "https://github.com/ben-nour/SQL-tips-and-tricks/blob/main/README.md",
"date": "2024-09-19T05:29:04",
"stars": 2143,
"description": "SQL tips and tricks",
"file_size": 14798
} |
# Evo 2: Genome modeling and design across all domains of life

Evo 2 is a state of the art DNA language model for long context modeling and design. Evo 2 models DNA sequences at single-nucleotide resolution at up to 1 million base pair context length using the [StripedHyena 2](https://github.com/Zy... | {
"source": "ArcInstitute/evo2",
"title": "README.md",
"url": "https://github.com/ArcInstitute/evo2/blob/main/README.md",
"date": "2025-02-13T23:19:47",
"stars": 2116,
"description": "Genome modeling and design across all domains of life",
"file_size": 8382
} |
# Development Guidelines
This document contains critical information about working with this codebase. Follow these guidelines precisely.
## Core Development Rules
1. Package Management
- ONLY use uv, NEVER pip
- Installation: `uv add package`
- Running tools: `uv run tool`
- Upgrading: `uv add --dev pac... | {
"source": "modelcontextprotocol/python-sdk",
"title": "CLAUDE.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/CLAUDE.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 3168
} |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of... | {
"source": "modelcontextprotocol/python-sdk",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"fi... |
# Contributing
Thank you for your interest in contributing to the MCP Python SDK! This document provides guidelines and instructions for contributing.
## Development Setup
1. Make sure you have Python 3.10+ installed
2. Install [uv](https://docs.astral.sh/uv/getting-started/installation/)
3. Fork the repository
4. C... | {
"source": "modelcontextprotocol/python-sdk",
"title": "CONTRIBUTING.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/CONTRIBUTING.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_siz... |
# MCP Python SDK
<div align="center">
<strong>Python implementation of the Model Context Protocol (MCP)</strong>
[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]
[![Documentation][docs-badge]][docs-url]
[![Specification][spec-badge]][spec-url]
as part of the Model Context Protocol project.
The security of our systems and user data is Anthropic’s top priority. We appre... | {
"source": "modelcontextprotocol/python-sdk",
"title": "SECURITY.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/SECURITY.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 834
... |
# Python SDK Examples
This folder aims to provide simple examples of using the Python SDK. Please refer to the
[servers repository](https://github.com/modelcontextprotocol/servers)
for real-world servers. | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"fi... |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior... | {
"source": "modelcontextprotocol/python-sdk",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context ... |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and con... | {
"source": "modelcontextprotocol/python-sdk",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Mode... |
# MCP Simple Prompt
A simple MCP server that exposes a customizable prompt template with optional context and topic parameters.
## Usage
Start the server using either stdio (default) or SSE transport:
```bash
# Using stdio transport (default)
uv run mcp-simple-prompt
# Using SSE transport on custom port
uv run mcp... | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/servers/simple-prompt/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/servers/simple-prompt/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model ... |
# MCP Simple Resource
A simple MCP server that exposes sample text files as resources.
## Usage
Start the server using either stdio (default) or SSE transport:
```bash
# Using stdio transport (default)
uv run mcp-simple-resource
# Using SSE transport on custom port
uv run mcp-simple-resource --transport sse --port... | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/servers/simple-resource/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/servers/simple-resource/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Mo... |
A simple MCP server that exposes a website fetching tool.
## Usage
Start the server using either stdio (default) or SSE transport:
```bash
# Using stdio transport (default)
uv run mcp-simple-tool
# Using SSE transport on custom port
uv run mcp-simple-tool --transport sse --port 8000
```
The server exposes a tool n... | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/servers/simple-tool/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/servers/simple-tool/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Cont... |
<div align="center">
<img src="https://raw.githubusercontent.com/zml/zml.github.io/refs/heads/main/docs-assets/zml-banner.png" style="width:100%; height:120px;">
<a href="https://zml.ai">Website</a>
| <a href="#getting-started">Getting Started</a>
| <a href="https://docs.zml.ai">Documentation</a>
| <a href="h... | {
"source": "zml/zml",
"title": "README.md",
"url": "https://github.com/zml/zml/blob/master/README.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 7197
} |
### Greetings, Programs!
If you want to run our example models, please head over to
[Getting Started](./tutorials/getting_started.md).
Ready to write some code? Try starting with [your first model in ZML](./tutorials/write_first_model.md), or familiarize yourself with the high-level [ZML concepts](./learn/concepts.m... | {
"source": "zml/zml",
"title": "docs/README.md",
"url": "https://github.com/zml/zml/blob/master/docs/README.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 1091
} |
# Running Gated Huggingface Models with Token Authentication
Some models have restrictions and may require some sort of approval or agreement
process, which, by consequence, **requires token-authentication with Huggingface**.
The easiest way might be to use the `huggingface-cli login` command.
Alternatively, here is... | {
"source": "zml/zml",
"title": "docs/huggingface-access-token.md",
"url": "https://github.com/zml/zml/blob/master/docs/huggingface-access-token.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
... |
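Token authentication as described above ultimately means attaching a bearer token to each request; a minimal stdlib sketch (the `HF_TOKEN` env-var name and the URL are illustrative assumptions, not taken from this guide):

```python
import os
import urllib.request

def authed_request(url, token=None):
    # Fall back to the environment, as CLI tools commonly do (assumption).
    token = token or os.environ.get("HF_TOKEN", "")
    req = urllib.request.Request(url)
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

req = authed_request("https://huggingface.co/api/models", token="hf_example")
print(req.get_header("Authorization"))  # Bearer hf_example
```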
# Adding Weights Files
Our [first model](../tutorials/write_first_model.md) did not need any weights files.
We just created weights and biases at runtime.
But real-world models typically need weights files, and maybe some other
supporting files.
We recommend, for easy deployments, you upload those files. In many in... | {
"source": "zml/zml",
"title": "docs/howtos/add_weights.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/add_weights.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size... |
# Deploying Models on a Server
To run models on remote GPU/TPU machines, it is inconvenient to have to check
out your project’s repository and compile it on every target. Instead, you more
likely want to cross-compile right from your development machine, **for every**
supported target architecture and accelerator.
Se... | {
"source": "zml/zml",
"title": "docs/howtos/deploy_on_server.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/deploy_on_server.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
... |
# Containerize a Model
A convenient way of [deploying a model](../howtos/deploy_on_server.md) is packaging
it up in a Docker container. Thanks to bazel, this is really easy to do. You
just have to append a few lines to your model's `BUILD.bazel`. Here is how it's
done.
**Note:** This walkthrough will work with your i... | {
"source": "zml/zml",
"title": "docs/howtos/dockerize_models.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/dockerize_models.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
... |
# How to port PyTorch models to ZML?
## Requirements
We assume you already have a working ZML project,
and you can run it with a Bazel command like `bazel run //my_project:torch2zml`.
You can refer to [write your first model](../tutorials/write_first_model.md) to do so.
We also assume that you know enough Python to... | {
"source": "zml/zml",
"title": "docs/howtos/howto_torch2zml.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/howto_torch2zml.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"f... |
# Huggingface Token Authentication
Some models have restrictions and may require some sort of approval or
agreement process, which, by consequence, **requires token-authentication with
Huggingface**.
Here is how you can generate a **"read-only public repositories"** access token
to log into your account on Huggingfac... | {
"source": "zml/zml",
"title": "docs/howtos/huggingface_access_token.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/huggingface_access_token.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / ... |
# ZML Concepts
## Model lifecycle
ZML is an inference stack that helps run Machine Learning (ML) models, and
particularly Neural Networks (NNs).
The lifecycle of a model is implemented in the following steps:
1. Open the model file and read the shapes of the weights, but leave the
weights on the disk.
2. Usin... | {
"source": "zml/zml",
"title": "docs/learn/concepts.md",
"url": "https://github.com/zml/zml/blob/master/docs/learn/concepts.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 5795
... |
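Step 1 of the lifecycle above — reading the shapes of the weights while leaving the weight data on disk — can be illustrated with a toy Python sketch (ZML itself is implemented in Zig; the file layout here is a hypothetical minimal format, not ZML's):

```python
import os
import struct
import tempfile

# Hypothetical minimal weights file: an 8-byte header holding the
# (rows, cols) shape, followed by the raw float32 weight bytes.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
rows, cols = 1024, 768
with open(path, "wb") as f:
    f.write(struct.pack("<II", rows, cols))
    f.write(b"\x00" * (rows * cols * 4))

def read_shape(path):
    # Step 1: read only the header; the weight bytes stay on disk untouched.
    with open(path, "rb") as f:
        return struct.unpack("<II", f.read(8))

print(read_shape(path))  # (1024, 768)
```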
# ZML Style Guide
We prefer to keep it simple and adhere to the [Zig Style Guide](https://ziglang.org/documentation/0.13.0/#Style-Guide).
We use ZLS to auto-format code.
In addition, we try to adhere to the following house-rules:
### We favor:
```zig
const x: Foo = .{ .bar = 1 }
// over: const x = Foo{ .bar = 1}
... | {
"source": "zml/zml",
"title": "docs/misc/style_guide.md",
"url": "https://github.com/zml/zml/blob/master/docs/misc/style_guide.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 8... |
# Getting Started with ZML
In this tutorial, we will install `ZML` and run a few models locally.
## Prerequisites
First, let's checkout the ZML codebase. In a terminal, run:
```
git clone https://github.com/zml/zml.git
cd zml/
```
We use `bazel` to build ZML and its dependencies. We recommend downloading it
throug... | {
"source": "zml/zml",
"title": "docs/tutorials/getting_started.md",
"url": "https://github.com/zml/zml/blob/master/docs/tutorials/getting_started.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild"... |
# Simplifying Dimension Handling with Tagged Tensors
### Coming Soon...
See [ZML Concepts](../learn/concepts.md) for an introduction to Tensors and Shapes. | {
"source": "zml/zml",
"title": "docs/tutorials/working_with_tensors.md",
"url": "https://github.com/zml/zml/blob/master/docs/tutorials/working_with_tensors.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @b... |
# Writing your first model
**In this short guide, we will do the following:**
- clone ZML to work directly within the prepared example folder
- add Zig code to implement our model
- add some Bazel to integrate our code with ZML
- no weights files or anything external is required for this example
The reason we're doi... | {
"source": "zml/zml",
"title": "docs/tutorials/write_first_model.md",
"url": "https://github.com/zml/zml/blob/master/docs/tutorials/write_first_model.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbu... |
# Contributing to lume
We deeply appreciate your interest in contributing to lume! Whether you're reporting bugs, suggesting enhancements, improving docs, or submitting pull requests, your contributions help improve the project for everyone.
## Reporting Bugs
If you've encountered a bug in the project, we encourage ... | {
"source": "trycua/lume",
"title": "CONTRIBUTING.md",
"url": "https://github.com/trycua/lume/blob/main/CONTRIBUTING.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Si... |
<div align="center">
<h1>
<div class="image-wrapper" style="display: inline-block;">
<picture>
<source media="(prefers-color-scheme: dark)" alt="logo" height="150" srcset="img/logo_white.png" style="display: block; margin: auto;">
<source media="(prefers-color-scheme: light)" alt="logo" height="150" s... | {
"source": "trycua/lume",
"title": "README.md",
"url": "https://github.com/trycua/lume/blob/main/README.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",
"... |
## API Reference
<details open>
<summary><strong>Create VM</strong> - POST /vms</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"name": "lume_vm",
"os": "macOS",
"cpu": 2,
"memory": "4GB",
"diskSize":... | {
"source": "trycua/lume",
"title": "docs/API-Reference.md",
"url": "https://github.com/trycua/lume/blob/main/docs/API-Reference.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively... |
# Development Guide
This guide will help you set up your development environment and understand the process for contributing code to lume.
## Environment Setup
Lume development requires:
- Swift 6 or higher
- Xcode 15 or higher
- macOS Sequoia 15.2 or higher
- (Optional) VS Code with Swift extension
## Setting Up t... | {
"source": "trycua/lume",
"title": "docs/Development.md",
"url": "https://github.com/trycua/lume/blob/main/docs/Development.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on ... |
# FAQs
### Where are the VMs stored?
VMs are stored in `~/.lume`.
### How are images cached?
Images are cached in `~/.lume/cache`. When doing `lume pull <image>`, it will check if the image is already cached. If not, it will download the image and cache it, removing any older versions.
### Are VM disks taking up a... | {
"source": "trycua/lume",
"title": "docs/FAQ.md",
"url": "https://github.com/trycua/lume/blob/main/docs/FAQ.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",... |
# Local File Organizer: AI File Management Run Entirely on Your Device, Privacy Assured
Tired of digital clutter? Overwhelmed by disorganized files scattered across your computer? Let AI do the heavy lifting! The Local File Organizer is your personal organizing assistant, using cutting-edge AI to bring order to your f... | {
"source": "QiuYannnn/Local-File-Organizer",
"title": "README.md",
"url": "https://github.com/QiuYannnn/Local-File-Organizer/blob/main/README.md",
"date": "2024-09-21T07:55:12",
"stars": 2062,
"description": "An AI-powered file management tool that ensures privacy by organizing local texts, images. Using L... |
There are so many reasons couples choose to enjoy a romantic getaway or honeymoon in Chicago, Illinois. The (often captioned) Windy City is a Midwest hub for food, culture, and things to do. Best of all, it's inclusive to everybody. No wonder Chicago's been voted the best big city in America.
Lovers come for a fairyta... | {
"source": "QiuYannnn/Local-File-Organizer",
"title": "sample_data/text_files/ccc.md",
"url": "https://github.com/QiuYannnn/Local-File-Organizer/blob/main/sample_data/text_files/ccc.md",
"date": "2024-09-21T07:55:12",
"stars": 2062,
"description": "An AI-powered file management tool that ensures privacy by... |
# Flux Gym
Dead simple web UI for training FLUX LoRA **with LOW VRAM (12GB/16GB/20GB) support.**
- **Frontend:** The WebUI forked from [AI-Toolkit](https://github.com/ostris/ai-toolkit) (Gradio UI created by https://x.com/multimodalart)
- **Backend:** The Training script powered by [Kohya Scripts](https://github.com/... | {
"source": "cocktailpeanut/fluxgym",
"title": "README.md",
"url": "https://github.com/cocktailpeanut/fluxgym/blob/main/README.md",
"date": "2024-09-05T11:25:42",
"stars": 2047,
"description": "Dead simple FLUX LoRA training UI with LOW VRAM support",
"file_size": 8279
} |
<div align="center">
<img src="./docs/images/github-cover-new.png" alt="RAG Web UI Demo">
<br />
<p>
<strong>Knowledge Base Management Based on RAG (Retrieval-Augmented Generation)</strong>
</p>
<p>
<a href="https://github.com/rag-web-ui/rag-web-ui/blob/main/LICENSE"><img src="https://img.shields.io/... | {
"source": "rag-web-ui/rag-web-ui",
"title": "README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_siz... |
<div align="center">
<img src="./docs/images/github-cover-new.png" alt="RAG Web UI">
<br />
<p>
<strong>Knowledge Base Management Based on RAG (Retrieval-Augmented Generation)</strong>
</p>
<p>
<a href="https://github.com/rag-web-ui/rag-web-ui/blob/main/LICENSE"><img src="https://img.shields.io/github/license/rag-web-ui/rag-... | {
"source": "rag-web-ui/rag-web-ui",
"title": "README.zh-CN.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/README.zh-CN.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",... |
# Troubleshooting Guide
## Database Issues
### 1. Database Tables Not Found
If you encounter "table not found" or similar database errors after starting your services, follow these steps:
#### Check if MySQL is Ready
```bash
# Check MySQL container status
docker ps | grep db
# Check MySQL logs
docker logs ragwebu... | {
"source": "rag-web-ui/rag-web-ui",
"title": "docs/troubleshooting.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/docs/troubleshooting.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generatio... |
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
## Getting Started
First, run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open [http://localhost:3000](http://localhost:3... | {
"source": "rag-web-ui/rag-web-ui",
"title": "frontend/README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/frontend/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technol... |
---
name: Bug Report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
## 🚨 Before Creating an Issue
Please check our [Troubleshooting Guide](docs/troubleshooting.md) first. Many common issues can be resolved using the guide.
## Description
A clear and concise description of the is... | {
"source": "rag-web-ui/rag-web-ui",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Ret... |
<p align="center">
<a href="https://trychroma.com"><img src="https://user-images.githubusercontent.com/891664/227103090-6624bf7d-9524-4e05-9d2c-c28d5d451481.png" alt="Chroma logo"></a>
</p>
<p align="center">
<b>Chroma - the open-source embedding database</b>. <br />
The fastest way to build Python or JavaSc... | {
"source": "rag-web-ui/rag-web-ui",
"title": "backend/uploads/README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/backend/uploads/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Gener... |
# 🚀 Build Your Own DeepSeek Knowledge Base in Ten Minutes! A Fully Open-Source, Offline Deployment Guide
## 💡 Preface
Still worried about the high cost of a ChatGPT Plus subscription? Concerned about confidential company documents being uploaded to the cloud? This tutorial walks you through building a local intelligent knowledge base system based on RAG (Retrieval-Augmented Generation) using fully open-source tools. It runs entirely offline and protects your privacy, keeping your documents' secrets safe!
## 🛠️ Environment Setup
Before starting, make sure your system meets the following requirements:
- OS: Linux/macOS/Windows
- RAM: at least 8GB (16GB+ recommended)
- Disk space: at least 20GB free
- Installed:
- [D... | {
"source": "rag-web-ui/rag-web-ui",
"title": "docs/blog/deploy-local.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/docs/blog/deploy-local.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Gener... |
# A Beginner's Guide: Building a Knowledge Base QA System with RAG (Retrieval-Augmented Generation)
## Foreword
As this year draws to a close: this project was started in January 2025, in spare time, as an education-oriented project.
Its aim is to help more people understand and get started with RAG and knowledge bases by walking through a complete end-to-end RAG knowledge base project,
using tools everyone already has at hand, without depending on other heavy infrastructure, and drawing on experience from several of my own RAG projects.
So in this project you will not yet see many RAG details, such as multi-path recall, HyDE, or query rewriting (that said, I have seen capable community members already working on implementing these).
Project flow diagram:
```mermaid
graph TB
%%... | {
"source": "rag-web-ui/rag-web-ui",
"title": "docs/tutorial/README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/docs/tutorial/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generatio... |
<div align='center'>
<h1>Emu3: Next-Token Prediction is All You Need</h1>
<h3></h3>
[Emu3 Team, BAAI](https://www.baai.ac.cn/english.html)
| [Project Page](https://emu.baai.ac.cn) | [Paper](https://arxiv.org/pdf/2409.18869) | [🤗HF Models](https://huggingface.co/collections/BAAI/emu3-66f4e64f70850ff358a2e60f) | [Mo... | {
"source": "baaivision/Emu3",
"title": "README.md",
"url": "https://github.com/baaivision/Emu3/blob/main/README.md",
"date": "2024-09-26T11:03:22",
"stars": 2010,
"description": "Next-Token Prediction is All You Need",
"file_size": 11326
} |
# Smol Models 🤏
Welcome to Smol Models, a family of efficient and lightweight AI models from Hugging Face. Our mission is to create powerful yet compact models, for text and vision, that can run effectively on-device while maintaining strong performance.
**News 📰**
- **Introducing [FineMath](https://huggingface.co/... | {
"source": "huggingface/smollm",
"title": "README.md",
"url": "https://github.com/huggingface/smollm/blob/main/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 3599
} |
# SmolLM2

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to r... | {
"source": "huggingface/smollm",
"title": "text/README.md",
"url": "https://github.com/huggingface/smollm/blob/main/text/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 4787
} |
# Tools for local inference
Here you can find tools and demos for running SmolLM2 and SmolVLM locally, leveraging libraries such as llama.cpp, MLX, MLC and Transformers.js. | {
"source": "huggingface/smollm",
"title": "tools/README.md",
"url": "https://github.com/huggingface/smollm/blob/main/tools/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 172
} |
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM.png" width="800" height="auto" alt="Image description">
# SmolVLM
[SmolVLM](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct) is a compact open multimodal model that accepts arbitrary sequences of image and text inputs... | {
"source": "huggingface/smollm",
"title": "vision/README.md",
"url": "https://github.com/huggingface/smollm/blob/main/vision/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 5210
} |