Stable Diffusion Pipeline Collection by hikamin77 Dec 21, 2023 - gsdf/EasyNegative Viewer • Updated Feb 12, 2023 • 3 • 32.9k • 1.17k stablediffusionapi/anything-v5 Text-to-Image • Updated Jan 20, 2025 • 4.87k • 200
Papers Collection by llm-guru Dec 21, 2023 - LLM in a flash: Efficient Large Language Model Inference with Limited Memory Paper • 2312.11514 • Published Dec 12, 2023 • 263
Open AI alternatives Collection by Shyam2056 Dec 21, 2023 - mistralai/Mixtral-8x7B-Instruct-v0.1 47B • Updated Jul 24, 2025 • 439k • 4.65k
Papers Collection by dokkey Dec 21, 2023 - LLM in a flash: Efficient Large Language Model Inference with Limited Memory Paper • 2312.11514 • Published Dec 12, 2023 • 263
Stable Diffusion Models Collection by OneEye28 Dec 21, 2023 - stabilityai/stable-diffusion-xl-base-1.0 Text-to-Image • Updated Oct 30, 2023 • 1.96M • 7.59k stabilityai/stable-diffusion-xl-refiner-1.0 Image-to-Image • Updated Sep 25, 2023 • 256k • 2.03k
text models Collection by jealk Dec 21, 2023 - mhenrichsen/context-aware-splitter-1b Text Generation • 1B • Updated Sep 19, 2023 • 8 • 5
Papers Collection by SalonbusAI Dec 21, 2023 - LLM in a flash: Efficient Large Language Model Inference with Limited Memory Paper • 2312.11514 • Published Dec 12, 2023 • 263
LLM for Apple Silicon Mac Collection by riccardomusmeci Jan 4, 2024 - mlx-community/OpenHermes-2.5-Mistral-7B Text Generation • Updated Feb 10, 2024 • 78 • 6 mlx-community/e5-mistral-7b-instruct-mlx Feature Extraction • Updated Jan 5, 2024 • 199 • 10
Speed Collection by zucco Dec 21, 2023 - LLM in a flash: Efficient Large Language Model Inference with Limited Memory Paper • 2312.11514 • Published Dec 12, 2023 • 263 PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU Paper • 2312.12456 • Published Dec 16, 2023 • 45