yi-1.5-34b-ov

yi-1.5-34b-ov is an OpenVINO int4-quantized version of 01-ai's Yi-1.5 34B Chat, providing a fast inference implementation optimized for AI PCs with Intel GPUs, CPUs, and NPUs.

yi-1.5-34b-ov is a leading general-purpose foundation model.

This is a very high-quality model, and one of the largest that runs effectively on a laptop (especially one with 64 GB of RAM).
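As a usage sketch, the snippet below shows one way to run this model locally with the OpenVINO GenAI `LLMPipeline` API. The local directory name, the prompt, and the `pick_device` helper (which mirrors the GPU/NPU/CPU targets mentioned above) are illustrative assumptions, not part of this repo; download the model files first (e.g. with `huggingface-cli download llmware/yi-1.5-34b-ov`).

```python
import os

def pick_device(available):
    """Prefer GPU, then NPU, then CPU -- the Intel AI PC targets noted above."""
    for d in ("GPU", "NPU", "CPU"):
        if d in available:
            return d
    return "CPU"

MODEL_DIR = "yi-1.5-34b-ov"  # assumed local path to the downloaded model files

# Only attempt inference once the model directory actually exists locally.
if os.path.isdir(MODEL_DIR):
    import openvino as ov
    import openvino_genai as ov_genai

    device = pick_device(ov.Core().available_devices)
    pipe = ov_genai.LLMPipeline(MODEL_DIR, device)

    config = ov_genai.GenerationConfig()
    config.max_new_tokens = 256

    print(pipe.generate("What are common uses of a 34B chat model?", config))
```

On machines without a discrete or integrated Intel GPU, passing `"CPU"` as the device still works; int4 weights keep the memory footprint low enough for a 64 GB laptop.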

Model Description

  • Developed by: 01-ai
  • Quantized by: llmware
  • Model type: yi-34b-v1.5
  • Parameters: 34 billion
  • Model Parent: 01-ai/yi-1.5-34b-chat
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: General use cases
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

  • llmware on GitHub
  • llmware on Hugging Face
  • llmware website
