---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- text-generation
- conversational
- function-calling
- text-generation-inference
- region:us
model_name: MaziyarPanahi/firefunction-v2-GGUF
base_model: fireworks-ai/firefunction-v2
inference: false
model_creator: fireworks-ai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
license: llama3
---

# [MaziyarPanahi/firefunction-v2-GGUF](https://huggingface.co/MaziyarPanahi/firefunction-v2-GGUF)

- Model creator: [fireworks-ai](https://huggingface.co/fireworks-ai)
- Original model: [fireworks-ai/firefunction-v2](https://huggingface.co/fireworks-ai/firefunction-v2)
## Description

[MaziyarPanahi/firefunction-v2-GGUF](https://huggingface.co/MaziyarPanahi/firefunction-v2-GGUF) contains GGUF format model files for [fireworks-ai/firefunction-v2](https://huggingface.co/fireworks-ai/firefunction-v2).
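
Individual quantized files can be fetched with `huggingface_hub` rather than cloning the whole repository. Below is a minimal sketch; the filename `firefunction-v2.Q4_K_M.gguf` is an illustrative guess at the naming scheme, so check the repository's file list for the actual names and quantization levels.

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the repo.
# NOTE: the filename is a hypothetical example; browse the repo's
# "Files and versions" tab for the real quantization filenames.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/firefunction-v2-GGUF",
    filename="firefunction-v2.Q4_K_M.gguf",
)
print(model_path)  # local path to the downloaded file
```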
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (see the usage sketch after this list).
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of this writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
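
As a concrete starting point, here is a minimal sketch of loading one of these GGUF files with llama-cpp-python. The filename is the illustrative guess from the download example above, and the context size and GPU layer count are assumptions to tune for your hardware.

```python
from llama_cpp import Llama

# Load a local GGUF file. The path/filename is an assumption; use
# whichever quantization you actually downloaded from this repo.
llm = Llama(
    model_path="firefunction-v2.Q4_K_M.gguf",
    n_ctx=8192,       # assumed context window; adjust as needed
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# Simple chat-style generation.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does GGUF stand for?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```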

## Special thanks

Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Original README

# FireFunction V2: Fireworks Function Calling Model

[**Try on Fireworks**](https://fireworks.ai/models/fireworks/firefunction-v2) | [**API Docs**](https://readme.fireworks.ai/docs/function-calling) | [**Demo App**](https://functional-chat.vercel.app/) | [**Discord**](https://discord.gg/mMqQxvFD9A)

<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/nJNtxLzWswBDKK1iOZblb.png" alt="firefunction" width="400"/>
FireFunction is a state-of-the-art function-calling model with a commercially viable license. View detailed info in our [announcement blog](https://fireworks.ai/blog/firefunction-v2-launch-post). Key info and highlights:
**Comparison with other models:**

- Competitive with GPT-4o at function calling, scoring 0.81 vs 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3's conversation and instruction-following capabilities, scoring 0.84 vs Llama 3's 0.89 on MT-Bench
- Significant quality improvements over FireFunction v1 across a broad range of metrics
**General info:**

- Successor of the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v1) model
- Support for parallel function calling (unlike FireFunction v1) and good instruction following
- Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform at < 10% of the cost of GPT-4o and 2x the speed (see the usage sketch below)
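
Since the model is built for function calling and is served behind Fireworks' OpenAI-compatible API (see the API docs linked above), a minimal usage sketch might look like the following. The base URL, the model identifier, and the `get_weather` tool are illustrative assumptions, not confirmed values; consult the API docs for the authoritative details.

```python
from openai import OpenAI

# NOTE: base_url and api_key handling are assumptions based on
# Fireworks' OpenAI-compatible API; verify against the API docs.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

# A hypothetical tool definition, in the standard OpenAI tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decides to call a tool, the call(s) appear here.
print(response.choices[0].message.tool_calls)
```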