---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- merge
---

This quant was made for [infermatic.ai](https://infermatic.ai/)

Dynamic FP8 quant of [goliath-120b](https://huggingface.co/alpindale/goliath-120b), made with AutoFP8.
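A dynamic FP8 quant of this kind can be produced with an AutoFP8-style script. The snippet below is a sketch based on AutoFP8's documented usage, not the exact command used for this quant; the output directory name is an assumption, and the empty calibration list reflects that dynamic activation scaling needs no calibration data.

```python
# Hedged sketch of a dynamic FP8 quantization run with AutoFP8.
# Directory names are illustrative assumptions, not the actual paths used.
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "alpindale/goliath-120b"
quantized_model_dir = "goliath-120b-FP8-Dynamic"  # hypothetical output path

# "dynamic" activation scaling computes scales at runtime,
# so no calibration samples are required.
quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="dynamic",
)

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize([])  # empty calibration set: weights quantized, activations scaled dynamically
model.save_quantized(quantized_model_dir)
```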


# Goliath 120B

An auto-regressive causal LM created by merging two finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) models into one.

# Prompting Format

Both Vicuna and Alpaca will work, but because the initial and final layers belong primarily to Xwin, I expect Vicuna to work best.
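As a quick reference, a single-turn Vicuna-style prompt can be assembled like this. The template below follows the common Vicuna v1.1 convention (system preamble, then `USER:`/`ASSISTANT:` turns); it is an illustrative assumption, since the card does not spell out the exact template.

```python
# Sketch of the common Vicuna v1.1 single-turn template (assumed, not
# specified by this card).
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def vicuna_prompt(user_message: str, system: str = DEFAULT_SYSTEM) -> str:
    """Build a single-turn Vicuna-style prompt string."""
    return f"{system} USER: {user_message} ASSISTANT:"

print(vicuna_prompt("Summarize the plot of Moby-Dick in one sentence."))
```

The model's completion is then generated after the trailing `ASSISTANT:` marker; for multi-turn use, previous turns are appended in the same `USER:` / `ASSISTANT:` alternation.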