---
license: mit
library_name: peft
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---

**Website**: [FireAct Agent](https://fireact-agent.github.io)

# **FireAct Llama-2/CodeLlama**
FireAct Llama/CodeLlama is a collection of fine-tuned generative text models for performing ReAct with external search tools. Links to the individual models can be found in the Index section.

## Foundation Model Details
*Note: the foundation models, Llama-2 and CodeLlama, are developed by Meta. Please also read the guidance and licenses on their pages, [Llama-2](https://huggingface.co/meta-llama) and [CodeLlama](https://huggingface.co/codellama), before using FireAct models.*

**Model Developers** System 2 Research, Cambridge LTL, Monash University, Princeton PLI.

**Variations** FireAct comprises a Llama-2-7B full fine-tuned model and Llama-2-[7B,13B] and CodeLlama-[7B,13B,34B] LoRA fine-tuned models. All released models are fine-tuned on multi-task (HotpotQA/StrategyQA/MMLU) and multi-type (ReAct/CoT/Reflexion) data.

**Input** Models input text only.

**Output** Models generate text only.

## Index
**Full Fine-tuned Models**

FireAct Llama-2:
- [fireact_llama_2_7b](https://huggingface.co/forestai/fireact_llama_2_7b)

**LoRA Fine-tuned Models** (a loading sketch follows the lists below)

FireAct Llama-2:
- [fireact_llama_2_7b_lora](https://huggingface.co/forestai/fireact_llama_2_7b_lora)
- [fireact_llama_2_13b_lora](https://huggingface.co/forestai/fireact_llama_2_13b_lora)

FireAct CodeLlama:
- [fireact_codellama_7b_lora](https://huggingface.co/forestai/fireact_codellama_7b_lora)
- [fireact_codellama_13b_lora](https://huggingface.co/forestai/fireact_codellama_13b_lora)
- [fireact_codellama_34b_lora](https://huggingface.co/forestai/fireact_codellama_34b_lora)
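
The LoRA adapters above attach to their Llama-2/CodeLlama base checkpoints via `peft`. Below is a minimal loading sketch; the base-model ID, dtype, device placement, and prompt are illustrative assumptions, and the actual ReAct prompt format and search-tool integration are defined by the FireAct codebase, not shown here.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base checkpoint for the 7B LoRA adapter (not stated in this card).
base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "forestai/fireact_llama_2_7b_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # illustrative; fits the 7B model on a single GPU
    device_map="auto",
)

# Attach the FireAct LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Illustrative ReAct-style question; the real prompt template comes from FireAct.
prompt = "Question: Which country hosted the 1992 Summer Olympics?\nThought:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```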

## LoRA Training procedure

The following `bitsandbytes` quantization config was used during training (reconstructed as a `BitsAndBytesConfig` in the sketch after this list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
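
For reference, a sketch of the same settings expressed in code, assuming `transformers` with `bitsandbytes` installed; the 4-bit fields are library defaults that remain inactive here because 8-bit loading is enabled:

```python
from transformers import BitsAndBytesConfig

# Reconstruction of the training-time quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",         # inactive: the 8-bit path is used
    bnb_4bit_use_double_quant=False,   # inactive: the 8-bit path is used
    bnb_4bit_compute_dtype="float32",  # inactive: the 8-bit path is used
)
# bnb_config can then be passed to
# AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config).
```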

### Framework versions

- PEFT 0.4.0