---
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
datasets:
- NingLab/MMECInstruct
license: cc-by-4.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# CASLIE-M

This repository contains the CASLIE-M model presented in the paper [Captions Speak Louder than Images: Generalizing Foundation Models for E-commerce from High-quality Multimodal Instruction Data](https://huggingface.co/papers/2410.17337).

Project page: https://ninglab.github.io/CASLIE/
Code: https://github.com/ninglab/CASLIE

## CASLIE Models
The CASLIE-M model is instruction-tuned from the medium-sized base model [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).

## Sample Usage

To conduct inference, run `python inference.py --model_path $model_path --task $task --output_path $output_path`, where:

- `$model_path` is the path of the instruction-tuned model.
- `$task` specifies the task to be tested.
- `$output_path` specifies the path where the inference output will be saved.

Example:
```bash
python inference.py --model_path NingLab/CASLIE-M --task answerability_prediction --output_path ap.json
```
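If you prefer to load the checkpoint directly with 🤗 Transformers instead of the provided script, a minimal sketch is shown below. It assumes the checkpoint follows the standard Mistral causal-LM layout; the prompt string here is purely illustrative, and the exact instruction template used during tuning should be taken from the project repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NingLab/CASLIE-M"

# Load the tokenizer and model (weights are downloaded on first use).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only -- see the project repository for the
# official instruction format used for each e-commerce task.
prompt = "Is the following product question answerable from the listing? ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this bypasses any task-specific pre- and post-processing performed by `inference.py`, so for benchmark evaluation the provided script remains the recommended path.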

## Citation
```bibtex
@article{ling2024captions,
    title={Captions Speak Louder than Images (CASLIE): Generalizing Foundation Models for E-commerce from High-quality Multimodal Instruction Data},
    author={Ling, Xinyi and Peng, Bo and Du, Hanwen and Zhu, Zhihui and Ning, Xia},
    journal={arXiv preprint arXiv:2410.17337},
    year={2024}
}
```