Improve model card: Add pipeline tag, library name, links, and sample usage
#1
opened by nielsr (HF Staff)
README.md CHANGED
@@ -1,18 +1,40 @@
 ---
-license: cc-by-4.0
-datasets:
-- NingLab/MMECInstruct
 base_model:
 - meta-llama/Llama-2-13b-chat-hf
+datasets:
+- NingLab/MMECInstruct
+license: cc-by-4.0
+library_name: transformers
+pipeline_tag: image-text-to-text
 ---
 
 # CASLIE-L
 
-This
+This repository contains the models for "[Captions Speak Louder than Images: Generalizing Foundation Models for E-commerce from High-quality Multimodal Instruction Data](https://huggingface.co/papers/2410.17337)".
+
+**Project Page**: [https://ninglab.github.io/CASLIE/](https://ninglab.github.io/CASLIE/)
+**Code Repository**: [https://github.com/ninglab/CASLIE](https://github.com/ninglab/CASLIE)
+
+## Introduction
+Using multimodal data to drive breakthroughs in e-commerce applications through Multimodal Foundation Models (MFMs) is gaining increasing attention. This work introduces [MMECInstruct](https://huggingface.co/datasets/NingLab/MMECInstruct), the first large-scale, high-quality multimodal instruction dataset for e-commerce. We also develop CASLIE, a simple, lightweight, yet effective framework for integrating multimodal information for e-commerce. Leveraging MMECInstruct, we fine-tune a series of e-commerce MFMs within CASLIE, denoted as CASLIE models.
 
 ## CASLIE Models
 The CASLIE-L model is instruction-tuned from the large base model [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf).
 
+## Sample Usage (Modality-unified Inference)
+To run inference with the CASLIE models, use the following example, taken directly from the [official GitHub repository](https://github.com/ninglab/CASLIE#modality-unified-inference).
+
+`$model_path` is the path of the instruction-tuned model.
+
+`$task` specifies the task to be tested.
+
+`$output_path` specifies the path where you want to save the inference output.
+
+Example:
+```
+python inference.py --model_path NingLab/CASLIE-M --task answerability_prediction --output_path ap.json
+```
+
 ## Citation
 ```bibtex
 @article{ling2024captions,
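Beyond the CLI example above, the new `library_name: transformers` tag implies the checkpoint can also be loaded directly with the Transformers library. The sketch below is a hedged illustration, not part of this PR or the official CASLIE code: it assumes the checkpoint behaves as a plain text-only causal LM (it is fine-tuned from Llama-2-13b-chat, and CASLIE's caption-centric design suggests images are converted to captions upstream of the LLM), and both the repo id `NingLab/CASLIE-L` and the prompt are placeholders for illustration.

```python
# Minimal sketch, assuming the checkpoint is a standard text-only
# causal LM (fine-tuned from Llama-2-13b-chat). This is NOT the
# official CASLIE pipeline; see the GitHub repository for the real
# instruction templates and the caption-synthesis step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NingLab/CASLIE-L"  # assumed repo id for this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

# Placeholder prompt: in CASLIE, a synthesized image caption would
# normally be folded into the product context before this step.
prompt = (
    "Below is a product and a question about it. Predict whether the "
    "question is answerable from the product information.\n\n"
    "Product: Stainless-steel water bottle, 750 ml, vacuum insulated.\n"
    "Question: Does it keep drinks cold?\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

For actual use, prefer the repository's `inference.py`, which applies the task-specific instruction templates and the full modality-unified inference flow.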