This model, named `Evolutionary Multi-Modal Model`, is a multimodal transformer designed to handle a variety of tasks, including vision and audio processing. It is built on top of the `adapter-transformers` and `transformers` libraries and is intended to be a versatile base model for both direct use and fine-tuning.

- **Developed by:** Independent researcher
- **Funded by:** Self-funded
- **Shared by:** Independent researcher
- **Model type:** Evolutionary Multi-Modal Model
- **Language(s) (NLP):** English, Chinese
- **License:** Apache-2.0
- **Finetuned from model:** None
```

### Downstream Use

The model can be fine-tuned for specific tasks such as visual question answering (VQA), image captioning, and audio recognition.
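The card does not yet include a concrete fine-tuning recipe. As a minimal, framework-free sketch of the idea only (every name and number below is an illustrative assumption, not this model's actual API), one common pattern is to keep a frozen backbone fixed and train just a small task head on top of its features:

```python
# Toy stand-in for downstream fine-tuning: a frozen "backbone" plus a
# trainable linear task head. Fine-tuning the actual model would instead go
# through `transformers` / `adapter-transformers` with a real optimizer.

def backbone(x):
    # Frozen feature-extractor stand-in: a fixed nonlinear projection.
    return [2.0 * x, x * x]

def head(features, w):
    # Trainable linear task head over the frozen features.
    return w[0] * features[0] + w[1] * features[1]

def finetune(data, lr=0.01, steps=500):
    # Plain SGD on squared error, updating only the head weights w.
    w = [0.0, 0.0]
    for _ in range(steps):
        for x, y in data:
            f = backbone(x)
            err = head(f, w) - y
            w[0] -= lr * 2.0 * err * f[0]
            w[1] -= lr * 2.0 * err * f[1]
    return w

# Toy "downstream task": the target happens to equal the second feature.
train = [(x, x * x) for x in (0.5, 1.0, 1.5, 2.0)]
weights = finetune(train)
```

Training only a lightweight head (or, more generally, adapter modules as in `adapter-transformers`) keeps the shared backbone intact across tasks, which is what makes one base model reusable for VQA, captioning, and audio recognition.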

### Out-of-Scope Use