Initial proposal of a Model card

#1
Files changed (1)
  1. README.md +26 -0
README.md ADDED
@@ -0,0 +1,26 @@
+ ---
+ language:
+ - en
+ base_model:
+ - codellama/CodeLlama-7b-hf
+ pipeline_tag: text-generation
+ ---
+
+ # Task-Aware MoE LoRA for Universal Information Extraction
+
+ This is a novel Universal Information Extraction model. Building on the GoLLIE model (https://huggingface.co/HiTZ/GoLLIE-7B), it replaces the single LoRA adapter with a Mixture of LoRA Experts governed by a task-aware router.
+
+ ### Model description
+ - **Developed by:** Lubingzhi Guo
+ - **Institution:** University of Glasgow
+ - **Model type:** Text generation
+ - **Languages:** English
+ - **License:** LLaMA2 License for the base and merged model
+ - **Fine-tuned from model:** Code-LLaMA2 7B (codellama/CodeLlama-7b-hf)
+
+ ### Citation
+ If you use this model, please cite the following paper:
+
+ > L. Guo, J. Sanz-Cruzado, R. McCreadie. Selecting the Right Experts: Generalizing Information Extraction for Unseen Scenarios via Task-Aware Expert Weighting. 28th European Conference on Artificial Intelligence (ECAI 2025), Bologna, Italy, October 2025, pp. 4161-4168. DOI: https://doi.org/10.3233/FAIA251308
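The task-aware expert weighting idea described in this card can be sketched in a few lines: each expert is a low-rank (LoRA-style) update, and a router turns a task embedding into mixing weights over the experts. This is a minimal illustrative sketch, not the released implementation; the array shapes, the `task_embedding` input, and the router parameterization are assumptions made for exposition.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TaskAwareMoELoRA:
    """Illustrative mixture of LoRA experts whose mixing weights are
    computed from a task embedding (hypothetical shapes and init)."""

    def __init__(self, d_in, d_out, rank, n_experts, d_task, seed=0):
        rng = np.random.default_rng(seed)
        # One low-rank update (B @ A) per expert.
        self.A = rng.normal(0.0, 0.02, (n_experts, rank, d_in))
        self.B = np.zeros((n_experts, d_out, rank))  # standard LoRA init: B = 0
        # Task-aware router: maps the task embedding to one logit per expert.
        self.router = rng.normal(0.0, 0.02, (n_experts, d_task))

    def forward(self, x, task_embedding):
        # Mixing weights conditioned on the task, not on individual tokens.
        w = softmax(self.router @ task_embedding)  # shape: (n_experts,)
        # Weighted sum of the experts' low-rank updates applied to x.
        delta = sum(w[e] * (self.B[e] @ (self.A[e] @ x)) for e in range(len(w)))
        return delta, w
```

In this sketch the router sees only the task embedding, so all tokens of a given task share one expert weighting, which is what lets the mixture adapt to unseen scenarios at the task level rather than per token.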