Anisha Bhatnagar committed
Commit 125208c · 1 Parent(s): daeb1c8

removed image

Files changed (1)
app.py +2 -2
app.py CHANGED
@@ -110,8 +110,8 @@ print ('Step2: feed-forward process')
 #title = "Hate-LLaMA - An Instruction-tuned Audio-Visual Language Model for Hate Content Detection"
 
 description = """
-<h1 align="center"> Hate-LLaMA <img src="/file=hate_llama_icon.png", border="0" style="margin: 0 auto; height: 200px;" /></h1>
-<h4 align="center"> An Audio-Visual Language Model for Hate Content Detection </h4>
+<h1 align="center"> Hate-LLaMA </h1>
+<h3 align="center"> An Audio-Visual Language Model for Hate Content Detection </h3>
 
 Hate-LLaMA , is a multi-modal framework, designed to detect hate in videos and classify them as HATE or NON HATE. Hate-LLaMA finetunes Video-LLaMA (which uses the LLaMA-7b-chat model as backbone). The model is able to analyse both the audio and visual content to perform the classification task. After uploading a video and clicking submit, the model outputs a simple statement identifying if the video has hate or not.
 
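For context, here is a minimal sketch of how a `description` string like the one edited above is typically wired into a Gradio demo. The surrounding app.py code is not shown in this diff, so `classify_video` is a hypothetical stand-in for Hate-LLaMA's actual inference pipeline, and the interface wiring is an assumption about the app's structure.

```python
import gradio as gr

# Hypothetical stand-in for Hate-LLaMA's audio-visual inference pipeline;
# the real model call lives elsewhere in app.py and is not part of this diff.
def classify_video(video_path: str) -> str:
    # ... run the finetuned Video-LLaMA model on the uploaded video ...
    return "The video is classified as NON HATE."

# The string edited in this commit: the <img> tag was dropped and the
# subtitle changed from <h4> to <h3>.
description = """
<h1 align="center"> Hate-LLaMA </h1>
<h3 align="center"> An Audio-Visual Language Model for Hate Content Detection </h3>

Hate-LLaMA is a multi-modal framework designed to detect hate in videos and classify them as HATE or NON HATE.
"""

demo = gr.Interface(
    fn=classify_video,
    inputs=gr.Video(label="Upload a video"),
    outputs=gr.Textbox(label="Classification"),
    description=description,  # Gradio renders HTML/Markdown in this field
)

demo.launch()
```

Dropping the `<img>` tag from the `description` removes the dependency on the `/file=hate_llama_icon.png` asset, so the header renders correctly even when the icon file is absent from the Space.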