---
license: other
license_name: closed-data
license_link: LICENSE
task_categories:
- image-classification
language:
- en
tags:
- violence
- non-violence
- University-Assignment
size_categories:
- n<1K
---
# Dataset Card for Violence Detection Dataset
This dataset was created in February 2026 for the Machine Learning Data Challenge Assignment from the course Unboxing the Algorithm at Erasmus University Rotterdam. The goal of the assignment is to select a societally relevant problem; we therefore decided to train an ML model to identify violent imagery, with the aim that individuals involved in content moderation would be less exposed to this kind of psychologically harmful content.
## Dataset Details
The dataset contains images that were used in a machine learning algorithm for binary classification between "violent" and "non-violent" classes. For an in-depth description of the dataset, see below.
### Dataset Description
Our final dataset consists of four sources:
1) Our first source is the database created by Aktı et al. (2019a) for their paper Vision-based Fight Detection from Surveillance Cameras. Their database is accessible on GitHub (Aktı, 2019b) and consists of a collection of CCTV videos already separated into two folders, "fight" and "nofight". We transformed a section of the videos from both folders into image fragments, 2000 images in total, using VideoToFrames (n.d.). In terms of data reuse, the research paper explicitly states that the database has been made openly available with the goal of reuse, provided that proper referencing is made.
2) Our second source is the GitHub database from ChinaZhangPeng (2023), containing 2421 images of fights. Unfortunately, the documentation on GitHub makes no mention of a license or reuse policy for the dataset, nor of the research paper for which this database was used. We attempted to contact the author, but they have not added contact details to their GitHub profile. Since the repository has been made openly available, and we exhausted our options in attempting to give the author proper recognition, we proceed on the assumption that reuse of this database is permitted.
3) Our third source provides the non-violent images: affectionate imagery of individuals kissing, clapping, hugging, and sitting, drawn from a set of 19,200 images in different categories (Olafenwa, 2020), from which we selected a fraction for our model training. This dataset originates from the paper by He et al. (2015), for which database reuse is allowed.
4) Lastly, we simulated fight and no-fight scenarios amongst ourselves, with the intention of demonstrating an alternative way to train a violence detection algorithm without the data curators being exposed to violent imagery themselves. However, we did not rely solely on this method because of the bias it would introduce: the model would likely pick up only on characteristics of the environments where the simulated fights occurred. Moreover, the lack of diversity from having only four individuals enact the scenarios would bias who the algorithm identifies. Lastly, we were concerned that fight simulations would not capture real-world scenarios, which would defeat the purpose of the algorithm. For these reasons we decided to incorporate the other database sources.
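The card states that sections of the CCTV videos from source 1 were converted into roughly 2000 still images, but the sampling scheme is not documented. As a purely hypothetical illustration (the function name and the even-spacing strategy are our own assumptions, not the curators' method), frame indices could be chosen evenly across each clip:

```python
def frame_indices(total_frames: int, n_samples: int) -> list[int]:
    """Evenly spaced frame indices to sample from a video clip.

    total_frames -- number of frames in the source clip
    n_samples    -- how many still images to extract from it
    """
    if n_samples >= total_frames:
        # clip is shorter than the requested sample count: take every frame
        return list(range(total_frames))
    step = total_frames / n_samples
    # round each sample position down to a valid frame index
    return [int(i * step) for i in range(n_samples)]
```

For example, `frame_indices(100, 4)` selects frames 0, 25, 50, and 75, spreading the extracted images across the whole clip rather than clustering them at the start.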
- **Curated by:** Agustin Medina
- **Language(s) (NLP):** English
- **License:** Closed access, no reuse possible
### Dataset Sources
- **Repository:** Google Drive: https://drive.google.com/drive/folders/1OIMzSrGy1sGR96rX9M5n6q7rVT_4uPVa?usp=sharing
- **Paper:** Accessible within the course Assignment page on Canvas: https://canvas.eur.nl/courses/51870/assignments/261437
## Uses
This dataset is intended to be used exclusively for the Machine Learning Data Challenge Assignment from the course Unboxing the Algorithm at Erasmus University Rotterdam.
### Direct Use
No further use is allowed.
### Out-of-Scope Use
This dataset is only intended to be accessed by the group members and the instructor of the course. Reuse is not allowed because the data contains directly identifiable imagery and sensitive data relating to violent incidents.
## Dataset Structure
The dataset contains .png and .jpeg data files. The root is divided into "train" and "validation" folders, each of which is subsequently divided into "violent" and "non-violent" folders, following the binary classification that the algorithm performs.
### Curation Rationale
Our data collection decision was based on balancing real-life content of violent and non-violent imagery with simulated violent and non-violent situations. Our choice of data was based on: 1) ethical considerations, by simulating fights amongst ourselves to train the model; 2) psychological considerations, by not choosing a topic that would be too psychologically detrimental for us to inspect; 3) reducing bias to the best of our capabilities within the time constraints of this assignment; and 4) legal considerations, by reusing databases whose licensing terms allow us to make use of them.
### Source Data
Violent and non-violent imagery reused from GitHub databases, with some imagery generated by simulating situations ourselves.
#### Who are the source data producers?
Source 1) Aktı, Ş. N. [seymanurakti]. (2019b). Fight-detection-surv-dataset [Data set]. GitHub. https://github.com/seymanurakti/fight-detection-surv-dataset
Source 2) ChinaZhangPeng (2023) Violence-Image-Dataset (version 1) [Data set]. Github. https://github.com/ChinaZhangPeng/Violence-Image-Dataset/blob/master/README.md
Source 3) Olafenwa, M. [OlafenwaMoses]. (2020). Action-Net: A dataset of images for human actions (Version 1.0) [Data set]. GitHub. https://github.com/OlafenwaMoses/Action-Net
Source 4) Self-generated imagery from simulated situations
### Annotations
The following timeline is organized by calendar week of 2026.
Week 7: Extraction of fight and no-fight imagery from the following dataset: ChinaZhangPeng (2023). Violence-Image-Dataset (Version 1) [Data set]. GitHub. https://github.com/ChinaZhangPeng/Violence-Image-Dataset/blob/master/README.md
Week 8: Discovery of the following dataset: Aktı, Ş. N. [seymanurakti]. (2019b). Fight-detection-surv-dataset [Data set]. GitHub. https://github.com/seymanurakti/fight-detection-surv-dataset
Week 9: Image generation from CCTV video fragments using VideoToFrames (https://videotoframes.net/), applied to the following GitHub dataset: Aktı, Ş. N. [seymanurakti]. (2019b). Fight-detection-surv-dataset [Data set]. GitHub. https://github.com/seymanurakti/fight-detection-surv-dataset
Week 9: Extraction of non-violent imagery from Olafenwa, M. [OlafenwaMoses]. (2020). Action-Net: A dataset of images for human actions (Version 1.0) [Data set]. GitHub. https://github.com/OlafenwaMoses/Action-Net
Week 10: Writing of research paper
Week 11: Submission of research paper on the Canvas environment. Dataset archived on Google Drive.
#### Who are the annotators?
Agustin Medina (520066)
Phuc Le Nguyen (598023)
Maraliya Koch (782378)
Niklas Schulteis (642836)
#### Personal and Sensitive Information
The data is directly identifiable, since facial imagery of individuals is directly visible; no de-identification of the imagery has been performed. The data is also sensitive, since it partially contains real-life images of violent incidents, making both the victim and the perpetrator vulnerable if they are identified in the database. For that reason, we have decided to make the database closed access.
## Bias, Risks, and Limitations
Regarding biases, the biggest risk is that the algorithm might learn shortcuts in the input data instead of learning "violence" itself. In terms of context bias, since around a quarter of the input data consists of CCTV footage, the model may learn that context rather than actual fighting; to counter that, we added an equivalent number of non-violent CCTV frames. Moreover, the model can also latch onto backgrounds such as streets and low-light settings even when no fight is happening, since these are the backgrounds of the majority of pictures marked as "violence" in the input dataset. Although demographic diversity was taken into consideration, white men are relatively overrepresented, which may lead the model to unfairly associate that group with aggression. Finally, to counter negative-class bias, we included hard negatives such as sports contact, dancing, hugging, and kissing, in order to train the model to distinguish between close physical contact and physical violence.
### Recommendations
This database is not meant for reuse.
## Dataset Card Authors
Agustin Medina (520066)
## Dataset Card Contact
Agustin Medina (agustinmedina1999@gmail.com)