---
title: Toxic Tweets
emoji: 🤢
colorFrom: yellow
colorTo: orange
sdk: streamlit
app_file: app.py
pinned: false
---

# Toxic Tweets

Developing a language model to classify toxic 🤢 tweets using Hugging Face, Streamlit, and GitHub.

Jules Blount 31430956

## Milestone 1

To begin, I should mention that I already have a home server with Docker installed and multiple containers running.
The tutorial I followed to install Docker on my server is located here

Docker runtime environment verification:

Python prompt from Python container:
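The two verification steps above (screenshots originally shown here) can be reproduced with commands like the following; the image tags are examples, and the exact versions on my server may differ:

```shell
# Verify the Docker runtime works end to end
docker run --rm hello-world

# Drop into an interactive Python prompt inside an official Python container
docker run --rm -it python:3.11 python
```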

## Milestone 2

For milestone 2, I was tasked with developing a Streamlit app that lets the user enter text, select a pretrained model, and get a sentiment analysis of the text, using the Hugging Face transformers library and Hugging Face Spaces.

The Streamlit app is located here

https://user-images.githubusercontent.com/45794969/230540060-2a790672-6e8c-4c14-8842-6237a88ff91d.mp4

## Milestone 3

For milestone 3, I was challenged to build a multi-headed model capable of detecting different types of toxicity, such as threats, obscenity, insults, and identity-based hate, better than Perspective's current models. The classifier was developed using a pretrained language model of my choice; I chose DistilBERT.
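The core of a multi-headed (multi-label) classifier is a shared encoder followed by one sigmoid output per toxicity type, trained with binary cross-entropy rather than softmax, so each label is predicted independently. A minimal sketch of that head in PyTorch is below; the six labels follow the Jigsaw toxic-comment dataset that this task description mirrors, and the notebook's actual architecture and hyperparameters may differ:

```python
# Sketch of a multi-label toxicity head (assumed details; see the
# toxic_comments notebook for the actual implementation).
import torch
import torch.nn as nn

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

class ToxicityHead(nn.Module):
    """Maps a DistilBERT [CLS] embedding (768-d) to one logit per label."""
    def __init__(self, hidden_size: int = 768, num_labels: int = len(LABELS)):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, cls_embedding: torch.Tensor) -> torch.Tensor:
        return self.classifier(cls_embedding)  # raw logits, one per label

head = ToxicityHead()
cls_embedding = torch.randn(2, 768)   # stand-in for DistilBERT's [CLS] output
logits = head(cls_embedding)          # shape (2, 6)

# Multi-label: independent sigmoids + binary cross-entropy, not softmax.
targets = torch.zeros(2, len(LABELS))
loss = nn.BCEWithLogitsLoss()(logits, targets)
probs = torch.sigmoid(logits)         # each in (0, 1); they need not sum to 1
```

In practice the same setup can be obtained from the transformers library by loading `AutoModelForSequenceClassification` with `num_labels=6` and `problem_type="multi_label_classification"`, which applies `BCEWithLogitsLoss` automatically.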

The Streamlit app is located here

Model development and training can be found in the `toxic_comments` notebook here