# Spam Classification Dataset

## Overview
The Spam Classification Dataset is a collection of SMS messages, each labeled as either "spam" or "ham" (non-spam). It is designed for binary text classification: predicting whether a message is spam based on its content.
## Dataset Structure
The dataset is provided as a single CSV file named spam.csv. It contains 5,572 entries, with each entry corresponding to an SMS message. The dataset includes the following columns:
### Columns
- `Category`: The label for the message, indicating whether it is spam (`spam`) or not (`ham`).
- `Message`: The content of the SMS message.
## Summary Statistics
- Total Messages: 5,572
- Unique Categories: 2 (`spam`, `ham`)
- Most Frequent Message: "Sorry, I'll call later" (appears 30 times)
- Most Frequent Category: `ham` (4,825 occurrences)
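Counts like these can be reproduced with pandas. A minimal sketch, using a tiny inline sample in place of `spam.csv` (the sample rows are hypothetical, not taken from the dataset):

```python
import pandas as pd

# Tiny hypothetical sample standing in for spam.csv
df = pd.DataFrame({
    "Category": ["ham", "spam", "ham", "ham"],
    "Message": [
        "Sorry, I'll call later",
        "WINNER!! Claim your prize now",
        "Sorry, I'll call later",
        "Are we still on for lunch?",
    ],
})

# Messages per label (on the full dataset: ham 4,825, spam 747)
print(df["Category"].value_counts())

# Most frequent message text
print(df["Message"].mode()[0])
```

On the full `spam.csv`, the same two calls yield the label counts and the "Sorry, I'll call later" statistic quoted above.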
## Usage
This dataset is suitable for binary classification tasks, particularly for training and evaluating models that can detect spam messages in SMS data. It can be used to develop and benchmark various natural language processing (NLP) models, such as:
- Naive Bayes Classifier
- Support Vector Machines (SVM)
- Logistic Regression
- Deep Learning Models (e.g., LSTM, BERT)
## Example Use Cases
- Spam Detection: Training a machine learning model to automatically detect spam messages.
- Text Preprocessing: Exploring text preprocessing techniques like tokenization, stopword removal, and stemming/lemmatization.
- Feature Engineering: Experimenting with different feature extraction methods, such as TF-IDF, word embeddings, or n-grams.
## Requirements
To run analyses on this dataset, you'll need the following Python libraries:

```bash
pip install pandas numpy scikit-learn nltk
```
## Example Code

### Loading the Dataset
You can load the dataset using the following code snippet:
```python
import pandas as pd

# Load the dataset
df = pd.read_csv('spam.csv')

# Display the first few rows
print(df.head())
```
### Preprocessing Example
Here’s an example of how you might preprocess the text data for training:
```python
import nltk
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer

# Download the stopword list (needed once)
nltk.download('stopwords')
from nltk.corpus import stopwords

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    df['Message'], df['Category'], test_size=0.2, random_state=42
)

# Initialize the TF-IDF vectorizer, dropping English stopwords
tfidf = TfidfVectorizer(stop_words=stopwords.words('english'))

# Fit on the training data only, then transform both splits
X_train_tfidf = tfidf.fit_transform(X_train)
X_test_tfidf = tfidf.transform(X_test)
```
### Model Training Example
Here’s how you might train a simple Naive Bayes classifier:
```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

# Train the model
model = MultinomialNB()
model.fit(X_train_tfidf, y_train)

# Make predictions on the held-out test set
y_pred = model.predict(X_test_tfidf)

# Evaluate the model
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
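Putting the pieces together, a trained model can classify unseen messages. Here is a self-contained sketch that mirrors the vectorize-then-train steps above, using a tiny hypothetical sample instead of `spam.csv`:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hypothetical sample standing in for spam.csv
messages = [
    "WINNER!! You have won a free prize, claim now",
    "Free entry in a weekly competition, text WIN to claim",
    "Sorry, I'll call later",
    "Are we still on for lunch today?",
]
labels = ["spam", "spam", "ham", "ham"]

# Vectorize and train as in the examples above
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(messages)
model = MultinomialNB()
model.fit(X, labels)

# Classify an unseen message
new = tfidf.transform(["Claim your free prize now"])
print(model.predict(new)[0])  # "spam" on this toy sample
```

With only four training messages the prediction is obviously fragile; on the full dataset the same pipeline is what the preprocessing and training examples above build.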