from sklearn.multioutput import MultiOutputClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import classification_report, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier

# Logistic Regression (use OneVsRest)
def multilabel_logistic_regression(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    
    model = OneVsRestClassifier(LogisticRegression(solver='lbfgs', max_iter=1000))
    model.fit(X_train, y_train)
    
    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# Decision Tree
def multilabel_decision_tree(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    
    model = MultiOutputClassifier(DecisionTreeClassifier())
    model.fit(X_train, y_train)
    
    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# Random Forest
def multilabel_random_forest(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    
    model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=42))
    model.fit(X_train, y_train)
    
    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# SVM (with OneVsRest)
def multilabel_svm(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # probability=True is only needed for predict_proba; it slows training noticeably
    model = OneVsRestClassifier(SVC(kernel='rbf', probability=True))
    model.fit(X_train, y_train)

    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# k-NN (KNeighborsClassifier supports multi-label directly)
def multilabel_knn(X, y, k=5):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X_train, y_train)

    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# Naive Bayes (MultinomialNB with OneVsRest; requires non-negative features, e.g. counts or TF-IDF)
def multilabel_naive_bayes(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = OneVsRestClassifier(MultinomialNB())
    model.fit(X_train, y_train)

    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# XGBoost (with OneVsRest)
def multilabel_xgboost(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # use_label_encoder is deprecated (and removed in xgboost >= 2.0); eval_metric alone suffices
    model = OneVsRestClassifier(XGBClassifier(eval_metric='logloss'))
    model.fit(X_train, y_train)

    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)

# MLP (Neural Net)
def multilabel_mlp(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = MultiOutputClassifier(MLPClassifier(hidden_layer_sizes=(100,), max_iter=500))
    model.fit(X_train, y_train)

    y_pred = model.predict(X_test)
    return classification_report(y_test, y_pred), accuracy_score(y_test, y_pred)
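
As a quick end-to-end check, the split/fit/score pattern shared by the helpers above can be exercised on scikit-learn's synthetic multilabel generator. This sketch repeats the random-forest variant inline so it runs standalone; note that `accuracy_score` on multilabel targets reports *subset accuracy*, i.e. a sample only counts as correct when its entire label vector matches.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Synthetic multilabel data: 200 samples, 20 features, 3 binary labels.
X, y = make_multilabel_classification(
    n_samples=200, n_features=20, n_classes=3, random_state=42
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# One independent RandomForest per label, mirroring multilabel_random_forest above.
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=42))
model.fit(X_train, y_train)

# Subset accuracy: fraction of test samples whose full label vector is exactly right.
subset_acc = accuracy_score(y_test, model.predict(X_test))
print(f"Subset accuracy: {subset_acc:.3f}")
```

Subset accuracy is a strict metric; for a per-label view, `sklearn.metrics.hamming_loss` or the per-label rows of `classification_report` are often more informative.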