---
license: mit
datasets:
- databoyface/python-tf-ome-src-v4.1
language:
- en
---

# Orthogonal Model of Emotions

A text classifier built with TensorFlow and Keras that maps English statements to one of 47 emotion classes.

## Author

C.J. Pitchford

## Published

18 August 2025

## Model and Weights

    Model: "sequential"
    _________________________________________________________________
     Layer (type)                Output Shape              Param #   
    =================================================================
     embedding (Embedding)       (None, 1000, 64)          6400000   
                                                                 
     bidirectional (Bidirection  (None, 1000, 128)         66048     
     al)                                                             
                                                                 
     global_max_pooling1d (Glob  (None, 128)               0         
     alMaxPooling1D)                                                 
                                                                 
     dense (Dense)               (None, 64)                8256      
                                                                 
     dropout (Dropout)           (None, 64)                0         
                                                                 
     dense_1 (Dense)             (None, 47)                3055      
                                                                 
    =================================================================
    Total params: 6477359 (24.71 MB)
    Trainable params: 6477359 (24.71 MB)
    Non-trainable params: 0 (0.00 Byte)
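The parameter counts above are consistent with a 100,000-word vocabulary, 64-dimensional embeddings, and a bidirectional LSTM with 64 units per direction. These hyperparameters are inferred from the summary table, not stated in this card, but the arithmetic can be checked:

```python
# Inferred hyperparameters (assumptions, reconstructed from the summary table)
vocab_size, embed_dim = 100_000, 64   # embedding params = 100000 * 64
units, n_classes = 64, 47             # LSTM units per direction, output classes

embedding = vocab_size * embed_dim                 # 6,400,000
# One LSTM direction: 4 gates, each with input + recurrent weights plus bias.
lstm_one_dir = 4 * ((embed_dim + units) * units + units)
bidirectional = 2 * lstm_one_dir                   # 66,048
dense = (2 * units) * 64 + 64                      # 8,256 (pooled 128 -> 64)
output = 64 * n_classes + n_classes                # 3,055

total = embedding + bidirectional + dense + output
print(total)  # 6477359, matching "Total params" above
```

The 66,048 recurrent-layer count matches an LSTM exactly (a GRU with the same units would give a different total), which is why an LSTM is assumed here.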

## Usage


    import pickle

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # 1. Load the pre-trained model
    model = tf.keras.models.load_model('OME4tf/ome-4a-model.h5')

    # 2. Load the fitted tokenizer and label encoder
    with open('OME4tf/ome-4a-tokenizer.pkl', 'rb') as f:
        tokenizer = pickle.load(f)
    with open('OME4tf/ome-4a-label_encoder.pkl', 'rb') as f:
        label_encoder = pickle.load(f)

    # 3. Test the model with a prediction on a sample statement
    text = "I failed to hide my distress."
    text_seq = tokenizer.texts_to_sequences([text])
    max_len = 1000  # must match the sequence length the model was trained with
    text_seq = pad_sequences(text_seq, maxlen=max_len, padding='post')
    pred_probs = model.predict(text_seq)
    pred_label = np.argmax(pred_probs, axis=1)
    print(f"Statement: {text}\nPrediction: {label_encoder.classes_[pred_label][0]}")
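The `argmax` above keeps only the single most likely class. Since `model.predict` returns the full probability vector over all 47 classes, the ranked alternatives are also available; a small sketch (the label list here is illustrative, standing in for `label_encoder.classes_`):

```python
import numpy as np

# Illustrative labels and probabilities, standing in for
# label_encoder.classes_ and the output of model.predict(...).
classes = np.array(["anger", "joy", "sadness", "fear"])
pred_probs = np.array([[0.05, 0.60, 0.25, 0.10]])  # shape (1, n_classes)

top_k = 3
# Sort indices by probability, highest first, and keep the top k.
order = np.argsort(pred_probs[0])[::-1][:top_k]
for i in order:
    print(f"{classes[i]}: {pred_probs[0][i]:.2f}")
```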

## Additional

The tokenizer and label encoder are also included as JSON, for users who prefer not to load `pickle` files.
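A minimal sketch of pickle-free decoding, assuming the JSON artifacts mirror the pickle files (the filenames and exact JSON layout below are assumptions): the tokenizer can be rebuilt with Keras's `tokenizer_from_json`, and the label encoder reduces to its ordered list of class names.

```python
import json
import numpy as np

# Tokenizer: Keras can restore one from its JSON dump (filename assumed):
# from tensorflow.keras.preprocessing.text import tokenizer_from_json
# with open('OME4tf/ome-4a-tokenizer.json') as f:
#     tokenizer = tokenizer_from_json(f.read())

# Label encoder: an ordered list of class names is enough to decode
# predictions. The list below is illustrative, not the real 47 labels.
classes = json.loads('["anger", "distress", "joy"]')

pred_probs = np.array([[0.2, 0.7, 0.1]])  # stand-in for model.predict(...)
pred_label = classes[int(np.argmax(pred_probs, axis=1)[0])]
print(pred_label)  # distress
```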