LakshmiHarika committed
Commit 9eec7d1 · verified · 1 Parent(s): e8c3b76

Create 6Feature Engineering.py

Files changed (1)
  1. pages/6Feature Engineering.py +730 -0
pages/6Feature Engineering.py ADDED
@@ -0,0 +1,730 @@
+ import streamlit as st
+
+ st.markdown("""
+ <style>
+ /* Set a soft background color */
+ body {
+     background-color: #eef2f7;
+ }
+ /* Style for main title */
+ h1 {
+     color: black;
+     font-family: 'Roboto', sans-serif;
+     font-weight: 700;
+     text-align: center;
+     margin-bottom: 25px;
+ }
+ /* Style for headers */
+ h2 {
+     color: black;
+     font-family: 'Roboto', sans-serif;
+     font-weight: 600;
+     margin-top: 30px;
+ }
+
+ /* Style for subheaders */
+ h3 {
+     color: red;
+     font-family: 'Roboto', sans-serif;
+     font-weight: 500;
+     margin-top: 20px;
+ }
+ .custom-subheader {
+     color: black;
+     font-family: 'Roboto', sans-serif;
+     font-weight: 600;
+     margin-bottom: 15px;
+ }
+ /* Paragraph styling */
+ p {
+     font-family: 'Georgia', serif;
+     line-height: 1.8;
+     color: black;
+     margin-bottom: 20px;
+ }
+ /* List styling with diamond bullets */
+ .icon-bullet {
+     list-style-type: none;
+     padding-left: 20px;
+ }
+ .icon-bullet li {
+     font-family: 'Georgia', serif;
+     font-size: 1.1em;
+     margin-bottom: 10px;
+     color: black;
+ }
+ .icon-bullet li::before {
+     content: "◆";
+     padding-right: 10px;
+     color: black;
+ }
+ /* Sidebar styling */
+ .sidebar .sidebar-content {
+     background-color: #ffffff;
+     border-radius: 10px;
+     padding: 15px;
+ }
+ .sidebar h2 {
+     color: #495057;
+ }
+ .step-box {
+     font-size: 18px;
+     background-color: #F0F8FF;
+     padding: 15px;
+     border-radius: 10px;
+     box-shadow: 2px 2px 8px #D3D3D3;
+     line-height: 1.6;
+ }
+ .box {
+     font-size: 18px;
+     background-color: #F0F8FF;
+     padding: 15px;
+     border-radius: 10px;
+     box-shadow: 2px 2px 8px #D3D3D3;
+     line-height: 1.6;
+ }
+ .title {
+     font-size: 26px;
+     font-weight: bold;
+     color: #E63946;
+     text-align: center;
+     margin-bottom: 15px;
+ }
+ .formula {
+     font-size: 20px;
+     font-weight: bold;
+     color: #2A9D8F;
+     background-color: #F7F7F7;
+     padding: 10px;
+     border-radius: 5px;
+     text-align: center;
+     margin-top: 10px;
+ }
+ /* Custom button style */
+ .streamlit-button {
+     background-color: #00FFFF;
+     color: #000000;
+     font-weight: bold;
+ }
+ </style>
+ """, unsafe_allow_html=True)
+
+ st.markdown("<h1 class='header-title'>🛠️ Feature Engineering 📌</h1>", unsafe_allow_html=True)
+
+ st.markdown(
+     """
+ <div class='info-box'>
+ <p>🔹 When we take existing features from collected data and create new, useful features from them, the technique is known as <span class='highlight'>Feature Engineering</span>.</p>
+ <p>These engineered features enhance machine learning models.</p>
+ <p>A subpart of feature engineering is Feature Extraction.</p>
+ </div>
+     """,
+     unsafe_allow_html=True
+ )
+
+ st.subheader(":violet[Feature Extraction]")
+ st.markdown(
+     """
+ <div class='info-box'>
+ <p>📝 Feature Extraction is the process of converting text data (natural language) into a form that a machine can understand.</p>
+ <ul>
+ <li>Text is converted into vectors using specific algorithms.</li>
+ <li>Preserving meaningful information is key.</li>
+ <li>Helps in better text analysis & machine learning.</li>
+ </ul>
+ </div>
+     """,
+     unsafe_allow_html=True
+ )
+
+ st.header("Vectorization🧭")
+ st.markdown(
+     """
+ <div class='info-box'>
+ <p>Vectorization is the process of converting text into vectors.</p>
+ <p>This allows ML models to process text data effectively.</p>
+ </div>
+     """,
+     unsafe_allow_html=True
+ )
+
+ st.subheader(":violet[Vectorization techniques]")
+ st.markdown("""
+ There are different techniques to convert text into vector format. They are:
+ <ul class="icon-bullet">
+ <li>One-Hot Vectorization</li>
+ <li>Bag of Words (BOW)</li>
+ <li>Term Frequency - Inverse Document Frequency (TF-IDF)</li>
+ </ul>
+ """, unsafe_allow_html=True)
+
+ st.markdown("""
+ There are also advanced vectorization techniques. They are:
+ <ul class="icon-bullet">
+ <li>Word Embedding</li>
+ <li>Word2Vec</li>
+ <li>FastText</li>
+ </ul>
+ """, unsafe_allow_html=True)
+
+ st.sidebar.title("Navigation 🧭")
+ file_type = st.sidebar.radio(
+     "Choose a Vectorization technique:",
+     ("One-Hot Vectorization", "Bag of Words(BOW)", "Term Frequency - Inverse Document Frequency(TF-IDF)"))
+
+ if file_type == "One-Hot Vectorization":
+     st.title(":red[One-Hot Vectorization]")
+     st.markdown("""
+ ### 📌 What is One-Hot Vectorization?
+ - It is a vectorization technique where text is converted into numerical vectors.
+ - It represents each word as a unique vector for machine learning models.
+     """)
+
+     st.markdown("""
+ ### 🛠️ Steps in One-Hot Vectorization:
+ - Create a vocabulary ➡️ (a set of all unique words in the collected corpus).
+ - Find the length of the vocabulary ➡️ (total number of unique words = d dimensions).
+ - Convert each word into a vector:
+   - Every unique word is transformed into a vector.
+   - Each vector has d dimensions, where each dimension corresponds to a unique word.
+   - Words are converted individually, then combined to form a document's representation.
+
+ This technique ensures that each word is treated uniquely and efficiently in NLP tasks.
+     """)
+
+     st.markdown("""
+ - Each word gets a unique vector representation.
+ - The number of dimensions = total vocabulary size.
+ - Words are vectorized separately, then combined into document vectors.
+     """)
+
+     st.markdown("""
+ | **Word** | **Vector Representation** |
+ |----------|---------------------------|
+ | **toy**  | [1,0,0,0,0] |
+ | **is**   | [0,1,0,0,0] |
+ | **good** | [0,0,1,0,0] |
+ | **not**  | [0,0,0,1,0] |
+ | **bad**  | [0,0,0,0,1] |
+     """, unsafe_allow_html=True)
+
+     st.markdown("""
+ - d₁ → v₁ → `[[1,0,0,0,0] , [0,1,0,0,0] , [0,0,1,0,0]]`
+ - d₂ → v₂ → `[[1,0,0,0,0] , [0,1,0,0,0] , [0,0,0,1,0] , [0,0,1,0,0]]`
+ - d₃ → v₃ → `[[0,0,0,0,1] , [1,0,0,0,0]]`
+
+ (Here d₁ = "toy is good", d₂ = "toy is not good", d₃ = "bad toy".)
+ This One-Hot Vectorization technique converts words into numerical vectors while preserving their uniqueness.
+     """)
+
+     st.markdown("""
+ ### Key Takeaways:
+ - Each word is represented as a 5-dimensional vector.
+ - Every dimension corresponds to a unique word in the vocabulary.
+ - This method is useful for transforming text into a numerical format for machine learning tasks (a short sketch in plain Python follows below).
+     """)
+
+     st.subheader(":red[Advantages]")
+     st.markdown('''
+ - One-Hot Vectorization is easy to implement
+     ''')
+     st.subheader(":red[Disadvantages]")
+
+     st.subheader(":blue[Different Document Length]")
+     st.markdown('''
+ - Every document has a different number of words (here we are not converting a document to a vector; we are converting each word to a vector)
+ - So we can't convert the result into tabular data
+ - Converting into tabular data becomes possible when a whole document is converted into a vector (this is solved by Bag of Words (BOW))
+     ''')
+
+     st.subheader(":blue[Sparsity]")
+     st.markdown('''
+ - The vectors created using one-hot vectorization are sparse
+ - When the entire data is given to an algorithm, the model it learns becomes biased towards zero values because the data is sparse
+ - This issue in ML is known as overfitting
+ - It is solved in deep learning
+     ''')
+
+     st.subheader(":blue[Curse of Dimensionality]")
+     st.markdown('''
+ - As documents increase ↑, the vocabulary increases ↑, so vector dimensionality also increases ↑
+ - ML performance decreases ↓, since dimensionality depends entirely on the vocabulary and shoots up as documents grow and differ
+     ''')
+
+     st.subheader(":blue[Out of Vocabulary Issue]")
+     st.markdown('''
+ - Documents are converted only at training time, using our own dataset
+ - If a word was not present in the dataset during training, it can't be converted into vector format, which results in a key error
+ - This is solved by FastText
+     ''')
+
+     st.subheader(":blue[Inability to Preserve Semantic Meaning]")
+     st.markdown('''
+ - While converting text → vector format, the relationships between words should be preserved
+ - We need to convert documents into vectors in such a way that semantic relationships are preserved
+ - Similarity ⬆️ when Distance ⬇️
+ - Similarity ∝ 1 / Distance
+ - The distance between vectors of similar texts should be very small
+ - If this is satisfied, the technique preserves semantic meaning well
+     ''')
+
+     st.subheader(":blue[Lack of Sequential Information]")
+     st.markdown('''
+ - Sequential information is not preserved
+     ''')
+
+
+ elif file_type == "Bag of Words(BOW)":
+     st.title(":red[Bag of Words(BOW)]")
+     st.markdown("""
+ ### 📌 What is Bag of Words(BOW)?
+ - It is a vectorization technique where text is converted into numerical vectors.
+ - BOW is implemented to overcome the problem of different document lengths (which prevented conversion into tabular data).
+     """)
+
+     st.markdown("""
+ ### 🛠️ Steps in Bag of Words(BOW):
+ - Create a vocabulary ➡️ (a set of all unique words in the collected corpus).
+ - Find the length of the vocabulary ➡️ (total number of unique words = d dimensions).
+ - Each document is converted into a d-dimensional vector.
+ - Every dimension belongs to a unique word.
+ - Bag of Words is interested in how many times each word occurs.
+ - If two documents are similar, BOW finds that similarity based on the same words repeating in the two documents.
+ - By converting documents into vectors, we can concatenate all vectors to form tabular data,
+   where rows are documents and columns represent features, which are the unique words.
+ - Every dimension's value is a count: how many times the word occurs in the document.
+     """)
+     st.markdown(
+         "<div class='corpus-box'>"
+         "<strong>Document 1:</strong> I love cricket I <br>"
+         "<strong>Document 2:</strong> I hate cricket <br>"
+         "<strong>Document 3:</strong> I like cricket"
+         "</div>",
+         unsafe_allow_html=True,
+     )
+
+     st.subheader(":green[Unique Words (Vocabulary)]")
+     st.markdown(
+         "<p class='content'>The set of unique words in our corpus is: <strong>{I, love, cricket, hate, like}</strong>. "
+         "This set forms the vocabulary, and the number of unique words determines the vector dimensions.</p>",
+         unsafe_allow_html=True,
+     )
+
+     st.subheader(":green[Word Count Representation]")
+     st.markdown(
+         "<p class='content'>Each document is converted into a numerical vector by counting the occurrences of words "
+         "from the vocabulary within each document.</p>",
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         "<div class='vector-box'><strong>Vector Representation:</strong><br>"
+         "Document 1 ➝ [2,1,1,0,0] (I = 2, love = 1, cricket = 1, hate = 0, like = 0)<br>"
+         "Document 2 ➝ [1,0,1,1,0] (I = 1, love = 0, cricket = 1, hate = 1, like = 0)<br>"
+         "Document 3 ➝ [1,0,1,0,1] (I = 1, love = 0, cricket = 1, hate = 0, like = 1)"
+         "</div>",
+         unsafe_allow_html=True,
+     )
+
+     st.subheader(":green[Tabular Representation]")
+     st.markdown(
+         "<p class='content'>Since all three vectors have the same number of dimensions, we can merge them into a tabular format:</p>",
+         unsafe_allow_html=True,
+     )
+
+     st.subheader(":red[Advantages]")
+     st.markdown('''
+ - Bag of Words (BOW) is easy to implement
+ - The data can be converted into tabular form
+     ''')
+
+     st.subheader(":red[Disadvantages]")
+
+     st.subheader(":blue[Curse of Dimensionality]")
+     st.markdown('''
+ - As documents increase ↑, the vocabulary increases ↑, so vector dimensionality also increases ↑
+ - ML performance decreases ↓, since dimensionality depends entirely on the vocabulary and shoots up as documents grow and differ
+ - As the corpus increases, the vocabulary increases, and so does the dimensionality
+     ''')
+
+     st.subheader(":blue[Sparsity]")
+     st.markdown('''
+ - The vectors created using BOW are sparse
+ - When the entire data is given to an algorithm, the model it learns becomes biased towards zero values because the data is sparse
+ - This issue in ML is known as overfitting
+ - It is solved in deep learning
+     ''')
+
+     st.subheader(":blue[Out of Vocabulary Issue]")
+     st.markdown('''
+ - Documents are converted only at training time, using our own dataset
+ - If a word was not present in the dataset during training, it can't be converted into vector format, which results in a key error
+ - This is solved by FastText
+     ''')
+
+     st.subheader(":blue[Inability to Preserve Semantic Meaning]")
+     st.markdown('''
+ - It can't completely preserve semantic meaning (it only slightly preserves it)
+ - Based on the count (number of times a particular word occurs), it can sometimes preserve semantic meaning
+ - Semantic meaning is preserved based on the uniqueness of the words
+ - The more unique words two documents have, the farther apart they will be
+ - With fewer unique words, they will be close to each other
+     ''')
+
+     st.subheader(":blue[Lack of Sequential Information]")
+     st.markdown('''
+ - Sequential information is not preserved
+     ''')
+
+     st.code(r'''
+ import pandas as pd
+ from sklearn.feature_extraction.text import CountVectorizer
+
+ corpus = pd.DataFrame({"Review": ["biryani is is is good",
+                                   "biryani is not good",
+                                   "biryani is too costly"]})
+
+ # Object of the CountVectorizer class
+ cv = CountVectorizer(lowercase=True, strip_accents="unicode", analyzer="word",
+                      stop_words="english",  # built-in English stop word list
+                      token_pattern=r"(?u)\b\w\w+\b")
+ cv.fit(corpus["Review"])                 # learn the vocabulary
+ vector = cv.transform(corpus["Review"])  # vectorize using the learned vocabulary
+ cv.get_feature_names_out()               # feature (column) names
+ cv.vocabulary_                           # word -> column index mapping
+ vector.toarray()                         # dense count matrix
+     ''')
+
+     st.header("Binary Bag of Words(BBOW)")
+     st.markdown('''
+ - Binary Bag of Words (BBOW) is an extension of Bag of Words (BOW)
+     ''')
+
+     st.markdown("""
+ ### 🛠️ Steps in Binary Bag of Words(BBOW):
+ - Create a vocabulary (set of unique words)
+ - Each document is converted into vector form (d dimensions)
+ - In Bag of Words the value is a count, but in Binary Bag of Words it only tells whether the word is present or not
+ - This makes it much easier to find the distance between vectors (here the distance is simply the number of differing unique words)
+ - If there are more unique words --> the distance is high
+ - Distance calculation is much faster than with Bag of Words
+ - The distance is the total number of unique words that differ between two documents (a sketch follows below)
+     """)
+
+
+ elif file_type == "Term Frequency - Inverse Document Frequency(TF-IDF)":
+     st.title(":red[Term Frequency - Inverse Document Frequency(TF-IDF)]")
+     st.markdown("""
+ ### 📌 What is TF-IDF?
+ - It is a vectorization technique where text is converted into numerical vectors.
+     """)
+
+     st.subheader(":violet[🛠️ Steps in TF-IDF]")
+
+     st.markdown(
+         """
+ <ul>
+ <li><strong>Create a vocabulary:</strong> A set of unique words from the corpus.</li>
+ <li><strong>Convert each document into a vector:</strong> A d-dimensional representation.</li>
+ <li><strong>Calculate Term Frequency (TF):</strong> Measures the importance of a word within a document.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown("<div class='formula'>TF(wᵢ, dⱼ) = (Occurrences of wᵢ in dⱼ) / (Total words in dⱼ)</div>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <ul>
+ <li><strong>Compute Inverse Document Frequency (IDF):</strong> Measures how important a word is across all documents.</li>
+ <li><strong>For every word in the vocabulary, apply IDF:</strong></li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown("<div class='formula'>IDF(wᵢ, C) = log(N/n)</div>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ - <strong>N:</strong> Total number of documents in the corpus.<br>
+ - <strong>n:</strong> Number of documents containing the word wᵢ.<br>
+ - TF-IDF helps in understanding word significance while reducing the impact of commonly used words.
+         """,
+         unsafe_allow_html=True,
+     )
+     st.markdown("<h1 class='title'>📌 Example of TF-IDF</h1>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <div class='box'>
+ <strong>Given a corpus with 3 documents:</strong><br><br>
+ <strong>d1:</strong> w1, w2, w3, w1 → v1 <br>
+ <strong>d2:</strong> w1, w2, w2, w3, w4, w2, w3 → v2 <br>
+ <strong>d3:</strong> w1, w5 → v3 <br><br>
+ <strong>Vocabulary:</strong> {w1, w2, w3, w4, w5} <br>
+ <strong>Vocabulary Size:</strong> 5 (d-dimension)
+ </div>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown("<h2 style='color: #6A0572;'>📊 Term Frequency (TF) Calculation</h2>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <ul>
+ <li>TF measures how often a word appears in a document.</li>
+ <li>Formula: <span class='highlight'>TF(wᵢ, dⱼ) = (Occurrences of wᵢ in dⱼ) / (Total words in dⱼ)</span></li>
+ <li>TF values change based on the document.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <div class='formula'>
+ TF(w1, d1) = 2/4 = 0.5 <br>
+ TF(w2, d1) = 1/4 = 0.25 <br>
+ TF(w3, d1) = 1/4 = 0.25 <br>
+ TF(w4, d1) = 0/4 = 0 <br>
+ TF(w5, d1) = 0/4 = 0 <br>
+ </div>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <ul>
+ <li>TF values always range from <strong>0 to 1</strong>.</li>
+ <li>Case 1: <span class='highlight'>TF = 0</span> → the word is not present in the document.</li>
+ <li>Case 2: <span class='highlight'>TF = 1</span> → the word is the only word in the document.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown("<h2 style='color: #6A0572;'>📉 Inverse Document Frequency (IDF) Calculation</h2>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <ul>
+ <li>IDF measures how important a word is across the entire corpus.</li>
+ <li>Formula: <span class='highlight'>IDF(wᵢ, C) = log(N/n)</span></li>
+ <li>N = Total number of documents.</li>
+ <li>n = Number of documents containing wᵢ.</li>
+ <li>IDF values range from <strong>0 to ∞</strong>.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown("<h2 style='color: #6A0572;'>📌 TF-IDF Calculation</h2>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <ul>
+ <li>We calculate TF-IDF by multiplying TF and IDF values.</li>
+ <li>Formula: <span class='highlight'>TF-IDF = TF * IDF</span></li>
+ <li>TF-IDF helps reduce the impact of frequent words while keeping rare words important.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <div class='formula'>
+ d1 → v1 = [0, 0.04, 0.04, 0, 0] (TF * IDF values, using log base 10)
+ </div>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ - The final TF-IDF values may be low, high, or even zero depending on term frequency and document frequency (reproduced in the sketch below).
+         """,
+         unsafe_allow_html=True,
+     )
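+
+     st.markdown("The sketch below (illustrative, using log base 10 as in the example) reproduces the v1 values above by hand:")
+     st.code('''
+ from math import log10
+
+ docs = [["w1", "w2", "w3", "w1"],
+         ["w1", "w2", "w2", "w3", "w4", "w2", "w3"],
+         ["w1", "w5"]]
+ vocab = ["w1", "w2", "w3", "w4", "w5"]
+ N = len(docs)
+
+ def tf(word, doc):
+     # occurrences of the word in the document / total words in the document
+     return doc.count(word) / len(doc)
+
+ def idf(word):
+     n = sum(word in doc for doc in docs)  # documents containing the word
+     return log10(N / n)
+
+ v1 = [round(tf(w, docs[0]) * idf(w), 2) for w in vocab]
+ print(v1)  # [0.0, 0.04, 0.04, 0.0, 0.0]
+     ''')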
+
+     st.markdown("<h1 class='title'>📌 TF-IDF Key Insights</h1>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>📈 Case 1: High TF-IDF Values</h3>
+ <ul>
+ <li>If the word appears <strong>frequently</strong> in a document → <span class='highlight'>High TF-IDF</span></li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>📉 Case 2: Low TF-IDF Values</h3>
+ <ul>
+ <li>If the word appears <strong>rarely</strong> in a document → <span class='highlight'>Low TF-IDF</span></li>
+ <li>TF is always in the range: <strong>[0, 1]</strong></li>
+ <li>IDF is in the range: <strong>[0, ∞)</strong></li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>📊 Understanding TF (Term Frequency)</h3>
+ <ul>
+ <li>TF gives <strong>more importance</strong> to words that occur <strong>frequently</strong> in a document.</li>
+ <li>As the word frequency <span class='highlight'>increases</span> → TF <span class='highlight'>increases</span>.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>📉 Understanding IDF (Inverse Document Frequency)</h3>
+ <ul>
+ <li>IDF Formula: <span class='highlight'>IDF(wᵢ, C) = log(N/n)</span></li>
+ <li><strong>N:</strong> Total number of documents</li>
+ <li><strong>n:</strong> Number of documents containing the word</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <div class='formula'>
+ <strong>When n is small:</strong> <br>
+ - N/n increases → log(N/n) increases ⬆️ <br>
+ - The word is rare in the corpus → higher importance in IDF <br><br>
+ <strong>When n is large:</strong> <br>
+ - N/n decreases → log(N/n) decreases ⬇️ <br>
+ - The word is common → lower importance in IDF <br><br>
+ <strong>When N = n:</strong> log(N/n) = 0 (the word appears in every document)
+ </div>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>📌 TF-IDF Calculation</h3>
+ <ul>
+ <li><strong>TF</strong> focuses on words that are <strong>frequent</strong> in a document.</li>
+ <li><strong>IDF</strong> focuses on words that are <strong>rare</strong> in the corpus.</li>
+ <li><span class='highlight'>TF-IDF is high</span> for words that appear <strong>often in a document</strong> but <strong>rarely in the corpus</strong>.</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.subheader(":red[Why log is used]")
+     st.markdown("<h1 class='title'>📌 Understanding TF-IDF Scaling</h1>", unsafe_allow_html=True)
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>Minimum and Maximum Values of N/n</h3>
+ <ul>
+ <li>When <strong>n is maximum</strong> (n = N) → <span class='highlight'>N/n = 1</span></li>
+ <li>At <strong>training time</strong>: <span class='highlight'>1 ≤ n ≤ N</span></li>
+ <li>At <strong>test time</strong>: <span class='highlight'>0 ≤ n ≤ N</span> (due to Out-of-Vocabulary words)</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>IDF Dominance Over TF</h3>
+ <ul>
+ <li>If <strong>n decreases</strong> → <span class='highlight'>N/n increases (towards its maximum)</span></li>
+ <li>The TF scale is very <span class='highlight'>small</span>, but the raw N/n scale can be very <span class='highlight'>large</span></li>
+ <li>Without the log, IDF would <span class='highlight'>dominate</span> TF, favoring rare words over frequent ones</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <h3 style='color: #6A0572;'>How the Log Solves IDF Dominance</h3>
+ <ul>
+ <li>Applying the <span class='highlight'>log</span> reduces the dominance of IDF</li>
+ <li>The logarithm <span class='highlight'>compresses</span> values to a balanced scale</li>
+ <li>It prevents bias towards rare words and maintains proportionality</li>
+ </ul>
+         """,
+         unsafe_allow_html=True,
+     )
+
+     st.markdown(
+         """
+ <div class='formula'>
+ <strong>TF balances frequent words, while log(IDF) prevents rare-word dominance! 🚀</strong>
+ </div>
+         """,
+         unsafe_allow_html=True,
+     )
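+
+     st.markdown("A quick illustrative demo of this compression (N = 10000 is just an assumed example value): the raw N/n ratio spans several orders of magnitude, while log(N/n) stays on a small, balanced scale:")
+     st.code('''
+ from math import log10
+
+ N = 10000                      # total documents (example value)
+ for n in [1, 10, 100, 10000]:  # documents containing the word
+     print(n, N / n, round(log10(N / n), 2))
+ # raw N/n ranges from 1 to 10000, while log10(N/n) stays between 0 and 4
+     ''')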
+
+     st.subheader(":red[Advantages]")
+     st.markdown('''
+ - Easy to implement
+ - Can be converted into tabular format
+ - It gives importance both to words occurring frequently in a document and to words occurring rarely in the corpus
+     ''')
+     st.subheader(":red[Disadvantages]")
+
+     st.subheader(":blue[Curse of Dimensionality]")
+     st.markdown('''
+ - As documents increase ↑, the vocabulary increases ↑, so vector dimensionality also increases ↑
+ - ML performance decreases ↓, since dimensionality depends entirely on the vocabulary and shoots up as documents grow and differ
+ - As the corpus increases, the vocabulary increases, and so does the dimensionality
+     ''')
+
+     st.subheader(":blue[Sparsity]")
+     st.markdown('''
+ - The vectors created using TF-IDF are sparse
+ - When the entire data is given to an algorithm, the model it learns becomes biased towards zero values because the data is sparse
+ - This issue in ML is known as overfitting
+ - It is solved in deep learning
+     ''')
+
+     st.subheader(":blue[Out of Vocabulary Issue]")
+     st.markdown('''
+ - Documents are converted only at training time, using our own dataset
+ - If a word was not present in the dataset during training, it can't be converted into vector format, which results in a key error
+ - This is solved by FastText
+     ''')
+
+     st.subheader(":blue[Inability to Preserve Semantic Meaning]")
+     st.markdown('''
+ - It only slightly preserves semantic meaning
+     ''')
+
+     st.subheader(":blue[Lack of Sequential Information]")
+     st.markdown('''
+ - Sequential information is not preserved
+ - In TF-IDF we give importance to individual words, since we perform word tokenization
+ - In ML, no algorithm is capable of preserving sequential information
+ - This is only fully solved by deep learning
+ - But by applying a trick to BOW/BBOW/TF-IDF we can slightly preserve sequential information
+ - That technique is known as n-gram
+     ''')
+
+     st.header(":red[n-gram]")
+     st.markdown('''
+ - The default in BOW/BBOW/TF-IDF is always 1-gram (single words)
+ - The vocabulary is created based on the chosen n-gram range
+ - n-grams are mostly used only up to 1-, 2-, or 3-grams, because ML performance decreases as dimensionality increases
+ - n-grams are used to slightly preserve sequential information (see the sketch below)
+     ''')
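+
+     st.markdown("An illustrative sketch of n-grams with scikit-learn: `ngram_range=(1, 2)` builds the vocabulary from both single words (1-grams) and consecutive word pairs (2-grams):")
+     st.code('''
+ from sklearn.feature_extraction.text import CountVectorizer
+
+ corpus = ["biryani is good", "biryani is not good"]
+
+ # ngram_range=(1, 2) -> vocabulary contains 1-grams and 2-grams
+ cv = CountVectorizer(ngram_range=(1, 2))
+ cv.fit(corpus)
+ print(cv.get_feature_names_out())
+ # ['biryani' 'biryani is' 'good' 'is' 'is good' 'is not' 'not' 'not good']
+     ''')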
+
+     st.code('''
+ import pandas as pd
+ from sklearn.feature_extraction.text import TfidfVectorizer
+
+ corpus = pd.DataFrame({"Review": ["biryani is is is is résume is good",
+                                   "biryani biryani biryani is not good",
+                                   "biryani is too costly"]})
+
+ tf = TfidfVectorizer()
+ vector = tf.fit_transform(corpus["Review"])  # learn the vocabulary and vectorize in one step
+ vector.toarray()                             # dense TF-IDF matrix
+ tf.vocabulary_                               # word -> column index mapping
+     ''')