import streamlit as st st.markdown(""" """, unsafe_allow_html=True) st.markdown("

πŸ› οΈ Feature Engineering πŸ“Œ

", unsafe_allow_html=True) st.markdown( """

πŸ”Ή When we take existing features from collected data and create new useful features, where this is automatically engineered made from existing features and the technique of creating the features is known as Feature Engineering.

These engineered features enhance machine learning models.

A subpart of feature engineering is Feature Extraction.

""", unsafe_allow_html=True ) st.subheader(":violet[Feature Extraxtion]") st.markdown( """

πŸ“ Feature Extraction is the process where text data which is natural language is given to machine to understand the natural language.

""", unsafe_allow_html=True ) st.header("Vectorization🧭") st.markdown( """

Vectorization is the process of converting text into vector.

This allows ML models to process text data effectively.

""", unsafe_allow_html=True ) st.subheader(":violet[Vectorization techniques]") st.markdown(""" There a different techniques to convert text into vector format.They are : """, unsafe_allow_html=True) st.markdown(""" There are advance vectorization techniques.They are : """, unsafe_allow_html=True) st.sidebar.title("Navigation 🧭") file_type = st.sidebar.radio( "Choose a Vectorization technique :", ("One-Hot Vectorization", "Bag of Words(BOW)", "Term Frequency - Inverse Document Frequency(TF-IDF)")) if file_type == "One-Hot Vectorization": st.title(":red[One-Hot Vectorization]") st.markdown(""" ### πŸ“Œ What is One-Hot Vectorization? - It is a type of vectorization technique where text is converted into a numerical vector. - This technique helps in representing words as unique vectors for machine learning models. """) st.markdown(""" ### πŸ› οΈ Steps in One-Hot Vectorization: - Create a Vocabulary ➑️ (A set of all unique words in the collected corpus). - Find the Length of Vocabulary ➑️ (Total number of unique words = d-dimensions). - Convert Each Word into a Vector: - Every unique word is transformed into a vector. - Each vector has d-dimensions, where each dimension corresponds to a unique word. - Words are converted individually, and then combined to form a vector. This technique ensures that each word is treated uniquely and efficiently in NLP tasks. """) st.markdown(""" - Each word gets a unique vector representation. - The number of dimensions = total vocabulary size. - Words are vectorized separately, then combined into document vectors. """) st.markdown(""" | **Word** | **Vector Representation** | |----------|--------------------------| | **toy** | [1,0,0,0,0] | | **is** | [0,1,0,0,0] | | **good** | [0,0,1,0,0] | | **not** | [0,0,0,1,0] | | **bad** | [0,0,0,0,1] | """, unsafe_allow_html=True) st.markdown(""" - d₁ β†’ v₁ β†’ `[[1,0,0,0,0] , [0,1,0,0,0] , [0,0,1,0,0]]` - dβ‚‚ β†’ vβ‚‚ β†’ `[[1,0,0,0,0] , [0,1,0,0,0] , [0,0,0,1,0] , [0,0,1,0,0]]` - d₃ β†’ v₃ β†’ `[[0,0,0,0,1], [1,0,0,0,0]]` This One-Hot Vectorization technique converts words into numerical vectors while preserving their uniqueness. """) st.markdown(""" ### Key Takeaways: - Each word is represented as a 5-dimensional vector. - Every dimension corresponds to a unique word in the vocabulary. - This method is useful for transforming text into a numerical format for Machine Learning tasks. """) st.subheader(":red[Advantages]") st.markdown(''' - One-Hot Vectorization is easy to implement ''') st.subheader(":red[Disadvantages]") st.subheader(":blue[Different Document Length]") st.markdown(''' - 1.Every document have different no.of words (here we're not converting document to vector , we're converting word to vector) - We can't convert into tabular data - It would be possible to convert into tabular data when we're converting document into vector(this is solved by Bag of Words(BOW)) ''') st.subheader(":blue[Sparsity]") st.markdown(''' - The vector which is created using one-hot vectorization gives sparse vector - Entire data is given to any alogorithm and machine is going to learn fom data and algorithm it is baised towards zero values as the data is sparse data - This issue in ML is known as overfitting - It is solved in Deep learning ''') st.subheader(":blue[Curse of Dimensionality]") st.markdown(''' - Document increases ↑ Vocabulary ↑ and vector increases ↑ dimensionality also increases ↑ - Ml performance decreases ↓ - as the dimensionality totally depends on vocabulary and it shootup as the document increases and different ''') st.subheader(":blue[Out of Vocabulary Issue]") st.markdown(''' - Document only converted during training time and we're giving our own dataset - If the word is not present in our dataset while training it can't convert into vector format results in key error - This is solved by Fasttext ''') st.subheader(":blue[Inability to Preserve Semantic Meaning]") st.markdown(''' - While converting text β†’ vector format (same relationship should be preserved) - We need to convert document into vector in such a way that semantic relationship should be preserved - Similarity ⬆️ and Distance ⬇️ - Similarity ∝ 1 / Distance - Distance between vectors should be very small - If this is satisfied then the technique has good semantic meaning ''') st.subheader(":blue[Lack of Sequential Information]") st.markdown(''' - Sequential information is not preserved ''') elif file_type == "Bag of Words(BOW)": st.title(":red[Bag of Words(BOW)]") st.markdown(""" ### πŸ“Œ What is Bag of Words(BOW)? - It is a type of vectorization technique where text is converted into a numerical vector. - To overcome the problem of different document length(can't convert into tabular data) BOW is implemented. """) st.markdown(""" ### πŸ› οΈ Steps in Bag of Words(BOW): - Create a Vocabulary ➑️ (A set of all unique words in the collected corpus). - Find the Length of Vocabulary ➑️ (Total number of unique words = d-dimensions). - Each document is converted into vector which is in d- dimension - Every dimeension belongs to a unique word - Bag of Words is actually interested in how many times the word is occuring - If the two documents are same they will find out a similarity based on same words repeating in 2 different documents - By converting into documents into vectors we can concatenate all vectors to form tabular data - where roes are documents and columns represent features which are unique words - Every dimension value will be count - how many times the word is occuring in document """) st.markdown( "
" "Document 1: I love cricket I
" "Document 2: I hate cricket
" "Document 3: I like cricket" "
", unsafe_allow_html=True, ) st.subheader(":green[Unique Words (Vocabulary)]") st.markdown( "

The set of unique words in our corpus is: {I, love, cricket, hate, like}. " "This set forms the vocabulary, and the number of unique words determines the vector dimensions.

", unsafe_allow_html=True, ) st.subheader(":green[Word Count Representation]") st.markdown( "

Each document is converted into a numerical vector by counting the occurrences of words " "from the vocabulary within each document.

", unsafe_allow_html=True, ) st.markdown( "
Vector Representation:
" "Document 1 ➝ [2,1,1,0,0] (I = 2, love = 1, cricket = 1, hate = 0, like = 0)
" "Document 2 ➝ [1,0,1,1,0] (I = 1, love = 0, cricket = 1, hate = 1, like = 0)
" "Document 3 ➝ [1,0,1,0,1] (I = 1, love = 0, cricket = 1, hate = 0, like = 1)" "
", unsafe_allow_html=True, ) st.subheader(":green[Tabular Representation]") st.markdown( "

Since all three vectors have the same number of dimensions, we can merge them into a tabular format:

", unsafe_allow_html=True, ) st.subheader(":red[Advantages]") st.markdown(''' - Bag of Words(BOW) is easy to implement - Here we can convert the data into tabular data ''') st.subheader(":red[Disadvantages]") st.subheader(":blue[Curse of Dimensionality]") st.markdown(''' - Document increases ↑ Vocabulary ↑ and vector increases ↑ dimensionality also increases ↑ - Ml performance decreases ↓ - as the dimensionality totally depends on vocabulary and it shootup as the document increases and different - As the corpus increases , vocabulary increases -- dimensionality increses ''') st.subheader(":blue[Sparsity]") st.markdown(''' - The vector which is created using BOW gives sparse vector - Entire data is given to any alogorithm and machine is going to learn fom data and algorithm it is baised towards zero values as the data is sparse data - This issue in ML is known as overfitting - It is solved in Deep learning ''') st.subheader(":blue[Out of Vocabulary Issue]") st.markdown(''' - Document only converted during training time and we're giving our own dataset - If the word is not present in our dataset while training it can't convert into vector format results in key error - This is solved by Fasttext ''') st.subheader(":blue[Inability to Preserve Semantic Meaning]") st.markdown(''' - It can't completely preserve semantic meaning (slightly preserves it) - Here based on count(no.of times the particular word is occuring) it can sometimes preserve semantic meaning - Based on uniqueness of the words the semantic meaning is preserved - More the uniqueness , more the documents will be far away - Less no.of unique words , it'll be close to each other ''') st.subheader(":blue[Lack of Sequential Information]") st.markdown(''' - Sequential information is not preserved ''') st.code(''' from sklearn.feature_extraction.text import CountVectorizer corpus = pd.DataFrame({"Review":["biryani is is is good","biryani is not good","biryani is too costly"]}) ## object of the CountVectorizer class cv = CountVectorizer(lowercase=True,strip_accents="unicode",analyzer="word",stop_words=stp,token_pattern=r"((?u)\b\w\w+\b))") cv.fit(corpus["Review"]) ### learning vocabulary vector = cv.transform(corpus["Review"]) ### it converts into vector form based on cv and vocabulary learned cv.get_feature_names_out() cv.vocabulary_ vector.toarray() ''') st.header("Binary Bag of Words(BBOW)") st.markdown(''' - Extension of Bag of Words(BOW) is Binary Bag of Words(BBOW) ''') st.markdown(""" ### πŸ› οΈ Steps in Binary Bag of Words(BBOW): - Create a vocabulary (set of unique words) - Each document is converted into vector form(d-dimension) - In bag of words the value is count , but in binary bag of words it tells whether the word is preseent or not - So, that it is way more easier to find the distance between vectors (here distance is nothing but no.of unique words) - If the unique words are more --> distance is high - Calculation of distance will be way more faster than bag of words - distance is total no.of unique words between two documents """) elif file_type == "Term Frequency - Inverse Document Frequency(TF-IDF)": st.title(":red[Term Frequency - Inverse Document Frequency(TF-IDF)]") st.markdown(""" ### πŸ“Œ What is TF-IDF ? - It is a type of vectorization technique where text is converted into a numerical vector. """) st.subheader(":violet[πŸ› οΈ Steps in TF-IDF]") st.markdown( """ """, unsafe_allow_html=True, ) st.markdown("
TF(wα΅’, dα΅’) = (Occurrences of wα΅’ in dα΅’) / (Total words in dα΅’)
", unsafe_allow_html=True) st.markdown( """ """, unsafe_allow_html=True, ) st.markdown("
IDF(wα΅’, C) = log(N/n)
", unsafe_allow_html=True) st.markdown( """ - N: Total number of documents in the corpus.
- n: Number of documents containing the word wα΅’.
- TF-IDF helps in understanding word significance while reducing the impact of commonly used words. """, unsafe_allow_html=True, ) st.markdown("

πŸ“Œ Example of TF-IDF

", unsafe_allow_html=True) st.markdown( """
Given a corpus with 3 documents:

d1: w1, w2, w3, w1 β†’ v1
d2: w1, w2, w2, w3, w4, w2, w3 β†’ v2
d3: w1, w5 β†’ v3

Vocabulary: {w1, w2, w3, w4, w5}
Vocabulary Size: 5 (d-dimension)
""", unsafe_allow_html=True, ) st.markdown("

πŸ“Š Term Frequency (TF) Calculation

", unsafe_allow_html=True) st.markdown( """ """, unsafe_allow_html=True, ) st.markdown( """
TF(w1, d1) = 2/4 = 0.5
TF(w2, d1) = 1/4 = 0.25
TF(w3, d1) = 1/4 = 0.25
TF(w4, d1) = 0/4 = 0
TF(w5, d1) = 0/4 = 0
""", unsafe_allow_html=True, ) st.markdown( """ """, unsafe_allow_html=True, ) st.markdown("

πŸ“‰ Inverse Document Frequency (IDF) Calculation

", unsafe_allow_html=True) st.markdown( """ """, unsafe_allow_html=True, ) st.markdown("

πŸ“Œ TF-IDF Calculation

", unsafe_allow_html=True) st.markdown( """ """, unsafe_allow_html=True, ) st.markdown( """
d1 β†’ v1 = [0, 0.04, 0.04, 0, 0] (TF * IDF values)
""", unsafe_allow_html=True, ) st.markdown( """ - The final TF-IDF values may be low, high, or even zero depending on term frequency and document frequency. """, unsafe_allow_html=True, ) st.markdown("

πŸ“Œ TF-IDF Key Insights

", unsafe_allow_html=True) st.markdown( """

πŸ“ˆ Case 1: High TF-IDF Values

""", unsafe_allow_html=True, ) st.markdown( """

πŸ“‰ Case 2: Low TF-IDF Values

""", unsafe_allow_html=True, ) st.markdown( """

πŸ“Š Understanding TF (Term Frequency)

""", unsafe_allow_html=True, ) st.markdown( """

πŸ“‰ Understanding IDF (Inverse Document Frequency)

""", unsafe_allow_html=True, ) st.markdown( """
When n is small:
- N/n increases β†’ log(N/n) increases ⬆️
- Word is rare in the corpus β†’ Higher importance in IDF

When n is large:
- N/n decreases β†’ log(N/n) decreases ⬇️
- Word is common β†’ Lower importance in IDF

When N = n: log(N/n) = 0 (word appears in every document)
""", unsafe_allow_html=True, ) st.markdown( """

πŸ“Œ TF-IDF Calculation

""", unsafe_allow_html=True, ) st.subheader(":red[Why log is used]") st.markdown("

πŸ“Œ Understanding TF-IDF Scaling

", unsafe_allow_html=True) st.markdown( """

Minimum and Maximum Values of N/n

""", unsafe_allow_html=True, ) st.markdown( """

IDF Dominance Over TF

""", unsafe_allow_html=True, ) st.markdown( """

How Log Solves IDF Dominance?

""", unsafe_allow_html=True, ) st.markdown( """
TF balances frequent words, while log(IDF) prevents rare-word dominance! πŸš€
""", unsafe_allow_html=True, ) st.subheader(":red[Advantages]") st.markdown(''' - Easy to implement - Can convert into tabular format - It gives importance to both frequently occuring word and rarely occuring in corpus ''') st.subheader(":red[Disadvantages]") st.subheader(":blue[Curse of Dimensionality]") st.markdown(''' - Document increases ↑ Vocabulary ↑ and vector increases ↑ dimensionality also increases ↑ - Ml performance decreases ↓ - as the dimensionality totally depends on vocabulary and it shootup as the document increases and different - As the corpus increases , vocabulary increases -- dimensionality increses ''') st.subheader(":blue[Sparsity]") st.markdown(''' - The vector which is created using BOW gives sparse vector - Entire data is given to any alogorithm and machine is going to learn fom data and algorithm it is baised towards zero values as the data is sparse data - This issue in ML is known as overfitting - It is solved in Deep learning ''') st.subheader(":blue[Out of Vocabulary Issue]") st.markdown(''' - Document only converted during training time and we're giving our own dataset - If the word is not present in our dataset while training it can't convert into vector format results in key error - This is solved by Fasttext ''') st.subheader(":blue[Inability to Preserve Semantic Meaning]") st.markdown(''' - It slightly preserves semantic meaning ''') st.subheader(":blue[Lack of Sequential Information]") st.markdown(''' - Sequential information is not preserved - Because in TF-IDF we're giving importance to words as we're doing word tokenization - In ML no algorithm is capable of preserving sequential information - This is only solved by Deep-learning concept - But by applying a trick to BOW/BBOW/TF-IDF we can slightly preserve sequential information - That technique is known as n-gram ''') st.header(":red[n-gram]") st.markdown(''' - n-gram default will always be 1-gram in BOW/BBOW/TF-IDF - Based on n-gram onlt it can create a vocabulary - n- gram is mostly used upto 1,2,3 gram only because as dimension increases ML performance decreases - n-gram is used to slightly preserve sequential information ''') st.code(''' from sklearn.feature_extraction.text import TfidfVectorizer\ corpus = pd.DataFrame({"Review":["biryani is is is is rΓ©sume is good","biryani biryani biryani is not good","biryani is too costly"]}) tf = TfidfVectorizer() vector = tf.fit_transform(corpus["Review"]) vector.toarray() tf.vocabulary_ ''')