NLP / Roadmap_of_NLP.py
Pasham123's picture
Rename Pages/Roadmap_of_NLP.py to Roadmap_of_NLP.py
a296b74 verified
import streamlit as st
st.markdown(
"""
<style>
/* App Background */
.stApp {
background: linear-gradient(to right , #00BFFF, #FF1493 ,#1E9DFF); /* Gradient dark professional background */
color: #ffffff;
padding: 20px;
}
/* Align content to the left */
.block-container {
text-align: left; /* Left align for content */
padding: 2rem; /* Padding for aesthetics */
}
/* Header and Subheader Text */
h1 {
color: #90EE90 !important; /* Custom styling for the main header */
font-family: 'Arial', sans-serif !important;
font-weight: bold !important;
text-align: center;
}
h2, h3, h4 {
color: #ADFF2F !important; /* Custom styling for subheaders */
font-family: 'Arial', sans-serif !important;
font-weight: bold !important;
}
/* Paragraph Text */
p {
color: #00DED1 !important; /* Custom styling for paragraphs */
font-family: 'Arial', sans-serif !important;
line-height: 1.6;
}
</style>
""",
unsafe_allow_html=True
)
st.markdown(
"""
<h1><center>RoadMap of NLP Project</center></h1>
""",
unsafe_allow_html=True
)
# Explanation for steps
st.markdown("<h5>Step 1: Understand the Problem Statement</h5>", unsafe_allow_html=True)
st.write("""
- Understand the problem.
- Either the client provides the problem or you create one.
""")
st.markdown("<h5>Step 2: Data Collection</h5>", unsafe_allow_html=True)
st.write("""
- Data Collection is a crucial step in any Natural Language Processing (NLP) project.
- The quality, quantity, and relevance of the data directly influence the performance of NLP models.
- In NLP, the data consists of text or speech and is often unstructured
""")
st.markdown("<h5>Step 3: Perform Simple EDA</h5>", unsafe_allow_html=True)
st.write("""
- To know about the quality of the cpllected text data.
-Our collected data contains raw data, so simple EDA is important to about the unwanted things in our data.
""")
st.markdown("<h5>Step 4:Pre-processing</h5>", unsafe_allow_html=True)
st.write("""
- Preprocessing is an essential step in the Natural Language Processing (NLP) pipeline.
- It involves transforming raw text data into a structured format that can be effectively used by machine learning models.
- Preprocessing ensures that the text is clean, consistent, and free from noise.
""")
st.markdown("<h5>Step 5: Perform Original EDA</h5>", unsafe_allow_html=True)
st.write("""
- Conduct in-depth exploration of pre-processed data tailored to the problem statement.
""")
st.markdown("<h5>Step 6: Feature Engineering</h5>", unsafe_allow_html=True)
st.write("""
- Create new features from the existing data to enhance the model's performance.
- How to convert our text data to numerical representation called as Vectors.
""")
st.markdown("<h5>Step 7: Train the Model</h5>", unsafe_allow_html=True)
st.write("""
- Train the model using feature-engineered data.
- Select appropriate machine learning algorithms.
""")
st.markdown("<h5>Step 8: Test the Model</h5>", unsafe_allow_html=True)
st.write("""
- Use a test dataset to evaluate the model's performance.
""")
st.markdown("<h5>Step 9: Deploy the Model</h5>", unsafe_allow_html=True)
st.write("""
- Make the model accessible via a web app or API.
""")
st.markdown("<h5>Step 10: Monitor the Model</h5>", unsafe_allow_html=True)
st.write("""
- Continuously track the model's performance and retrain as needed.
""")
st.image(image_url,use_container_width = True)
st.markdown("<p>In upcoming pages, you will learn about each step in detail!</p>", unsafe_allow_html=True)