Spaces:
Sleeping
Sleeping
| import streamlit as st | |
| import numpy as np | |
| import matplotlib.pyplot as plt | |
| import pandas as pd | |
| import cv2 | |
| st.set_page_config( | |
| page_title="HomePage", | |
| page_icon="🚀", | |
| layout="wide" | |
| ) | |
| # Global CSS for consistent styling across all pages | |
| st.markdown(""" | |
| <style> | |
| body, .stApp { | |
| color: #4F4F4F; /* Replace with your desired font color */ | |
| background-color: #FFFFFF; | |
| } | |
| h1, h2, h3, h4, h5, h6 { | |
| color: #BB3385; | |
| } | |
| p { | |
| color: #4F4F4F; | |
| } | |
| ul li { | |
| color: #4F4F4F; | |
| } | |
| </style> | |
| """, unsafe_allow_html=True) | |
| st.markdown( | |
| """ | |
| <style> | |
| .stApp { | |
| background-image: url("https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/DALL%C2%B7E%202024-12-03%2023.34.47%20-%20A%20simple%20and%20elegant%20background%20image%20for%20an%20AI-themed%20web%20application.%20The%20background%20should%20feature%20a%20soft%20gradient%20transitioning%20from%20white%20to%20ligh.webp"); | |
| background-size: cover; | |
| background-repeat: no-repeat; | |
| background-attachment: fixed; | |
| } | |
| </style> | |
| """, | |
| unsafe_allow_html=True | |
| ) | |
| # Ensure session state for navigation | |
| if "current_page" not in st.session_state: | |
| st.session_state.current_page = "main" | |
| # Navigation function | |
| def navigate_to(page_name): | |
| st.session_state.current_page = page_name | |
| # Main Page | |
| if st.session_state.current_page == "main": | |
| # Page Title | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h2 style="color: #BB3385;">What is Data?📊✨</h2> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Introduction Text | |
| st.write(""" | |
| **Data** is the measurements that are collected as a source of Information. | |
| It refers raw facts, figures, and observations that can be collected, stored, and processed. | |
| It has no meaning on its own until it is organized or analyzed to derive useful information.""") | |
| # Types of Data Section | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h2 style="color: #2a52be;">Types of Data</h2> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Radio Button for Data Type Selection | |
| data_type = st.radio( | |
| "Select the type of Data:", | |
| ("Structured Data", "Unstructured Data", "Semi-Structured Data") | |
| ) | |
| if data_type == "Structured Data": | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #e25822;">What is Structured Data?</h3> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Definition:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Structured data** refers to information that is organized and formatted in a predefined manner, making it easy to store, retrieve, and analyze. | |
| It is typically stored in tabular formats like rows and columns, where each field contains a specific type of information. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Characteristics:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| - Follows a fixed schema. | |
| - Can be easily searched using query languages like SQL. | |
| - Suitable for quantitative analysis. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Examples:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| A database of students with fields like ID, name, age, and gender: | |
| """) | |
| student_data = { | |
| "Id": [100, 101, 102, 103], | |
| "Name": ["Lakshmi Harika", "Varshitha", "Hari Chandan", "Shamitha"], | |
| "Age": [22, 23, 22, 23], | |
| "Gender": ["Female", "Female", "Male", "Female"] | |
| } | |
| df = pd.DataFrame(student_data) | |
| st.markdown(df.style.set_table_styles( | |
| [{'selector': 'thead th', 'props': 'font-weight: bold;'}] | |
| ).hide(axis="index").to_html(), unsafe_allow_html=True) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Data Formats in Structured Data:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write("Click to explore Structured Data Formats:") | |
| if st.button("Explore Excel"): | |
| navigate_to("explore_excel") | |
| elif data_type == "Unstructured Data": | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #e25822;">What is Unstructured Data?</h3> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Definition:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Unstructured data** refers to information that does not follow a predefined format or structure. | |
| It is typically raw data that lacks a clear, organized schema, making it harder to store and analyze using traditional tools. | |
| Examples include multimedia files (images, videos, audio), emails, and social media posts. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Characteristics:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| - Does not follow a specific schema or structure. | |
| - Cannot be stored in traditional tabular formats like rows and columns. | |
| - Requires advanced tools like machine learning or natural language processing (NLP) for analysis. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Examples:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| - **Images**: Photos, screenshots, or scanned documents. | |
| - **Audio**: Podcasts, voice recordings, or music files. | |
| - **Videos**: Recorded lectures, surveillance footage, or YouTube videos. | |
| - **Text**: Emails, social media posts, and blog articles. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Data Formats in UnStructured Data:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write("Click to explore Unstructured Data Formats:") | |
| col1, col2, col3 = st.columns(3) | |
| with col1: | |
| if st.button("📸 Images & Videos"): | |
| navigate_to("explore_images_video") | |
| with col2: | |
| if st.button("🎵 Audio"): | |
| navigate_to("explore_audio") | |
| with col3: | |
| if st.button("✍️ Text"): | |
| navigate_to("explore_text") | |
| elif data_type == "Semi-Structured Data": | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #e25822;">What is Semi-Structured Data?</h3> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Definition:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Semi-Structured data** refers to information that does not follow a strict tabular format but contains tags or markers to separate data elements. | |
| This type of data is more flexible than structured data but still organized enough to allow for easier analysis than unstructured data. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Characteristics:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| - Contains markers or tags (e.g., XML, JSON keys) to provide structure. | |
| - More flexible than structured data, allowing for varying schemas. | |
| - Easier to process than unstructured data. | |
| - Can store hierarchical relationships. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Examples:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| Examples of semi-structured data include: | |
| - **CSV**: Comma-separated values in plain-text files. | |
| - **JSON**: A lightweight data-interchange format used in web applications. | |
| - **XML**: Extensible Markup Language for structured document encoding. | |
| - **HTML**: Markup language for web pages. | |
| """) | |
| st.markdown(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Data Formats in Semi-Structured Data:</h4> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.write("Click to explore Semi-Structured Data Formats:") | |
| col1, col2, col3, col4 = st.columns(4) | |
| with col1: | |
| if st.button("📄 CSV"): | |
| navigate_to("explore_csv") | |
| with col2: | |
| if st.button("📋 JSON"): | |
| navigate_to("explore_json") | |
| with col3: | |
| if st.button("📜 XML"): | |
| navigate_to("explore_xml") | |
| with col4: | |
| if st.button("🌐 HTML"): | |
| navigate_to("explore_html") | |
| # Pages for Each Format | |
| elif st.session_state.current_page == "explore_excel": | |
| # Section about Excel | |
| st.markdown(""" | |
| <h2 style="color: #BB3385;">Excel</h2> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| - **Excel** is a powerful spreadsheet software developed by Microsoft. | |
| - It is widely used for: | |
| - Data organization | |
| - Analysis | |
| - Visualization | |
| - Key features include: | |
| - Storing data in tabular format | |
| - Performing complex calculations | |
| - Creating charts | |
| - Applying various data manipulation techniques | |
| - Excel is an essential tool for managing and analyzing structured data in various industries. | |
| """) | |
| st.markdown(""" | |
| <h3 style="color: #5b2c6f;">Reading Excel Files in Python</h3> | |
| """, unsafe_allow_html=True) | |
| # Code example | |
| st.code(""" | |
| import pandas as pd | |
| # Read the Excel file | |
| data = pd.read_excel('path_to_file.xlsx') | |
| print(data.head()) # displays first 5 rows in excel file | |
| """, language="python") | |
| st.write("### Working with Sheets in Excel") | |
| # Importing a Single Sheet | |
| st.write("#### Importing a Single Excel Sheet") | |
| st.code(""" | |
| df = pd.read_excel('path_to_file.xlsx', sheet_name=0) | |
| print(df) | |
| """, language="python") | |
| # Importing Multiple Sheets | |
| st.write("#### Importing Multiple Sheets from Excel") | |
| st.code(""" | |
| df_dict = pd.read_excel('path_to_file.xlsx', sheet_name=[0, 1, 2]) | |
| for sheet, data in df_dict.items(): | |
| print(f"Sheet: {sheet}") | |
| print(data.head()) | |
| """, language="python") | |
| st.write("### Exporting Data to Excel Files") | |
| # Exporting a Single DataFrame to Excel | |
| st.write("#### Exporting a Single DataFrame") | |
| st.code(""" | |
| data = pd.DataFrame({ | |
| 'name': ['a', 'b', 'c', 'd'], | |
| 'age': [12, 23, 44, 43] | |
| }) | |
| # Export the DataFrame to an Excel file | |
| data.to_excel('single_sheet_output.xlsx', index=False) | |
| """, language="python") | |
| # Exporting Multiple DataFrames to Multiple Sheets | |
| st.write("#### Exporting Multiple DataFrames to Different Sheets") | |
| st.code(""" | |
| data1 = pd.DataFrame({ | |
| 'name': ['a', 'b', 'c', 'd'], | |
| 'age': [12, 23, 44, 43] | |
| }) | |
| data2 = pd.DataFrame({ | |
| 'maths': [43, 32, 45, 45], | |
| 'science': [32, 54, 45, 13] | |
| }) | |
| data3 = pd.DataFrame({ | |
| 'hindi': [32, 45, 53, 53], | |
| 'english': [53, 32, 24, 65] | |
| }) | |
| # Export multiple DataFrames to an Excel file with multiple sheets | |
| with pd.ExcelWriter('multi_sheet_output.xlsx') as writer: | |
| data1.to_excel(writer, sheet_name='Personal Info', index=False) | |
| data2.to_excel(writer, sheet_name='Academic Scores', index=False) | |
| data3.to_excel(writer, sheet_name='Language Scores', index=False) | |
| """, language="python") | |
| st.write("### Common Issues with Excel Files") | |
| # 1. File Format Compatibility | |
| st.write("#### 1. File Format Compatibility") | |
| st.write("Excel files may come in different formats like `.xls` and `.xlsx`, which can lead to compatibility issues.") | |
| st.code(""" | |
| data = pd.read_excel('file.xls', engine='xlrd') # For .xls files | |
| data = pd.read_excel('file.xlsx', engine='openpyxl') # For .xlsx files | |
| print(data.head()) | |
| """, language="python") | |
| # 2. Encoding Issues | |
| st.write("#### 2. Encoding Issues") | |
| st.write("Sometimes Excel files might have special characters that cause encoding problems.") | |
| st.code(""" | |
| data = pd.read_excel('file.xlsx', encoding='utf-8') # Replace with the correct encoding | |
| print(data.head()) | |
| """, language="python") | |
| # 3. Missing or Incomplete Data | |
| st.write("#### 3. Missing or Incomplete Data") | |
| st.write("Missing values can lead to errors during data processing.") | |
| st.code(""" | |
| data = pd.read_excel('file.xlsx') | |
| data.fillna(0, inplace=True) # Replace NaN values with 0 or other defaults | |
| print(data.head()) | |
| """, language="python") | |
| # 4. Large File Sizes | |
| st.write("#### 4. Large File Sizes") | |
| st.write("Large Excel files may cause performance issues or run out of memory.") | |
| st.code(""" | |
| chunk_size = 1000 | |
| for chunk in pd.read_excel('large_file.xlsx', chunksize=chunk_size): | |
| print(chunk.head()) | |
| """, language="python") | |
| # 5. Sheet Name Selection | |
| st.write("#### 5. Sheet Name Selection") | |
| st.write("Excel files may have multiple sheets, and reading the wrong one can lead to incorrect analysis.") | |
| st.code(""" | |
| data = pd.read_excel('file.xlsx', sheet_name='Sheet1') | |
| print(data.head()) | |
| """, language="python") | |
| # 6. Data Type Conversion | |
| st.write("#### 6. Data Type Conversion") | |
| st.write("Excel files may have columns with inconsistent or incorrect data types.") | |
| st.code(""" | |
| data = pd.read_excel('file.xlsx') | |
| data['column_name'] = data['column_name'].astype(int) | |
| print(data.dtypes) | |
| """, language="python") | |
| # 8. Merged Cells | |
| st.write("#### 7. Merged Cells") | |
| st.write("Merged cells in Excel can lead to missing or misaligned data.") | |
| st.code(""" | |
| data = pd.read_excel('file.xlsx', merge_cells=False) | |
| print(data.head()) | |
| """, language="python") | |
| # 10. Date Parsing | |
| st.write("#### 8. Date Parsing") | |
| st.write("Dates in Excel files may not be interpreted correctly.") | |
| st.code(""" | |
| data = pd.read_excel('file.xlsx', parse_dates=['date_column']) | |
| print(data.dtypes) | |
| """, language="python") | |
| col1 = st.columns(1) | |
| with col1: | |
| if st.button("⬅️ Back to Previous Page"): | |
| navigate_to("main") | |
| elif st.session_state.current_page == "explore_images_video": | |
| st.markdown(""" | |
| <h2 style="color: #BB3385;">Introduction to Images and Videos📸🖼️</h2> | |
| """, unsafe_allow_html=True) | |
| # Subheading 1: What is an Image? | |
| st.write(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #5b2c6f;">What is an Image?</h3> | |
| <p style="font-size: 16px; color: #333;"> | |
| An image is a two-dimensional representation of the visible light spectrum, often captured | |
| using devices like cameras or scanners. It can store details such as <strong>colors</strong>, <strong>shapes</strong>, and <strong>textures</strong>, | |
| enabling us to visually interpret and analyze information. | |
| Common formats include JPEG, PNG, and BMP. | |
| </p> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Subheading 2: What is a Video? | |
| st.write(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #5b2c6f;">What is a Video?</h3> | |
| <p style="font-size: 16px; color: #333;"> | |
| A video is a collection of images, often referred to as frames, displayed one after another quickly | |
| to show continuous movement. Each frame captures a moment in time, and when these frames are played | |
| sequentially, they show continuous movement. Videos typically have a frame rate (e.g., 30 frames | |
| per second or 60 frames per second), which determines how many frames are displayed per second. | |
| Common video formats include MP4, AVI, and MOV. | |
| </p> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Subheading 3: Why is an Image Called a Grid-Like Structure? | |
| st.write(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #5b2c6f;">Why is an Image Called a Grid-Like Structure?</h3> | |
| <p style="font-size: 16px; color: #333;"> | |
| Images are called <strong>grid-like structures</strong> because they are composed of <strong>pixels</strong> arranged in rows and columns, | |
| forming a rectangular grid. Each <strong>pixel</strong> contains information about <strong>shape</strong>, <strong>color</strong>, and <strong>patterns</strong>, which | |
| together define the image's content. | |
| The total number of <strong>pixels</strong> is determined by the image's height and width (resolution), and a higher resolution provides better clarity. | |
| </p> | |
| <p style="font-size: 16px; color: #333;"> | |
| In images, <strong>pixels</strong> act as features, and the entire grid represents a single data point. This combination | |
| of features and data points gives images their grid-like nature. | |
| </p> | |
| <p style="font-size: 16px; color: #333;"> | |
| While images and tabular data are both grid-like, the difference lies in interpretation: in images, the | |
| grid represents one data point, while in tabular data, rows represent data points, and columns represent features. | |
| </p> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Interactive Pixel Grid Section | |
| st.subheader("Interactive Pixel Grid") | |
| # User Input for Height and Width | |
| height = st.number_input("Enter Image Height (pixels):", min_value=1, max_value=50, value=10, step=1) | |
| width = st.number_input("Enter Image Width (pixels):", min_value=1, max_value=50, value=10, step=1) | |
| # Display Resolution | |
| resolution = height * width | |
| st.write(f"**Image Resolution**: {resolution} pixels") | |
| # Generate and Display Pixel Grid | |
| st.write("**Pixel Grid Visualization:**") | |
| grid = np.random.rand(int(height), int(width)) # Generate random grid values | |
| fig, ax = plt.subplots() | |
| cax = ax.imshow(grid, cmap="Pastel1") | |
| plt.colorbar(cax, ax=ax) # Add color bar for context | |
| ax.set_title("Pixel Grid") | |
| ax.set_xlabel("Width(pixels)", fontsize=8) # Set smaller font size | |
| ax.set_ylabel("Height(pixels)", fontsize=8) # Set smaller font size | |
| # Render the Plot | |
| st.pyplot(fig) | |
| # Section: What are Color Spaces? | |
| st.write(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h3 style="color: #5b2c6f;">What are Color Spaces?</h3> | |
| <p style="font-size: 16px; color: #333;"> | |
| A <strong>color space</strong> is a technique used to represent the <strong>colors of an image</strong> in a | |
| numerical format. It allows us to preserve the <strong>color information</strong> while converting it into a | |
| form that machines can understand. Since machines cannot <strong>"see"</strong> images as humans do, they interpret | |
| <strong>numerical values</strong>. Therefore, color spaces are crucial for converting images into a format | |
| that can be processed by a machine. | |
| </p> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Section: Example of How ML Models Work with Images | |
| st.write(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #e25822;">For Example:</h4> | |
| <p style="font-size: 16px; color: #333;"> | |
| Imagine you're building a <strong>machine learning model</strong> to classify images of | |
| <strong>dogs and cats</strong>. You provide the model with images, but since the machine cannot understand | |
| images directly, you need to convert them into <strong>numerical data</strong>. This is where | |
| <strong>color spaces</strong> play a vital role. They help convert the <strong>color information</strong> in | |
| the images into numbers that the machine can process, allowing it to <strong>learn from the data</strong> | |
| and make accurate predictions. | |
| </p> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| # Section: Common Color Spaces | |
| st.write(""" | |
| <div style="text-align: left; margin-top: 20px;"> | |
| <h4 style="color: #5b2c6f;">Common Color Spaces</h4> | |
| <p style="font-size: 16px; color: #333;"> | |
| These are some of the <strong>common color spaces</strong> used in <strong>image processing</strong>: | |
| </p> | |
| <ol style="font-size: 16px; color: #333;"> | |
| <li><strong>Black and White</strong></li> | |
| <li><strong>Grayscale</strong></li> | |
| <li><strong>Red, Green, Blue (RGB)</strong></li> | |
| </ol> | |
| </div> | |
| """, unsafe_allow_html=True) | |
| st.subheader("What is Black and White Color Space?") | |
| st.write(""" | |
| Black and White color space, also known as binary color space, represents an image using only two colors: | |
| **black** and **white**. | |
| - **0** represents **black**. | |
| - **1** or **255** (depending on the encoding) represents **white**. | |
| Each pixel in this color space is either completely black or completely white. | |
| Black and White color space eliminates all color information, focusing entirely on light intensity. | |
| """) | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/pages/Screenshot%202024-12-23%20175703.png", | |
| caption="Black and White Color Space.", | |
| use_container_width=True) | |
| # Section: What is Grayscale Color Space? | |
| st.subheader("What is Grayscale Color Space?") | |
| st.write(""" | |
| Grayscale color space represents an image using different shades of gray, ranging from **black** to **white**. | |
| - **0** represents **black** (no light intensity). | |
| - **255** represents **white** (maximum light intensity). | |
| - Values between **0 and 255** represent varying shades of gray. | |
| Grayscale eliminates color information, focusing entirely on the intensity of light in an image. Each pixel has only one intensity value, making it a simpler and more compact representation compared to color images. | |
| """) | |
| # Create grayscale gradient with labeled intensity values | |
| gradient = np.linspace(0, 255, 256) # Generate gradient values | |
| gradient = np.tile(gradient, (10, 1)) # Repeat the gradient to make it visually clear | |
| # Plot the gradient | |
| fig, ax = plt.subplots(figsize=(8, 1), facecolor='none') # Reduce height by half | |
| ax.imshow(gradient, cmap='gray', aspect='auto') | |
| ax.set_xticks(np.linspace(0, 255, 11)) # Set ticks for every 25.5 (0, 25, ..., 255) | |
| ax.set_xticklabels([str(int(x)) for x in np.linspace(0, 255, 11)], fontsize=8, color='red') # Adjust font size | |
| ax.set_yticks([]) # Remove y-axis ticks | |
| ax.set_title("Grayscale Representation", fontsize=10) | |
| # Save the figure with a transparent background | |
| plt.savefig('grayscale_representation.png', transparent=True) | |
| # Render the plot in Streamlit | |
| st.pyplot(fig) | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/gray_scale.jpg", | |
| caption="Gray Scale Color Space.", | |
| use_container_width=True) | |
| st.subheader("What is RGB Color Space?") | |
| st.write(""" | |
| RGB color space represents an image using three primary colors: **Red**, **Green**, and **Blue**. These colors form the basis of digital images and can be combined in different intensities to create a wide range of colors. | |
| A colored image in the RGB color space is split into three separate channels: | |
| - **Red Channel**: Contains the intensity of the red color at each pixel. | |
| - **Green Channel**: Contains the intensity of the green color at each pixel. | |
| - **Blue Channel**: Contains the intensity of the blue color at each pixel. | |
| Each of these channels is represented as a **2D array**, where: | |
| - Each pixel in the 2D array contains a value ranging from **0** (no intensity) to **255** (maximum intensity) for that color. | |
| By combining the three channels, a wide range of colors can be formed. For example: | |
| - **(255, 0, 0)** represents pure **Red**. | |
| - **(0, 255, 0)** represents pure **Green**. | |
| - **(0, 0, 255)** represents pure **Blue**. | |
| - **(255, 255, 255)** represents **White**, where all channels are at maximum intensity. | |
| - **(0, 0, 0)** represents **Black**, where all channels have no intensity. | |
| - Combining colors, such as **Red + Green = Yellow**, **Green + Blue = Cyan**, and **Blue + Red = Magenta**, creates even more colors. By adjusting the intensity of each channel, millions of unique colors can be generated. | |
| Computers interpret RGB images as **3D arrays**: | |
| - The **width** and **height** of the 3D array correspond to the dimensions of the image. | |
| - The **depth** of the 3D array corresponds to the number of color channels. | |
| Altogether, these three channels combine to form a complete color image, enabling vibrant, precise, and dynamic representations of colors in digital media. | |
| """) | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/rgb_1.jpg", | |
| use_container_width=True) | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/rgb_2.jpg", | |
| use_container_width=True) | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/rgb_3.jpg", | |
| use_container_width=True) | |
| st.write(""" | |
| In the next section, we'll dive into the exciting world of **image processing using OpenCV**. We'll cover how to: | |
| - **Read, display, and manipulate images** programmatically. | |
| - Understand the **core operations** used in computer vision. | |
| - Transform images to uncover hidden insights. | |
| Curious to see how?👇Click **Image Operations with OpenCV** to start your journey into OpenCV Basics!🚀 | |
| """) | |
| col1, col2 = st.columns(2) | |
| with col1: | |
| if st.button("⬅️ Back to Previous Page"): | |
| navigate_to("main") | |
| with col2: | |
| if st.button("➡️ Image Operations with OpenCV"): | |
| navigate_to("opencv_operations") | |
| elif st.session_state.current_page == "opencv_operations": | |
| # Introduction to OpenCV Page | |
| st.markdown(""" | |
| <h2 style="color: #BB3385;">OpenCV(Open Source Computer Vision Library)</h2> | |
| """, unsafe_allow_html=True) | |
| # Informative Content | |
| st.write(""" | |
| Before diving into OpenCV basics, let's understand a few key points: | |
| - In Python, we have several libraries to work with images. One of the most powerful and popular libraries is **OpenCV**. | |
| - With **OpenCV**, we can provide machines with **artificial vision**, enabling them to perceive and process visual information. | |
| - OpenCV allows us to work with both **images and videos**, making it a versatile tool for various computer vision applications. | |
| """) | |
| # What is OpenCV Section | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">What is OpenCV?</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| OpenCV, short for **Open Source Computer Vision Library**, is a popular open-source library designed for real-time computer vision and image processing tasks. | |
| **Key Points**: | |
| - **Purpose**: OpenCV helps provide artificial vision to machines, enabling them to understand and process visual information like images and videos. | |
| - **Features**: OpenCV allows you to work with images and videos for tasks like transformation, filtering, and enhancement. It also supports real-time processing, making it ideal for dynamic applications. | |
| - **Applications**: Commonly used in tasks such as image recognition, motion detection, video analytics, and robotics. | |
| OpenCV is cross-platform, free to use, and designed for high performance, making it an essential tool for computer vision projects. | |
| """) | |
| # Installing OpenCV Section | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Installing OpenCV</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| To start working with OpenCV, you need to install it in your Python environment. Here’s how: | |
| """) | |
| st.write("1. Install OpenCV using pip:") | |
| st.code("pip install opencv-python", language="bash") | |
| st.write("2. Import OpenCV in your Python script:") | |
| st.code(""" | |
| import cv2 | |
| print(cv2.__version__) # This will display the installed OpenCV version | |
| """, language="python") | |
| st.write("With OpenCV installed, let's learn basic image handling in OpenCV.") | |
| st.write("## Basic Operations in OpenCV") | |
| # Heading for Reading Images with Custom Color | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Reading an Image</h3> | |
| """, unsafe_allow_html=True) | |
| # About imread() function | |
| st.write(""" | |
| To read an image and convert it into a machine-readable format, we use the **imread()** function from the cv2 module. | |
| It reads the image file and converts it into a numerical array, where each element represents pixel intensity. | |
| """) | |
| # Code example | |
| st.code(""" | |
| # Read the image | |
| img = cv2.imread('path_to_image.jpg') # Replace 'path_to_image.jpg' with the image file path | |
| # Display the numerical matrix | |
| print(img) # This will print the image as an array of pixel values | |
| """, language="python") | |
| # Explanation for Grayscale Conversion | |
| st.write(""" | |
| By default, the `imread()` function reads an image as a 3D array in the RGB color space. | |
| To convert the image to grayscale, pass `0` as the second argument to the `imread()` function. This will return a 2D array where each pixel value represents intensity. | |
| """) | |
| # Code example for Grayscale Conversion | |
| st.code(""" | |
| # Read the image in grayscale | |
| gray_img = cv2.imread('path_to_image.jpg', 0) # Replace 'path_to_image.jpg' with your image file path | |
| # Display the numerical matrix for the grayscale image | |
| print(gray_img) # This will print the 2D array representing pixel intensity | |
| """, language="python") | |
| # Displaying Images with OpenCV in Custom Color | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Displaying Images with OpenCV</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation of the functions | |
| st.write(""" | |
| After creating or reading an image, we can display it using OpenCV. Here’s how the key functions work together: | |
| #### imshow() | |
| - The `imshow()` function creates a **pop-up window** to display the image. | |
| - Internally, it converts the numerical array into a visual image. | |
| - **Parameters**: | |
| 1. `Window Name`: Title of the pop-up window (string). | |
| 2. `Image Array`: The array representing the image. | |
| #### waitKey() | |
| - **Purpose**: Waits for a key press and adds a delay before closing the pop-up window. | |
| - **Key Modes**: | |
| - `waitKey(0)` or `waitKey()`: Keeps the window open indefinitely until a key is pressed. | |
| - `waitKey(n)`: Delays for `n` milliseconds, closing the window after the delay if no key is pressed. | |
| #### destroyAllWindows() | |
| - The `destroyAllWindows()` function is used to close the pop-up window from **RAM**. | |
| - It ensures that all windows opened by `imshow()` are completely removed. | |
| - Without this, the window may stay allocated in memory even after being closed visually. | |
| These three functions must work together to display and manage images effectively. | |
| """) | |
| st.code(""" | |
| # imshow() | |
| cv2.imshow(window_name, img_array) | |
| # window_name: The title of the pop-up window | |
| # img_array: The image data (Array) | |
| # waitKey() | |
| cv2.waitKey(delay_in_milliseconds) | |
| # delay_in_milliseconds: Time in milliseconds to keep the window open | |
| # Use 0 for infinite delay until a key is pressed | |
| # destroyAllWindows() | |
| cv2.destroyAllWindows() | |
| # This ensures all windows opened by imshow() are cleared from RAM | |
| """, language="python") | |
| # Heading for Saving Images | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Saving an Image</h3> | |
| """, unsafe_allow_html=True) | |
| # About imwrite() function | |
| st.write(""" | |
| To save an image file in OpenCV, we use the **imwrite()** function. | |
| It converts the numerical array (image data) back into an image file format, such as `.jpg`, `.png`, or `.bmp`. | |
| """) | |
| # Code example | |
| st.code(""" | |
| # Example: Save an image | |
| cv2.imwrite('saved_image.jpg', image_array) # 'saved_image.jpg' is the name of the output file | |
| print("Image saved successfully!") | |
| """, language="python") | |
| # Add a link to the OpenCV documentation | |
| st.markdown(""" | |
| For more detailed information on OpenCV functions and tutorials, visit the official OpenCV documentation: | |
| [OpenCV Documentation](https://docs.opencv.org/4.x/) | |
| """) | |
| st.write(""" | |
| In the next section, we'll take a closer look at **image creation and manipulation using OpenCV**. We'll discuss: | |
| - **Creating different types of images** (black-and-white, grayscale, and RGB). | |
| - **Splitting images** into individual channels. | |
| - **Converting images** between various color spaces. | |
| Curious to learn more?👇Click **Explore Image Creation and Manipulation** to continue your journey with OpenCV! 🚀 | |
| """) | |
| col1, col2, col3 = st.columns(3) | |
| with col2: | |
| if st.button("📸 Images & Videos"): | |
| navigate_to("explore_images_video") # Main page: Images & Videos | |
| with col1: | |
| if st.button("⬅️ Image Operations with OpenCV"): | |
| navigate_to("opencv_operations") # Previous page: Image Operations with OpenCV | |
| with col3: | |
| if st.button("➡️ Explore Image Creation and Manipulation"): | |
| navigate_to("image_operations") | |
| elif st.session_state.current_page == "image_operations": | |
| # Heading for the section | |
| st.markdown(""" | |
| <h2 style="color: #BB3385;">Creating, Splitting, and Converting Images with OpenCV</h2> | |
| """, unsafe_allow_html=True) | |
| # Short introduction to the section | |
| st.write(""" | |
| In this section, we’ll learn how to create different types of images, split them into their color channels, and convert between various color spaces to manipulate images more effectively. | |
| """) | |
| # Heading for Creating Black and White Image | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Creating a Black and White Image</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation | |
| st.write(""" | |
| In OpenCV, black and white images are created by filling a matrix with pixel values: | |
| - **Black image**: All pixel values are set to 0. | |
| - **White image**: All pixel values are set to 255. | |
| """) | |
| # Code example | |
| st.code(""" | |
| white_img = np.full((500, 500), 255, dtype=np.uint8) # Create a white image | |
| black_img = np.zeros((500, 500), dtype=np.uint8) # Create a black image | |
| # Display the images | |
| cv2.imshow("White", white_img) | |
| cv2.imshow("Black", black_img) | |
| cv2.waitKey(0) # 0 means infinite delay | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Heading for Creating Grayscale Image | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Creating a Grayscale Image</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation | |
| st.write(""" | |
| In OpenCV, grayscale images are created by filling a matrix with pixel intensity values. The values range from 0 (black) to 255 (white). | |
| """) | |
| # Code example | |
| st.code(""" | |
| gray_img = np.full((500, 500), 127, dtype=np.uint8) # Create a grayscale image (127 represents medium gray) | |
| # Display the grayscale image | |
| cv2.imshow("Grayscale", gray_img) | |
| cv2.waitKey(0) # 0 means infinite delay | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Heading for cv2.merge() function | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Merging Color Channels</h3> | |
| """, unsafe_allow_html=True) | |
| # About cv2.merge() function | |
| st.write(""" | |
| To combine multiple single-channel images (like Red, Green, and Blue) into a single multi-channel image, we use the **cv2.merge()** function. | |
| This function merges individual color channels into a complete color image. | |
| """) | |
| # Syntax example for cv2.merge() | |
| st.code(""" | |
| # Merging individual color channels (Blue, Green, Red) | |
| merged_image = cv2.merge([blue_channel, green_channel, red_channel]) | |
| # blue_channel,green_channel,red_channel are single-channel images representing individual color channels(Blue, Green, Red) | |
| """, language="python") | |
| # Heading for Creating RGB Image | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Creating a Colored RGB Image</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation | |
| st.write(""" | |
| To create a colored image, we use individual color channels (Red, Green, Blue) and merge them using `cv2.merge()`. | |
| In this example: | |
| - The **Blue channel** is filled with 255 (full intensity). | |
| - The **Green channel** is set to 0 (no intensity). | |
| - The **Red channel** is also set to 0 (no intensity). | |
| The channels are then merged into a single RGB image, which is displayed using OpenCV. | |
| """) | |
| # Code example | |
| st.code(""" | |
| # Create individual color channels | |
| b = np.full((300, 300), 255, dtype=np.uint8) # Blue channel | |
| g = np.zeros((300, 300), dtype=np.uint8) # Green channel | |
| r = np.zeros((300, 300), dtype=np.uint8) # Red channel | |
| # Merge the color channels to create RGB images | |
| b_img = cv2.merge([b, g, r]) # Blue image | |
| g_img = cv2.merge([g, b, r]) # Green image | |
| r_img = cv2.merge([r, g, b]) # Red image | |
| # Display the images | |
| cv2.imshow("Blue", b_img) | |
| cv2.imshow("Green", g_img) | |
| cv2.imshow("Red", r_img) | |
| cv2.waitKey(0) # Wait until a key is pressed | |
| cv2.destroyAllWindows() # Close all OpenCV windows | |
| """, language="python") | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/Merging%20rgb.png", | |
| use_container_width=True) | |
| # Heading for Splitting Channels | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Splitting Channels</h3> | |
| """, unsafe_allow_html=True) | |
| # About cv2.split() function | |
| st.write(""" | |
| The `cv2.split()` function in OpenCV is used to divide an image into its individual color channels. | |
| It creates separate single-channel arrays for each color, allowing you to work with them independently. | |
| For example, it can split an RGB image into its Red, Green, and Blue channels. | |
| """) | |
| # Syntax for cv2.split() function | |
| st.code(""" | |
| # Syntax for cv2.split() | |
| channels = cv2.split(image) | |
| # image: The input image (e.g., an RGB image). | |
| # channels: A list of single-channel images (e.g., Blue, Green, Red). | |
| """, language="python") | |
| # Heading for the section | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Splitting and Merging Color Channels</h3> | |
| """, unsafe_allow_html=True) | |
| # Code Example for Splitting and Merging Color Channels | |
| st.code(""" | |
| img = cv2.imread(r"P:\\BSG(P)\\7b144ce3dff5652ff59f2eb694eba472.jpg") # Read the image | |
| b, g, r = cv2.split(img) # Split the image into Blue, Green, and Red channels | |
| zeros = np.zeros(img.shape[:-1], dtype=np.uint8) # Create a zeros array to hold the empty channels | |
| blue_channel = cv2.merge([b, zeros, zeros]) # Merge the Blue channel with zeros for Green and Red | |
| green_channel = cv2.merge([zeros, g, zeros]) # Merge the Green channel with zeros for Blue and Red | |
| red_channel = cv2.merge([zeros, zeros, r]) # Merge the Red channel with zeros for Blue and Green | |
| # Display the individual color channels and the original image | |
| cv2.imshow("Blue_channel", blue_channel) | |
| cv2.imshow("Green_channel", green_channel) | |
| cv2.imshow("Red_channel", red_channel) | |
| cv2.imshow("Original_img", cv2.merge([b, g, r])) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows()""", language="python") | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/splitting_rgb_img.png", | |
| use_container_width=True) | |
| st.write("Once you upload an image, it will be split into its color channels (Blue, Green, and Red), with each channel displayed separately. You can then download the processed image.") | |
| # Allow user to upload an image | |
| uploaded_file = st.file_uploader("Upload an image", type=["jpg", "png", "jpeg"]) | |
| if uploaded_file is not None: | |
| # Convert the uploaded image to an OpenCV-compatible format | |
| image = np.array(bytearray(uploaded_file.read()), dtype=np.uint8) | |
| img = cv2.imdecode(image, 1) # Decode into an image | |
| # Split the image into Blue, Green, and Red channels | |
| b, g, r = cv2.split(img) | |
| # Create a zeros array to hold the empty channels | |
| zeros = np.zeros(img.shape[:-1], dtype=np.uint8) | |
| # Merge the Blue channel with zeros for Green and Red | |
| blue_channel = cv2.merge([b, zeros, zeros]) | |
| green_channel = cv2.merge([zeros, g, zeros]) | |
| red_channel = cv2.merge([zeros, zeros, r]) | |
| # Display the images with captions | |
| st.image(blue_channel, caption="Blue Channel", channels="BGR", use_container_width=True) | |
| st.image(green_channel, caption="Green Channel", channels="BGR", use_container_width=True) | |
| st.image(red_channel, caption="Red Channel", channels="BGR", use_container_width=True) | |
| # Merge the channels back together for the original image | |
| original_img = cv2.merge([b, g, r]) | |
| # Display the original image | |
| st.image(original_img, caption="Original Image", channels="BGR", use_container_width=True) | |
| # Optionally, provide a download link for the processed image | |
| st.download_button( | |
| label="Download Merged Image", | |
| data=cv2.imencode('.jpg', original_img)[1].tobytes(), | |
| file_name="merged_image.jpg", | |
| mime="image/jpeg" | |
| ) | |
| else: | |
| st.write("Please upload an image to proceed.") | |
| # Heading for cv2.cvtColor() function | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Converting Color Spaces</h3> | |
| """, unsafe_allow_html=True) | |
| # About cv2.cvtColor() function | |
| st.write(""" | |
| The **`cv2.cvtColor()`** function in OpenCV is used to convert an image from one color space to another. | |
| This function is widely used for various color space transformations, such as converting a color image to grayscale or converting between RGB and HSV. | |
| """) | |
| # Syntax example for cv2.cvtColor() | |
| st.code(""" | |
| # Converting an RGB image to Grayscale | |
| gray_img = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)""", language="python") # Convert from BGR to Grayscale | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/rgb_to_grayscale.png", | |
| use_container_width=True) | |
| st.write(""" | |
| In the next section, we will dive into **video processing using OpenCV**. We will explore how to: | |
| - Use various OpenCV functions for handling video data. | |
| - Play videos using OpenCV. | |
| - Capture images from live video streams. | |
| Stay tuned for an exciting exploration of video handling! | |
| """) | |
| col1, col2, col3 = st.columns(3) | |
| # Column 1 - Button to go to the Image operations page | |
| with col1: | |
| if st.button("⬅️ Image Operations with OpenCV"): | |
| navigate_to("opencv_operations") | |
| # Column 2 - Button for the main page (Images & Videos) | |
| with col2: | |
| if st.button("📸 Images & Videos"): | |
| navigate_to("explore_images_video") # Main page: Images & Videos | |
| # Column 3 - Button to go to the Videos page | |
| with col3: | |
| if st.button("➡️ Video Processing with OpenCV"): | |
| navigate_to("video_processing") # Videos page | |
| elif st.session_state.current_page == "video_processing": | |
| # Heading for Introduction to Video Processing | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Introduction to Video Processing</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation about Video Processing | |
| st.write(""" | |
| In computer vision, **video processing** refers to the analysis and manipulation of video data, which is essentially a series of images (frames) displayed in sequence. Each frame is processed individually, and the sequence is used to analyze changes or actions over time. | |
| Video processing allows us to work with various types of video data, including video files or real-time video streams. Using OpenCV, we can read, display, manipulate, and save video files, as well as capture video from a camera or webcam. | |
| """) | |
| # Heading for How OpenCV Handles Videos | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">How OpenCV Handles Videos</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation about How OpenCV Handles Videos | |
| st.write(""" | |
| OpenCV provides simple and efficient methods to handle videos. Videos are essentially a sequence of images (frames) shown in rapid succession. OpenCV reads and processes each frame of the video in real time, much like how it handles individual images. | |
| To work with videos in OpenCV, the primary function is **`cv2.VideoCapture()`**, which allows you to: | |
| - **Load** video files or live video streams. | |
| - **Read** individual frames from the video. | |
| - **Display** the frames in a window. | |
| - **Process** each frame just like an image. | |
| Once the video is loaded, OpenCV processes each frame in a loop until the video ends or the user stops it. You can apply image processing techniques to each frame, such as transformations, filtering, or object detection, before displaying or saving the modified video. | |
| """) | |
| # Heading for Playing Videos with OpenCV | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Playing Videos with OpenCV</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation | |
| st.write(""" | |
| To play a video using OpenCV, we load the video with **`cv2.VideoCapture()`** and display each frame using **`cv2.imshow()`**. You can stop the video by pressing 'q'. | |
| """) | |
| # Code example for playing a video | |
| st.code(""" | |
| # Load the video | |
| vid = cv2.VideoCapture('path_to_video.mp4') | |
| # Loop to read frames | |
| while True: | |
| succ, img = vid.read() # Read each frame | |
| if not succ: # Exit if no frame is read | |
| break | |
| cv2.imshow("video", img) # Show the frame | |
| # Press 'q' to quit | |
| if cv2.waitKey(1) & 255 == ord("q"): | |
| break | |
| # Release resources and close window | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Heading for cv2.read() | |
| st.markdown(""" | |
| <h3 style="color: #e25822;"> Understanding vid.read()</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation for vid.read() | |
| st.write(""" | |
| The **`vid.read()`** function is used to read one frame at a time from the video file. | |
| It returns two values: | |
| 1. **`succ`**: A boolean that indicates whether the frame was successfully read. | |
| - **True** if the frame was read successfully. | |
| - **False** if the frame could not be read (usually when the video ends). | |
| 2. **`img`**: The actual frame (image) read from the video. This frame is returned as a NumPy array and can be processed just like any image. | |
| """) | |
| # Short Heading | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Understanding cv2.waitKey()</h3> | |
| """, unsafe_allow_html=True) | |
| # Explanation | |
| st.write(""" | |
| The line `if cv2.waitKey(1) & 255 == ord('q'):` is used in OpenCV to check if a specific key is pressed while processing video. Here’s a simple explanation of what it does: | |
| - **`cv2.waitKey(1)`**: | |
| - Waits for a key press for **1 millisecond**. | |
| - If a key is pressed, it returns the key’s code. If no key is pressed, it returns `-1`. | |
| - **`& 255`**: | |
| - Ensures the key code is compatible across different systems. | |
| - Keeps only the last **8 bits**, which represent the key code. | |
| - **`ord('q')`**: | |
| - Finds the ASCII code for the letter `'q'`. | |
| - The ASCII code for `'q'` is **113**. | |
| - This is used to check if the user pressed the `'q'` key to stop the program. | |
| ### Full Condition: | |
| ```python | |
| if cv2.waitKey(1) & 255 == ord('q'): | |
| break | |
| ``` | |
| This stops the video when the 'q' key is pressed. | |
| """) | |
| # Heading for Converting BGR to Grayscale | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Converting BGR Video to Grayscale</h3> | |
| """, unsafe_allow_html=True) | |
| # Brief Explanation | |
| st.write(""" | |
| You can handle video frames one at a time and process them as needed. The following example shows how to: | |
| - Convert each frame of a video from BGR to grayscale. | |
| - Display both the original and grayscale video frames side by side. | |
| """) | |
| # Code Example | |
| st.code(""" | |
| vid = cv2.VideoCapture('path_to_video.mp4') | |
| while True: | |
| succ, img = vid.read() | |
| if succ == False: | |
| break | |
| # Convert frame from BGR to grayscale | |
| img1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) | |
| # Display the original (colored) and grayscale frames | |
| cv2.imshow("Colored Video", img) | |
| cv2.imshow("Grayscale Video", img1) | |
| # Press 'q' to quit the video | |
| if cv2.waitKey(1) & 255 == ord("q"): | |
| break | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Heading for Splitting Video into Color Channels | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Splitting Colored Video into Different Channels</h3> | |
| """, unsafe_allow_html=True) | |
| # Brief Explanation | |
| st.write(""" | |
| Each frame of a colored video consists of three channels: Blue, Green, and Red (BGR). | |
| The following example demonstrates how to: | |
| - Split the video frames into separate Blue, Green, and Red channels. | |
| - Display the original video alongside each color channel. | |
| """) | |
| # Code Example | |
| st.code(""" | |
| vid = cv2.VideoCapture('path_to_video.mp4') | |
| while True: | |
| succ, img = vid.read() | |
| if succ == False: | |
| break | |
| # Convert frame to grayscale | |
| img1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) | |
| # Split the frame into B, G, R channels | |
| b, g, r = cv2.split(img) | |
| z = np.zeros(img.shape[:-1], dtype=np.uint8) | |
| blue_channel = cv2.merge([b, z, z]) | |
| green_channel = cv2.merge([z, g, z]) | |
| red_channel = cv2.merge([z, z, r]) | |
| # Display the frames | |
| cv2.imshow("Colored Video", img) | |
| cv2.imshow("Grayscale Video", img1) | |
| cv2.imshow("Blue Channel", blue_channel) | |
| cv2.imshow("Green Channel", green_channel) | |
| cv2.imshow("Red Channel", red_channel) | |
| # Press 'q' to quit | |
| if cv2.waitKey(1) & 255 == ord("q"): | |
| break | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Heading for Capturing Frames via Webcam | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Capturing Frames While Live Streaming Using Webcam</h3> | |
| """, unsafe_allow_html=True) | |
| # Brief Explanation | |
| st.write(""" | |
| OpenCV allows you to access your webcam for live video streaming. The `cv2.VideoCapture()` function is used to activate the webcam. Here's how it works: | |
| - **`cv2.VideoCapture(0)`**: | |
| - The argument `0` tells OpenCV to access the default webcam on your computer. | |
| - If you have multiple cameras, you can pass other IDs (like `1`, `2`) to access them. | |
| - It creates a connection with the webcam and starts capturing frames in real time. | |
| The following example demonstrates how to: | |
| - Activate the webcam. | |
| - Display the live stream. | |
| - Close the webcam window by pressing the 'p' key. | |
| """) | |
| # Code Example | |
| st.code(""" | |
| vid = cv2.VideoCapture(0) # 0 indicates the default webcam | |
| while True: | |
| succ, img = vid.read() | |
| if succ == False: # Optional: Check if the webcam is working | |
| print("Camera not working") | |
| break | |
| # Display the live stream | |
| cv2.imshow("Live Stream", img) | |
| # Press 'p' to stop the live stream | |
| if cv2.waitKey(1) & 255 == ord("p"): | |
| break | |
| vid.release() | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Heading for Capturing and Saving Frames | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Capturing and Saving Frames</h3> | |
| """, unsafe_allow_html=True) | |
| # Brief Explanation | |
| st.write(""" | |
| This code uses OpenCV to access the webcam, display the video feed, and save specific frames as image files: | |
| - **Webcam Activation**: The `cv2.VideoCapture(0)` function initializes the default webcam. | |
| - **Capturing Frames**: Press **'s'** to capture and save the current frame to a specified directory. | |
| - **Stopping the Stream**: Press **'p'** to stop the webcam and close the application. | |
| """) | |
| # Code Example | |
| st.code(""" | |
| vid = cv2.VideoCapture(0) # Open webcam | |
| c = 0 # Counter for naming saved images | |
| while True: | |
| succ, img = vid.read() | |
| if succ == False: # Check if the webcam is working | |
| print("Camera not working") | |
| break | |
| cv2.imshow("Live Stream", img) # Display live stream | |
| # Save frame as an image file when 's' is pressed | |
| if cv2.waitKey(1) & 255 == ord("s"): | |
| cv2.imwrite('path_to_save_directory/{}.jpg'.format(c), img) | |
| print("Image is captured and saved") | |
| c += 1 # Increment counter for next image name | |
| # Quit live stream when 'p' is pressed | |
| if cv2.waitKey(1) & 255 == ord("p"): | |
| break | |
| vid.release() | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Concluding the Current Section | |
| st.write(""" | |
| In the next section, we will explore **image transformations using OpenCV**. We will cover how to: | |
| - Rotate images at various angles. | |
| - Flip images horizontally and vertically. | |
| - Scale and resize images to different dimensions. | |
| Get ready to learn about powerful image transformation techniques! | |
| """) | |
| col1, col2, col3 = st.columns(3) | |
| # Column 1 - Button to go back to the Image Operations page | |
| with col1: | |
| if st.button("⬅️ Explore Image Creation and Manipulation"): | |
| navigate_to("image_operations") # Navigates to the Image Operations page | |
| # Column 2 - Button for the main page (Images & Videos) | |
| with col2: | |
| if st.button("📸 Images & Videos"): | |
| navigate_to("explore_images_video") # Main page: Images & Videos | |
| # Column 3 - Button to go to the Image Transformations page | |
| with col3: | |
| if st.button("➡️ Image Transformations with OpenCV"): | |
| navigate_to("image_transformations") # Next page: Image Transformations | |
| elif st.session_state.current_page == "image_transformations": | |
| # Content for Image Transformations Page | |
| st.markdown(""" | |
| <h2 style="color: #BB3385;">Image Augmentation Techniques</h2> | |
| """, unsafe_allow_html=True) | |
| # Page: What is Image Augmentation? | |
| # Heading | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">What is Image Augmentation?</h3> | |
| """, unsafe_allow_html=True) | |
| # Definition | |
| st.write(""" | |
| Image augmentation is a method used to enhance the size and variety of an image dataset by applying transformations to existing images. | |
| These transformations introduce variations while preserving the core features of the image, making it useful for training machine learning models to handle diverse inputs. | |
| **How It Works** | |
| Image augmentation applies transformations like resizing, rotation, flipping, and more to the original image. These changes simulate real-world variations, ensuring that machine learning models can identify patterns even with differences in perspective, scale, or lighting conditions. | |
| The key idea is to preserve the original features of the image while introducing diversity. For example, if we take an image and apply five different transformations, we generate five new variations of that image. This provides the model with more data to learn from, improving its performance and ability to generalize. | |
| """) | |
| # Types of Image Augmentation | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Types of Image Augmentation</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| Image augmentation is broadly categorized into two types: | |
| 1. **Affine Transformations** | |
| 2. **Non-Affine Transformations** | |
| """) | |
| # Affine Transformations | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Affine Transformations</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Affine Transformations** are transformations where: | |
| 1. The transformed image and the original image maintain **parallelism between lines**. | |
| 2. In some cases, the **angle between lines** and the **length of the lines** may also be preserved. | |
| These transformations ensure that the geometric relationships within the image remain intact, even as the image is resized, rotated, or shifted. | |
| Affine transformations are performed using a mathematical operation known as an **Affine Matrix**, which maps the original image coordinates to new coordinates. | |
| """) | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Common Affine Transformations:</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| 1. **Scaling**: Changing the size of the image while maintaining its proportions. | |
| 2. **Translation**: Shifting the image horizontally, vertically, or both. | |
| 3. **Rotation**: Rotating the image around a specified center point. | |
| 4. **Shearing**: Slanting the image along the x or y axis, creating a skewed effect. | |
| 5. **Cropping**: Extracting a specific portion of the image, usually to focus on a region of interest. | |
| These transformations are linear, meaning the relationships between points in the image remain consistent. | |
| """) | |
| st.image( | |
| "https://huggingface.co/spaces/LakshmiHarika/MachineLearning/resolve/main/Images/affine_transformations.png", | |
| use_container_width=True) | |
| # Explanation for Translation | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Translation</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Translation** involves moving an image from one location to another along the x-axis, y-axis, or both. It adjusts the position of the image on the canvas without modifying its original content. | |
| The transformation is performed using a translation matrix: | |
| """) | |
| st.write(""" | |
| The translation matrix is represented as: | |
| [[1, 0, tx], [0, 1, ty]] | |
| Here: | |
| - **tx**: Specifies the shift along the x-axis (horizontal axis). | |
| - **ty**: Specifies the shift along the y-axis (vertical axis). | |
| """) | |
| st.code(""" | |
| # Load the image | |
| img = cv2.imread('path_to_image.jpg') | |
| # Define translation parameters | |
| tx = 100 # Shift 100 pixels along the x-axis | |
| ty = 50 # Shift 50 pixels along the y-axis | |
| # Create the translation matrix | |
| translation_matrix = np.array([[1, 0, tx], [0, 1, ty]], dtype=np.float32) | |
| # Apply translation | |
| translated_img = cv2.warpAffine(img, translation_matrix, (300, 300)) | |
| # Display the images | |
| cv2.imshow("Original Image", img) | |
| cv2.imshow("Translated Image", translated_img) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Explanation for Rotation | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Rotation</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Rotation** involves rotating an image around a specified center point by a given angle. It changes the orientation of the image while preserving its content. | |
| The rotation is performed using a rotation matrix: | |
| [[cos(θ), -sin(θ), tx], [sin(θ), cos(θ), ty]] | |
| Here: | |
| - **θ (theta)**: Specifies the rotation angle in degrees. | |
| - **tx, ty**: Specifies the adjustments to reposition the rotated image. | |
| - **Scale**: A factor that can resize the image during rotation. | |
| """) | |
| # Code Example | |
| st.code(""" | |
| # Load the image | |
| img = cv2.imread('path_to_image.jpg') | |
| # Define the rotation matrix | |
| r_m = cv2.getRotationMatrix2D((1347, 900), 50, 1) # Center at (1347, 900), Rotate by 50 degrees, Scale = 1 | |
| # Apply rotation | |
| r_img = cv2.warpAffine(img, r_m, (580, 500), borderMode=cv2.BORDER_DEFAULT) | |
| # Display the images | |
| cv2.imshow("Original Image", img) | |
| cv2.imshow("Rotated Image", r_img) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Explanation for Direct Rotation | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Direct Rotation Using cv2.rotate</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| OpenCV provides a direct method for rotating images with predefined angles: `cv2.rotate`. | |
| This method simplifies rotation operations for 90°, 180°, and 270° (clockwise or counterclockwise) without requiring a custom rotation matrix. | |
| - **`cv2.ROTATE_180`**: Rotates the image by 180°. | |
| - **`cv2.ROTATE_90_CLOCKWISE`**: Rotates the image by 90° clockwise. | |
| - **`cv2.ROTATE_90_COUNTERCLOCKWISE`**: Rotates the image by 90° counterclockwise. | |
| """) | |
| # Code Example | |
| st.code(""" | |
| # Rotate the image using predefined rotation modes | |
| img1 = cv2.rotate(img, cv2.ROTATE_180) # Rotate 180 degrees | |
| img2 = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE) # Rotate 90 degrees clockwise | |
| img3 = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE) # Rotate 90 degrees counterclockwise | |
| # Display the images | |
| cv2.imshow("Original Image", img) | |
| cv2.imshow("Rotated 180°", img1) | |
| cv2.imshow("Rotated 90° Clockwise", img2) | |
| cv2.imshow("Rotated 90° Counterclockwise", img3) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Explanation for Shearing | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Shearing</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Shearing** is a transformation that slants the shape of an image along the x-axis, y-axis, or both. It skews the content of the image, creating a shifted or stretched effect. | |
| The transformation is performed using a shearing matrix: | |
| """) | |
| st.write(""" | |
| The shearing matrix is represented as: | |
| For x-axis shear: | |
| [[1, shx, 0], [0, 1, 0]] | |
| For y-axis shear: | |
| [[1, 0, 0], [shy, 1, 0]] | |
| Here: | |
| - **shx**: Shear factor along the x-axis. | |
| - **shy**: Shear factor along the y-axis. | |
| """) | |
| st.code(""" | |
| # Load the image | |
| img = cv2.imread('path_to_image.jpg') | |
| # Define shearing parameters | |
| shx = 1 # Shear factor along the x-axis | |
| shy = 3 # Shear factor along the y-axis | |
| tx = 0 # Translation along the x-axis | |
| ty = 0 # Translation along the y-axis | |
| # Create the shearing matrix | |
| shearing_matrix = np.array([[1, shx, tx], [shy, 1, ty]], dtype=np.float32) | |
| # Apply the shearing transformation | |
| sheared_img = cv2.warpAffine(img, shearing_matrix, (300, 300)) | |
| # Display the original and sheared images | |
| cv2.imshow("Original Image", img) | |
| cv2.imshow("Sheared Image", sheared_img) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Explanation for Scaling | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Scaling</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Scaling** is a transformation that changes the size of an image. It can be used to enlarge or shrink the image while maintaining its original proportions or altering them. | |
| Scaling is performed using a scaling matrix: | |
| """) | |
| st.write(""" | |
| The scaling matrix is represented as: | |
| [[sx, 0, 0], [0, sy, 0]] | |
| Here: | |
| - **sx**: Scaling factor along the x-axis. | |
| - **sy**: Scaling factor along the y-axis. | |
| - If `sx` and `sy` are greater than 1, the image is enlarged. | |
| - If `sx` and `sy` are less than 1, the image is shrunk. | |
| """) | |
| st.code(""" | |
| # Load the image | |
| img = cv2.imread('path_to_image.jpg') | |
| # Define scaling and translation parameters | |
| sx, sy = 2, 1 # Scale by 2 along the x-axis and 1 along the y-axis | |
| tx, ty = 0, 0 # No translation | |
| # Create the scaling matrix | |
| scaling_matrix = np.array([[sx, 0, tx], [0, sy, ty]], dtype=np.float32) | |
| # Apply scaling | |
| scaled_img = cv2.warpAffine(img, scaling_matrix, (2 * 300, 300)) | |
| # Display the images | |
| cv2.imshow("Original Image", img) | |
| cv2.imshow("Scaled Image", scaled_img) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| # Explanation for Cropping | |
| st.markdown(""" | |
| <h3 style="color: #9400d3;">Cropping</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| **Cropping** is a transformation that extracts a specific portion of an image, usually to focus on a region of interest. | |
| It is achieved by selecting a rectangular region of the image using pixel coordinates. | |
| The process involves defining the coordinates for: | |
| - **Top-left corner (x1, y1)**: Starting point of the crop. | |
| - **Bottom-right corner (x2, y2)**: Ending point of the crop. | |
| """) | |
| st.code(""" | |
| # Load the image | |
| img = cv2.imread('path_to_image.jpg') | |
| # Define crop coordinates | |
| x1, y1 = 50, 50 # Top-left corner | |
| x2, y2 = 200, 200 # Bottom-right corner | |
| # Crop the image | |
| cropped_img = img[y1:y2, x1:x2] | |
| # Display the images | |
| cv2.imshow("Original Image", img) | |
| cv2.imshow("Cropped Image", cropped_img) | |
| cv2.waitKey(0) | |
| cv2.destroyAllWindows() | |
| """, language="python") | |
| elif st.session_state.current_page == "explore_audio": | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Exploring Audio</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| Audio formats include MP3 and WAV for storing sound. | |
| """) | |
| if st.button("Go Back"): | |
| navigate_to("main") | |
| elif st.session_state.current_page == "explore_text": | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Exploring Text</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| Text includes unstructured data like emails or plain-text files. | |
| """) | |
| if st.button("Go Back"): | |
| navigate_to("main") | |
| elif st.session_state.current_page == "explore_csv": | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Exploring CSV</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| CSV is a simple text-based format where data fields are separated by commas. | |
| """) | |
| if st.button("Go Back"): | |
| navigate_to("main") | |
| elif st.session_state.current_page == "explore_json": | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Exploring JSON</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| JSON is a semi-structured format used for APIs and data exchange. | |
| """) | |
| if st.button("Go Back"): | |
| navigate_to("main") | |
| elif st.session_state.current_page == "explore_xml": | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Exploring XML</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| XML uses tags to structure semi-structured data. | |
| """) | |
| if st.button("Go Back"): | |
| navigate_to("main") | |
| elif st.session_state.current_page == "explore_html": | |
| st.markdown(""" | |
| <h3 style="color: #e25822;">Exploring HTML</h3> | |
| """, unsafe_allow_html=True) | |
| st.write(""" | |
| HTML structures web pages using elements like <div> and <p>. | |
| """) | |