import streamlit as st
import pandas as pd
from PIL import Image
import numpy as np

st.subheader("Unstructured Data")
st.markdown("""
Unstructured data refers to information that does not have a predefined format or organizational structure. Examples: images, audio, video, and text.
""", unsafe_allow_html=True)

st.sidebar.title("Navigation 🧭")
file_type = st.sidebar.radio("Choose a file type:", ("IMAGE", "AUDIO", "VIDEO", "TEXT"))

if file_type == "IMAGE":
    st.title("Image 🖼️")
    st.markdown("""
- An image is a 2D representation of the visible light spectrum, stored as a collection of intensity values.
- As unstructured data, an image is a visual file that lacks a predefined format or schema for its content.
- Its information (shapes, colors, objects) is not organized for traditional databases, so specialized tools or algorithms (such as image processing or machine learning) are typically required to extract meaningful insights.
""", unsafe_allow_html=True)
    st.sidebar.header("Explore Image Data ✨")
    data_type1 = st.sidebar.radio("Select Information", ["Image Information", "Basic Operations", "Color Space", "Image Augumentation"])
    if data_type1 == "Image Information":
        st.header("**Image Information**")
        st.header('**How an image is formed**')
        st.subheader('''**Source of light**''')
        st.markdown('''
- An image is a 2D grid-like structure divided by horizontal and vertical lines.
- Each cell of the grid is a pixel.
- Every pixel is a feature, and the information it carries can be shape, pattern, or color.
- Height × width = total number of pixels.
''')
        st.markdown('''
- As the number of rows and columns (height and width) increases, the pixel count increases.
- More pixels means more information, which gives higher clarity.
- As resolution increases, the clarity of the image increases.
- Every single image is a data point, and each grid cell (pixel) is a feature.
- A tabular structure and this grid-like structure look similar but are interpreted differently.
''')
        if st.button("GitHub Link 🔗 (Image)"):
            st.write("**GitHub Repository:** [Provide your GitHub link here]")
    elif data_type1 == "Basic Operations":
        st.header('**Basic Operations**')
        st.markdown("""
- OpenCV offers a variety of tools to work with images, including the ability to load, display, modify, and save them.
- OpenCV handles both images and videos.
- Still images can also be handled by PIL (Pillow).
- These operations are essential for tasks such as image processing and computer vision.
""")
        st.subheader('**Image Operations**')
        st.markdown("""
- OpenCV allows users to perform basic operations like reading, displaying, and saving images.
- It can also resize, crop, and filter images as required.
- **Key Functions:** `imread()`, `imshow()`, `imwrite()`
- **`imread()`**: Reads an image from disk and stores it as an array.
- **`imshow()`**: Displays the image in a window, allowing for easy visualization.
- **`imwrite()`**: Saves the image to a specified location on your storage.

```python
# Example usage of basic OpenCV functions
import cv2

image = cv2.imread('path')        # Reads an image as a 3D array; note that OpenCV uses the BGR channel order
cv2.imshow('Image', image)        # Opens a pop-up window (named by the string) and displays the array
cv2.waitKey()                     # Adds a delay: how many milliseconds the pop-up stays active (0 = wait for a key)
cv2.destroyAllWindows()           # Closes the windows and frees them from RAM
cv2.imwrite('output.jpg', image)  # Saves the array to disk as an image file
cv2.resize(image, (200, 200))     # Resizes an image to the given (width, height)
```
""", unsafe_allow_html=True)
        st.subheader('**Image to Tabular Data**')
        st.markdown('''
- There are 5 steps to convert an image into tabular data:
- **Image**: the 2D image is converted into an array in a chosen color space (e.g., grayscale) using `imread()`. 
- **Array**: arrays can differ in dimensions, so to ensure every array has the same dimensions (pixels) we apply `resize()`.
- **Resize** can work in 2 ways: `Compression` and `Expansion`.
    - **`Compression`** removes pixels, at the cost of losing information.
    - **`Expansion`** adds rows or columns, at the cost of adding noise.
    - Either way, information is lost; the features lost are **spatial features**.
- **Flatten**: after resizing, every n-dimensional array is flattened to a 1D array.
- **Concatenation**: after flattening, the 1D arrays are stacked, so the images become tabular data.
- Images --> Array --> Resize --> Flatten --> Concatenation
- Using these steps, images are converted into tabular data.
''')
        st.subheader('**Conditions for an Array to be considered as Image representation**')
        st.markdown('''
- An array can be treated as an image representation if and only if:
    - **it is a 2D or 3D array**, and
    - **its data type is `np.uint8`** (unsigned 8-bit integer).
''')
        st.subheader('**Handling Images with OpenCV**')
        st.markdown("""
Handling images involves tasks like reading, displaying, resizing, and modifying images.

**Example: Loading and Displaying an Image**
```python
import cv2

img = cv2.imread('image.jpg')
cv2.imshow('Loaded Image', img)
cv2.waitKey()
cv2.destroyAllWindows()
```
""")
        st.subheader("**Basic Code**")
        st.markdown("""
Below is an example demonstrating fundamental OpenCV operations. The code reads an image, resizes it, and then displays and saves the final image.
""", unsafe_allow_html=True)
        st.markdown("""
**1. Read the image**
```python
import cv2

image = cv2.imread('path')  # Provide the path to your image
if image is None:
    print("Error: Image not found")
    exit()
```
**2. 
Display the original image**
```python
cv2.imshow('Original Image', image)
cv2.waitKey()  # Waits for a key press before the window is closed
cv2.destroyAllWindows()
```
**3. Resize the image to 200x200 pixels**
```python
resized_image = cv2.resize(image, (200, 200))
```
**4. Save the resized image**
```python
cv2.imwrite('output_image.jpg', resized_image)
print("Image saved successfully!")
```
""", unsafe_allow_html=True)
        st.markdown("""

**Code Explanation**

- **Step 1: Reading the Image**: `cv2.imread()` loads the image from the specified file path. If the file is not found, the script terminates with an error message.
- **Step 2: Displaying the Original Image**: `cv2.imshow()` displays the original image in a window. The window stays open until a key is pressed.
- **Step 3: Resizing the Image**: `cv2.resize()` resizes the image to the specified dimensions (in this case, 200x200 pixels).
- **Step 4: Saving the Image**: `cv2.imwrite()` writes the image to a file.

By following these steps, you can perform common image operations.
""", unsafe_allow_html=True)
    elif data_type1 == "Color Space":
        st.header('**Color Space**')
        st.markdown("""
There are 3 types of color spaces.
""", unsafe_allow_html=True)
        st.header('**Black & White Color Space**')
        st.markdown('''
- It preserves only two colors: **black (0) and white (1)**.
- Because the image is in a 2D grid format, NumPy is used to convert it to numerical form.
- There is no color preservation here: every pixel is represented as 0 or 1, so the grid is purely black and white.
- Because black & white discards so much intensity information, grayscale is used instead.
''')
        st.header('**Gray Scale Color Space**')
        st.markdown('''
- It preserves 256 intensity levels, in the range [0 - 255].
- 0 represents black and 255 represents white; [1 - 254] are shades of gray.
- Grayscale converts the image to different shades of gray, while black & white forces each pixel to be either black or white.
- Neither preserves the original colors of a colored image.
''')
        st.header('**RGB Color Space**')
        st.markdown('''
- A colored image cannot be represented by a single 2D array because it has 3 color components, so it converts to a 3D array.
- There are 3 channels in the RGB color space:
    - *Red Channel*
    - *Green Channel*
    - *Blue Channel*
''')
        st.subheader('**Red Channel**')
        st.markdown('''
- It is a 2D array with values in [0 - 255].
- 0 means black and 255 is pure red; values in between are shades of red.
- The red channel is kept at depth 1; depth always represents color.
''')
        st.subheader('**Green Channel**')
        st.markdown('''
- It is a 2D array with values in [0 - 255].
- 0 means black and 255 is pure green; values in between are shades of green.
- The green channel is kept at depth 2; depth always represents color.
''')
        st.subheader('**Blue Channel**')
        st.markdown('''
- It is a 2D array with values in [0 - 255].
- 0 means black and 255 is pure blue; values in between are shades of blue.
- The blue channel is kept at depth 3; depth always represents color.
- The combination of the 3 channels gives a 3D array, where depth represents color and is always constant.
''')
        st.subheader("**Color Space Conversion with `cv2.cvtColor()`**")
        st.markdown("""
- OpenCV provides the `cv2.cvtColor()` function to convert one color space to another.

**Example: Saving a Grayscale Image**
```python
import cv2

img = cv2.imread('image path')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imwrite('gray_image.jpg', gray_img)
```
- The grayscale image is saved to the file gray_image.jpg.
""")
        st.subheader("**Types of Color Space Conversions**")
        st.markdown("""
- **RGB to Grayscale**: `cv2.COLOR_RGB2GRAY` converts a color image to grayscale.
- **Grayscale to RGB**: `cv2.COLOR_GRAY2RGB` converts a grayscale image back to a 3-channel RGB image.
""")
        st.subheader('**Splitting and Merging Channels**')
        st.markdown("""
- **Splitting Channels**: Use `cv2.split()` to separate the image into individual channels (B, G, R).
```python
import cv2
blue, green, red = cv2.split(image)
```
- **Merging Channels**: Combine individual channels back into a single image using `cv2.merge()`.
```python
import cv2
merged_image = cv2.merge((blue, green, red))
```
""", unsafe_allow_html=True)
        st.markdown("""
**1. Read the image**
```python
image = cv2.imread('input_image.jpg')
if image is None:
    print("Error: Image not found")
    exit()
```
**2. 
Convert RGB to Grayscale**
```python
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```
**3. Convert Grayscale back to RGB**
```python
rgb_image = cv2.cvtColor(gray_image, cv2.COLOR_GRAY2BGR)
```
**4. Split channels (B, G, R)**
```python
blue, green, red = cv2.split(image)
```
**5. Merge channels back**
```python
merged_image = cv2.merge((blue, green, red))
```
**6. Save the processed image**
```python
cv2.imwrite('processed_image.jpg', merged_image)
```
""", unsafe_allow_html=True)
        st.markdown("""
- Explanation:
    - **Step 1**: Read the image using `cv2.imread()`. Terminates if the image file is not found.
    - **Step 2**: Convert the image to grayscale using `cv2.cvtColor()`. This simplifies the image to a single channel.
    - **Step 3**: Convert the grayscale image back to RGB to enable further processing.
    - **Step 4**: Use `cv2.split()` to separate the image into individual color channels.
    - **Step 5**: Combine the channels back into a single image using `cv2.merge()`.
    - **Step 6**: Save the final image with `cv2.imwrite()`.
""", unsafe_allow_html=True)
    elif data_type1 == "Image Augumentation":
        st.header('**Image Augmentation**')
        st.markdown('''
- Image augmentation is a technique for creating new data **(augmented images)** from your existing data.
- It is used to transform imbalanced data into balanced data.
- When image augmentation is applied, imbalanced data becomes balanced and new information is added.
- **Augmentation applies transformations to the original image to produce a transformed image.**
- Transformations are of 2 types:
''', unsafe_allow_html=True)
        st.subheader('**Affine Transformation**')
        st.markdown('''
- Affine transformation is a type of geometric transformation that preserves the straightness of lines and the parallelism of edges in an image. 
- **Key characteristics:**
    - **Preserves parallelism**: lines that are parallel in the original image remain parallel in the transformed image.
    - **Preserves lines**: straight lines remain straight.
    - **Does not preserve angles**: angles and distances are generally not preserved (though in some cases the angle between lines is).
- There are 5 types of affine transformations:
''', unsafe_allow_html=True)
        st.markdown('''
- **Image augmentation using an affine transformation matrix (ATM):**
- Formula:
$$ I'(x', y') = ATM \cdot I(x, y) $$
    - $I(x, y)$ --> original image
    - $ATM$ --> affine transformation matrix
    - $I'(x', y')$ --> augmented image
''', unsafe_allow_html=True)
        st.header("Affine Transformation Workflow")
        st.markdown("""
The general steps for performing affine transformation in OpenCV:
1. Load the Image
2. Define Source and Destination Points
3. Calculate the Transformation Matrix
4. Apply `cv2.warpAffine()`
5. Display or Save the Transformed Image
""", unsafe_allow_html=True)
        st.subheader('**Translation**')
        st.markdown('''
- Translation is an affine transformation used to shift the image.
- It shifts the image along both the x-axis and the y-axis.
- Formula:
$$ I'(x', y') = Translation matrix \cdot I(x, y) $$
$$ x' = x + t_x \\ y' = y + t_y $$
- **Translation matrix:** [[1, 0, Tx], [0, 1, Ty]]
- **Tx** moves the image along the x-axis: right shift when positive, left shift when negative.
- **Ty** moves the image along the y-axis: downward shift when positive, upward shift when negative.
''')
        st.code('''
import cv2
import numpy as np

img = cv2.imread('path')  # Provide the path to your image

## creation of the translation matrix
tx = 100
ty = 100
t_m = np.array([[1, 0, tx], [0, 1, ty]], dtype=np.float32)

## apply the translation using warpAffine
t_img = cv2.warpAffine(img, t_m, (2560, 1600), borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0))

## display the images
cv2.imshow("org_img", img)
cv2.imshow("trans_img", t_img)
cv2.waitKey()
cv2.destroyAllWindows()
''')
        st.subheader('**Rotation**')
        st.markdown('''
- Rotation is an affine transformation 
matrix that rotates the image by a given angle.
- Formula:
$$ I'(x', y') = Rotation matrix \cdot I(x, y) $$
$$ x' = x \cdot \cos(θ) - y \cdot \sin(θ) \\ y' = x \cdot \sin(θ) + y \cdot \cos(θ) $$
- **Rotation matrix:** [[cos(θ), -sin(θ), Tx=0], [sin(θ), cos(θ), Ty=0]]
- θ is the angle of rotation about the chosen center point.
- Rotation is anti-clockwise when the angle is positive.
- Rotation is clockwise when the angle is negative.
''')
        st.code('''
import cv2

img = cv2.imread('path')  # Provide the path to your image

## creation of the rotation matrix: (center, angle in degrees, scale)
r_m = cv2.getRotationMatrix2D((800, 1280), 45, 1)

## apply the rotation using warpAffine
r_img = cv2.warpAffine(img, r_m, (2560, 1600))

## display the images
cv2.imshow("org_img", img)
cv2.imshow("rotated_img", r_img)
cv2.waitKey()
cv2.destroyAllWindows()
''')
        st.subheader('**Scaling**')
        st.markdown('''
- Scaling is an affine transformation used for zoom-in and zoom-out **(expansion and compression)**.
- Formula:
$$ I'(x', y') = Scaling matrix \cdot I(x, y) $$
$$ x' = S_x \cdot x \\ y' = S_y \cdot y $$
- **Scaling matrix:** [[Sx, 0, Tx], [0, Sy, Ty]]
- Sx is the scale factor on the x-axis.
- Sy is the scale factor on the y-axis.
''')
        st.code('''
import cv2
import numpy as np

img = cv2.imread('path')  # Provide the path to your image

## creation of the scaling matrix
sx = 0.3
sy = 0.3
tx = 0
ty = 0
sc_m = np.array([[sx, 0, tx], [0, sy, ty]], dtype=np.float32)

## apply the scaling using warpAffine
scale_img = cv2.warpAffine(img, sc_m, (2560, 1600))

## display the images
cv2.imshow("org_img", img)
cv2.imshow("scaled_img", scale_img)
cv2.waitKey()
cv2.destroyAllWindows()
''')
        st.subheader('**Shearing**')
        st.markdown('''
- Shearing is an affine transformation that slants the image along the x-axis and/or y-axis.
- Formula:
$$ I'(x', y') = Shearing matrix \cdot I(x, y) $$
$$ x' = x + Sh_x \cdot y \\ y' = y + Sh_y \cdot x $$
- **Shearing matrix:** [[1, Shx, Tx], [Shy, 1, Ty]]
''')
        st.code('''
import cv2
import numpy as np

img = cv2.imread('path')  # Provide the path to your image

## creation of 
the shearing matrix
shx = 0.3
shy = 1
tx = 100
ty = 100
sh_m = np.array([[1, shx, tx], [shy, 1, ty]], dtype=np.float32)

## apply the shearing using warpAffine
shear_img = cv2.warpAffine(img, sh_m, (2560, 1600))

## display the images
cv2.imshow("org_img", img)
cv2.imshow("shear_img", shear_img)
cv2.waitKey()
cv2.destroyAllWindows()
''')
        st.subheader('**Cropping**')
        st.markdown('''
- Cropping selects a region of the image using array indexing (unlike the transformations above, it is not applied through a matrix).
''')
        st.code('''
import cv2

## load the image
img = cv2.imread('path')

## crop the image by indexing [rows, columns]
c_m = img[98:408, 390:565]

## display the images
cv2.imshow("org_img", img)
cv2.imshow("crop_img", c_m)
cv2.waitKey()
cv2.destroyAllWindows()
''')
if file_type == "VIDEO":
    st.title("**Video 🎥**")
    st.markdown('''
- Video refers to a sequence of frames (images) captured or processed over time.
- OpenCV provides robust tools to read, process, and write videos using its VideoCapture class.
''')
    st.header('**Video Handling with OpenCV**')
    st.markdown('''
In OpenCV, videos are treated as a sequence of images called frames. We can process videos using the `cv2.VideoCapture()` class, which allows you to:
- Read video files from your system.
- Capture live video from a webcam or other video input devices.
- Process each frame in the video stream individually.
''')
    st.subheader('**Reading a Video File**')
    st.markdown('''
To read a video file, OpenCV uses the `cv2.VideoCapture()` function. It loads the video file and allows you to process each frame sequentially. The following example demonstrates how to:
- Read frames from a video file.
- Display the video in a window.
''')
    st.subheader("**Key Methods in Video Capturing**")
    st.markdown("""
Here are some key methods used in video capturing with OpenCV:
- **`cv2.VideoCapture()`**: Opens the video source, either a camera (index 0 for the default webcam) or a video file path.
- **`read()`**: Reads frames from the video stream. 
It returns a boolean success flag and the frame itself.
- **`release()`**: Releases the video capture object and closes the video stream.
- **`cv2.namedWindow()`**: Creates a pop-up window with all of the window's features.
- **`cv2.imshow()`**: If no window exists yet, it internally calls `namedWindow()`; otherwise it displays the frame in the existing window.
- **`cv2.waitKey()`**: Adds a delay to the window, waiting for a key event for the specified time (in milliseconds). It returns the ASCII value of the key pressed. If 0 is passed, it waits indefinitely until a key is pressed.
- **`cv2.destroyAllWindows()`**: Closes all OpenCV windows that were opened during the program's execution.
- **`cv2.setMouseCallback()`**: Registers a user-defined function that is called automatically whenever the mouse is used inside the pop-up window, which makes it possible to automate many interactions.
    - It calls the user-defined function, which tracks the mouse events.
    - The callback takes 5 parameters:
    - `def fun(event, x, y, flags, param):`
        - **event**: which mouse event was performed inside the pop-up window
        - **x, y**: the column and row (pixel coordinates) of the event
        - **flags**: modifier state of the event, used to include additional features
        - **param**: optional user data, used to add additional functionality

These methods form the foundation of real-time video processing in OpenCV, and are essential for handling the display and closing of images in OpenCV applications. 
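The callback signature above can be sketched without OpenCV. Here `on_mouse`, `clicks`, and the hard-coded `EVENT_LBUTTONDOWN` value are illustrative (the constant stands in for `cv2.EVENT_LBUTTONDOWN`); in a real app the function would be registered with `cv2.setMouseCallback` rather than called by hand:

```python
# Minimal sketch of the user-defined callback that cv2.setMouseCallback expects.
EVENT_LBUTTONDOWN = 1  # stand-in for cv2.EVENT_LBUTTONDOWN

clicks = []

def on_mouse(event, x, y, flags, param):
    # event: which mouse action occurred; x, y: pixel coordinates (column, row)
    # flags: modifier-key state; param: optional user data passed at registration
    if event == EVENT_LBUTTONDOWN:
        clicks.append((x, y))

# In a real app you would register it on an existing window:
#   cv2.namedWindow("win")
#   cv2.setMouseCallback("win", on_mouse)
# Here we invoke it directly to show the signature in action:
on_mouse(EVENT_LBUTTONDOWN, 120, 45, 0, None)
print(clicks)  # [(120, 45)]
```

Once registered, OpenCV supplies all five arguments on every mouse event inside the window.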
""")
    st.subheader("**Reading and Displaying a Video**")
    st.code("""
import cv2

# Open the video file
video = cv2.VideoCapture("path_to_video.mp4")

# Loop to read and display frames
while True:
    success, frame = video.read()  # Read a frame
    if not success:
        print("Video Ended")
        break
    cv2.imshow("Video Playback", frame)  # Display the frame
    # Break loop on 'q' key press
    if cv2.waitKey(1) & 255 == ord('q'):
        break

video.release()            # Release the video file
cv2.destroyAllWindows()    # Close all OpenCV windows
""", language="python")
    st.markdown("---")
    st.subheader("**Understanding `cv2.waitKey()` and Key Input**")
    st.write("""
The line `if cv2.waitKey(1) & 255 == ord('q'):` is used in OpenCV to handle keyboard input while processing video frames. Here’s a breakdown:
- **`cv2.waitKey(1)`**:
    - Waits for a key press for `1` millisecond.
    - Returns the ASCII value of the key pressed, or `-1` if no key is pressed.
- **`& 255`**:
    - Extracts only the last 8 bits (the ASCII value).
- **`ord('q')`**:
    - Provides the ASCII value of the character `'q'`.
- The condition checks whether the user pressed the `'q'` key to quit the program.
""")
    st.header("**Capturing and Saving a Specific Frame**")
    st.markdown("""
- Use OpenCV to capture a specific frame from a video and save it as an image file.
""")
    st.subheader("Example: Saving a Frame")
    st.code("""
import cv2

video = cv2.VideoCapture("path_to_video.mp4")  # Replace with 0 for webcam

while True:
    success, frame = video.read()
    if not success:
        break
    cv2.imshow("Video", frame)

    key = cv2.waitKey(1) & 255  # Read the key once per frame
    # Save frame on 's' key press
    if key == ord('s'):
        cv2.imwrite("captured_frame.jpg", frame)
        print("Frame saved as captured_frame.jpg")
    # Break loop on 'q' key press
    if key == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
""", language="python")
    st.header("**Capturing Video from Webcam**")
    st.markdown("""
- The `cv2.VideoCapture()` function can also be used to capture live video from your webcam or connected camera devices. 
""")
    st.subheader("**Example of Capturing Video from Webcam**")
    st.code("""
import cv2

# Open video capture (0 for primary webcam)
video = cv2.VideoCapture(0)

# Loop to read frames
while True:
    success, frame = video.read()  # Read a frame
    if not success:
        break
    cv2.imshow("Webcam", frame)  # Display the frame
    # Break loop on 'q' key press
    if cv2.waitKey(1) & 255 == ord('q'):
        break

video.release()            # Release the video capture object
cv2.destroyAllWindows()    # Close all OpenCV windows
""", language="python")
    st.header("**Processing Video: Converting to Grayscale**")
    st.markdown("""
You can process each frame of the video in real time by following these steps:
- Convert each frame of the video to grayscale.
- Display the processed video.
""")
    st.subheader("**Example of Converting Video to Grayscale**")
    st.code("""
import cv2

# Open the video file
video = cv2.VideoCapture("path_to_video.mp4")

# Loop to read and process frames
while True:
    success, frame = video.read()  # Read a frame
    if not success:
        break
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # Convert frame to grayscale
    cv2.imshow("Grayscale Video", gray_frame)  # Display the processed frame
    # Break loop on 'q' key press
    if cv2.waitKey(30) & 255 == ord('q'):
        break

video.release()            # Release the video file
cv2.destroyAllWindows()    # Close all OpenCV windows
""", language="python")
    st.header("**Splitting Channels in a Video Frame**")
    st.markdown("""
- You can split the three color channels (Blue, Green, and Red) from a video frame and process them individually. 
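Conceptually, showing a single channel keeps that plane and zeroes the other two. A minimal NumPy-only sketch (the 2x2 `frame` array here is a made-up stand-in for a real video frame, and the indexing mirrors what `cv2.split` returns for a BGR frame):

```python
import numpy as np

# A tiny stand-in for a BGR frame: 2x2 pixels, 3 channels, uint8 (the dtype OpenCV uses)
frame = np.array([[[10, 20, 30], [40, 50, 60]],
                  [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)

b, g, r = frame[:, :, 0], frame[:, :, 1], frame[:, :, 2]  # what cv2.split(frame) would return

# Isolate the blue channel by zeroing green and red (same idea as cv2.merge([b, g*0, r*0]))
blue_only = np.stack([b, np.zeros_like(g), np.zeros_like(r)], axis=2)

print(blue_only[:, :, 0].tolist())  # [[10, 40], [70, 100]] -- blue plane preserved
print(int(blue_only[:, :, 1].sum()), int(blue_only[:, :, 2].sum()))  # 0 0 -- green and red zeroed
```

The OpenCV example that follows does exactly this per frame, just with `cv2.split`/`cv2.merge` on real video data.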
""")
    st.subheader("Example: Splitting Video Frame Channels")
    st.code("""
import cv2

# Open video capture
video = cv2.VideoCapture("path_to_video.mp4")  # Replace with 0 for webcam

while True:
    success, frame = video.read()
    if not success:
        break

    # Split the frame into channels
    b, g, r = cv2.split(frame)

    # Merge and display each channel in isolation (the other two planes zeroed)
    blue_img = cv2.merge([b, g*0, r*0])
    green_img = cv2.merge([b*0, g, r*0])
    red_img = cv2.merge([b*0, g*0, r])

    cv2.imshow("Original Frame", frame)
    cv2.imshow("Blue Channel", blue_img)
    cv2.imshow("Green Channel", green_img)
    cv2.imshow("Red Channel", red_img)

    # Break loop on 'q' key press
    if cv2.waitKey(1) & 255 == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
""", language="python")
    if st.button("GitHub Link 🔗 (Video)"):
        st.write("**GitHub Repository:** [Provide your GitHub link here]")
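As a standalone sketch (plain Python, no OpenCV needed), the masking logic behind `cv2.waitKey(1) & 255 == ord('q')` used throughout the video examples can be checked directly; `simulated_return` is a made-up raw value whose low byte encodes `'q'`:

```python
# waitKey can return a value whose low byte is the key's ASCII code;
# masking with 255 (0xFF) keeps just that byte. ord('q') is 113.
simulated_return = 0x100071  # hypothetical raw return value; low byte is 0x71 ('q')
key = simulated_return & 255
print(key == ord('q'))  # True
```

This is why the examples compare the masked value, not the raw return, against `ord('q')`.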