Module: Automation, Requirement: Quick Setup - Discover Elements (Advanced) | Test Cases:
description: Discover Elements (Advanced), pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected
Test Steps:
description: User able to perform advanced discovery, expected_result: Browser starts learning the discovered element
description: User able to view the elements discovering, expected_result: User should view the elements discovering
|
Module: Automation, Requirement: Quick Setup - Logout Flow creation | Test Cases:
description: Logout Flow creation, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected
Test Steps:
description: User able to Create a Logout flow, expected_result: User should create a Logout flow
description: When user click on Proceed button, expected_result: New Function/Functions created page should be displayed.
|
Module: Automation, Requirement: Utility Bot maintenance | Test Cases:
description: Utility Bot maintenance, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected
Test Steps:
description: User able to view the Utility bots in application utility bot page, expected_result: User should view the Utility bots in application utility bot page
description: User able to create new utility bot in the application, expected_result: User should create new utility bot in the application
description: User able to click on add new button, expected_result: User should click on add new button
description: When user click on add new button, expected_result: Create Utility Bot popup window should be displayed
description: User able to enter TestBot name in the text field, expected_result: User should enter TestBot name in the text field
description: User able to input Description in the text field, expected_result: User should input Description in the text field
description: User able to click on create button, expected_result: User should click on create button
description: When user click on Create button , expected_result: Utility bot details should be displayed
description: User able to click on Launch App & Start recording, expected_result: User should click on Launch App & Start recording
description: When user click on Launch App & Start recording button, expected_result: Select Browser popup window should be displayed
description: User able to click on Browser dropdown, expected_result: User should click on Browser dropdown
description: User able to select the different browsers from the dropdown, expected_result: User should select the different browsers from the dropdown
description: User able to click on Continue button, expected_result: User should click on Continue button
description: When user click on Continue button, expected_result: Browser will launch automatically and user should record the elements/actions
description: User able to click on Stop recording button, expected_result: User should click on Stop recording button
description: When user click on Stop recording button, expected_result: User able to view the recorded actions on the Utility Bot details page
description: User able to review the actions, expected_result: User should review the actions
description: User able to click on Save button, expected_result: User should click on Save button
description: When user click on Save button , expected_result: Actions will be saved in the bot
description: User able to click on Discard & Record Again button, expected_result: User should click on Discard & Record Again button
description: When user click on Discard & Record Again button, expected_result: Browser will be automatically closed and it will stop recording
description: User able to click on Quit session button, expected_result: User should click on Quit session button
description: When user click on Quit session button, expected_result: Bot Build page should be displayed
description: User able to delete the utility bot, expected_result: User should delete the utility bot
description: User able to click on Utility bots, expected_result: User should click on Utility bots
description: When user click on Utility bots, expected_result: Utility bot details page should be displayed
|
Module: Automation, Requirement: View the created function | Test Cases:
description: Created function, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected
Test Steps:
description: User able to click on Function button on Utility bot page, expected_result: User should click on Function button on Utility bot page
description: When user click on Function button, expected_result: Created function should be displayed on Function page
description: When user click on Function name, expected_result: Elements and Navigation page should be displayed
description: When user click on Elements tab, expected_result: User can view the different page
description: When user click on View element button, expected_result: Recorded elements are displayed
description: When user click on Discover icon, expected_result: Elements Properties screen should be displayed
description: When user click on edit icon, expected_result: User should edit only the label name
description: When user click on Go back button, expected_result: Elements page should be displayed
description: When user click on Navigation tab, expected_result: User can view the navigation flow steps in the Navigation page
|
Module: Automation, Requirement: Create a bot through Metadata function | Test Cases:
description: Metadata function, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected
Test Steps:
description: When user click on Learn button , expected_result: User can view the navigation function flow
description: When user click on Navigation function step, expected_result: Should display the recorded elements on the screen
description: When user click on Proceed button, expected_result: Browser will learn the recorded element
description: When user click on Record a new Flow button , expected_result: Should display the flow name text field and description text field
description: User able to enter the Flow name in the text field, expected_result: User should enter the flow name
description: User able to enter the description in the text field, expected_result: User should enter the description
description: When user click on Create button, expected_result: User should record a new flow
description: When user click on view element button once new flow has been recorded, expected_result: User can view recorded elements on Element page.
|
Module: Automation, Requirement: Create a bot by recording the actions on the browser | Test Cases:
description: Recording the actions on the browser, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected 3.Browser Stack should be connected through token key
Test Steps:
description: User able to create a bot by recording the actions on the browser, expected_result: Test-Bot will be created
description: User able to view the Start Building your Bot screen, expected_result: User should view the Start Building your Test-bot screen
description: User able to Click on the record action on the browser, expected_result: Application should get launched on the browser
description: User able to fill the inputs on the launched browser screen of the Application, expected_result: User should fill the inputs on the launched browser screen of the application
description: User comes back to the Tenjin Online screen and stops the recording, expected_result: User should come back to the Tenjin Online screen and stop the recording
description: User able to view the recorded bot actions, expected_result: User should view the recorded bot actions
description: User able to click on the Run testbot button, expected_result: User should view the "Choose Agent and Browser" pop-up
description: User able to select the agent and the browser, expected_result: User should select the agent and the browser
description: User able to click on Run button, expected_result: The browser will get launched and the execution or automation starts
description: User able to view the entire Automation Process on the launched browser window, expected_result: User should view the entire Automation Process on the launched browser window
description: User able to view the browser closing process after the execution and the Tenjin execution page will get displayed, expected_result: User should view the browser closing process after the execution and the Tenjin execution page will get displayed
|
Module: Automation, Requirement: Build your Test-Bot from elements | Test Cases:
description: Build testbot, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected 3.For Android/iOS, a mobile device should be connected
Test Steps:
description: User able to build your Test-Bot from elements, expected_result: Test-Bot will be created
description: User able to view the Start Building your Bot screen, expected_result: User should view the Start Building your Test-bot screen
description: User able to build the test-bot from elements, expected_result: User should build the test-bot from elements
description: User able to view the available screens of the bot, expected_result: User should view the available screens of the bot
description: User able to import the login and Navigate to function bot, expected_result: User should import the login and Navigate to function bot
description: User able to add the required available screen in the bot screens and update the flow, expected_result: User should add the required available screen in the bot screens and update the flow
description: User able to input details in those screens, expected_result: User should input details in those screens
description: User able to import the Logout bot, expected_result: User should import the Logout bot
description: User able to click on the Run testbot button, expected_result: User should view the "Choose Agent and Browser" pop-up
description: User able to Select the agent and the browser , expected_result: User should Select the agent and the browser
description: User able to Click on Run button, expected_result: The browser will get launched and the execution or automation starts
description: User able to view the entire Automation Process on the launched browser window, expected_result: User should view the entire Automation Process on the launched browser window
|
Module: Automation, Requirement: Add all the actions and edit them | Test Cases:
description: Adding the actions, pre_condition: 1.User should be registered to Tenjin Application 2.Tenjin Agent should be connected 3.For Android/iOS, a mobile device should be connected
Test Steps:
description: User able to add the Click action, expected_result: User should add the Click action
description: User able to add the input action, expected_result: User should add the input action
description: User able to get value for the actions and store it in a new variable, expected_result: User should get value for the actions and store it in a new variable
description: User able to Add the compare actions and validate the values, expected_result: User should Add the compare actions and validate the values
description: User able to add the desired wait time within the block, expected_result: User should add the desired wait time within the block
description: User able to add the desired wait time outside the blocks, expected_result: User should add the desired wait time outside the blocks
description: User able to add the screenshot within the block, expected_result: User should add the screenshot within the block
description: User able to add the screenshot outside the block, expected_result: User should add the screenshot outside the block
description: User able to add the download file action and download the file, expected_result: User should add the download file action and download the file
description: User able to add attach file functionality action and attach file, expected_result: User should add attach file functionality action and attach file
description: User able to copy the existing actions, expected_result: User should copy the existing actions
description: User able to remove the existing actions, expected_result: User should remove the existing actions
description: User able to add double click action and it should double click during execution, expected_result: User should add double click action and it should double click during execution
description: User able to add close window action and it should close the window during the execution, expected_result: User should add close window action and it should close the window during the execution
|
Module: Automation, Requirement: Web Bot Execution- Across browsers | Test Cases:
description: User able to do an execution on Chrome/Firefox and Edge Browser, pre_condition: nan
Test Steps:
description: User able to do an execution on Chrome Browser, expected_result: User should do an execution on Chrome Browser
description: User able to do an execution on Firefox Browser, expected_result: User should do an execution on Firefox Browser
description: User able to do an execution on Edge Browser, expected_result: User should do an execution on Edge Browser
|
Module: Automation, Requirement: Test Data - Export and Import | Test Cases:
description: To check whether user is able to export and import the test data, pre_condition: nan
Test Steps:
description: User does not give any values in the bot build page, expected_result: User should not give any values in the bot build page and by default the values will be in parentheses
|
Module: Automation, Requirement: Encryption and decryption | Test Cases:
description: As user, I must be able to view the encrypted data in the Database for Run , pre_condition: 1)Login into TenjinOnline 2)Test cases should be created 3)Test cycle should be created 4)Testcases should be added to the test cycle 5)Ensure Tenjin agent is running
6)Configure Mongo DB for DB Testing
Test Steps:
description: Go to execution on the side navigation bar, expected_result: Execution page will be displayed
description: Create a run, expected_result: nan
description: Add test cases to run, expected_result: nan
description: Execute the run, expected_result: nan
description: Go to Mongo DB, expected_result: nan
description: Enter the Query for Run level, expected_result: The encrypted Records will be displayed
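The DB verification step above (confirming the Run record holds encrypted, not plaintext, values) can be sketched as a small heuristic check. Everything here is illustrative: the base64 shape of the stored value and the function name are assumptions, not Tenjin's actual schema or query:

```python
import base64

def looks_encrypted(stored_value: str, plaintext: str) -> bool:
    """Heuristic: a stored field counts as encrypted if it no longer
    equals the plaintext and decodes as base64 (a typical shape for a
    base64-wrapped encrypted payload). The payload format is assumed."""
    if stored_value == plaintext:
        return False  # plaintext leaked into the DB record
    try:
        base64.b64decode(stored_value, validate=True)
        return True
    except Exception:
        return False

# Example: the plaintext value must not appear verbatim in the Run record
print(looks_encrypted("c2VjcmV0cGFzcw==", "secretpass"))  # True
print(looks_encrypted("secretpass", "secretpass"))        # False
```

In practice the tester would fetch the Run document (e.g. via a Mongo shell query at Run level) and apply a check like this to each sensitive field.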
|
Module: Automation, Requirement: Hard stop and soft stop - Android Test Bot, iOS | Test Cases:
description: As user, I must be able to copy the bot to do soft stop at action level, pre_condition: 1) Configure the mobile device by enabling the developer options and
usb debugging mode and connected to PC. 2) Tenjin Agent should be running 3)Project should be created 4)Application should be created and mapped to the project 5)Bot should be created 6)Advance option should be selected as "Continue if action fail"
Test Steps:
description: Login into TenjinOnline, expected_result: Project Details should be shown
description: Click on Automation from side Navigation bar, expected_result: Testbot list page will be displayed
description: Click on bot name, expected_result: Testbuild page will be displayed
description: Select the action, click on edit and click on copy the action, expected_result: The particular action should be copied
description: Verify copied action is updated as "Continue if action fail", expected_result: The copied action should be updated as "Continue if action fail"
description: Click on "Run TestBot" button, expected_result: New popup window "Choose Agent and Device" should be displayed
description: Choose agent name and device name and click on Run, expected_result: Bot will be executed
description: Verify if the first action fails, it will check the remaining actions, expected_result: The particular action should fail and it will check the next action. If it works as expected it will pass; if it does not work as expected, the action will fail
description: User able to view the "Result" in the Bot Result Tab, expected_result: User should view the "Result" line as passed, failed
|
Module: Automation, Requirement: Hard stop and soft stop - Web Bot | Test Cases:
description: As user, I must be able to update the soft stop for "Build your element" method at Testbot level, pre_condition: 1)Tenjin Agent should be running 2)Project should be created 3)Application should be created for web and mapped to the project 4)Bot should be created for Record actions on Browser method 5)Advance option should be selected as "Continue if action fail"
Test Steps:
description: Login into TenjinOnline, expected_result: Project Details should be shown
description: Click on Automation from side Navigation bar, expected_result: Testbot list page will be displayed
description: Click on bot name and click on edit, expected_result: User able to edit the details
description: Click on "Advance Options" settings icon and update the radio button as "Stop if action fail", expected_result: Testbot should be updated as "Stop if action fail"
description: Verify all the actions are updated as ''Stop if action fail'', expected_result: All the actions are updated as "Stop if action fail"
description: Click on "Run TestBot" button, expected_result: New popup window "Choose Agent and Device" should be displayed
description: Choose agent name and device name and click on Run, expected_result: Bot will be executed
description: Verify if the first action fails, it will check the remaining actions, expected_result: The particular action should fail and it will check the next action. If it works as expected it will pass; if it does not work as expected, the action will fail
description: User able to view the "Result" in the Bot Result Tab, expected_result: User should view the "Result" line as passed, failed
|
Module: Automation, Requirement: Regex and compare - Android, iOS | Test Cases:
description: User should be able to execute the bot for Regex and compare with invalid regular expression, pre_condition: 1) Configure the mobile device by enabling the developer options and
usb debugging mode and connected to PC. 2) Tenjin Agent should be running 3)Project should be created 4)Application should be created and mapped to the project 5)Bot should be created
Test Steps:
description: Click on "Run TestBot" button, expected_result: New popup window "Choose Agent and Device" should be displayed
description: Choose agent name and device name and click on Run, expected_result: It should traverse to the "Bot Result" tab
description: User able to view the "Result" in the Bot Result Tab, expected_result: User should view the "Result" line as failed and the invalid regular expression should not match
description: Click on "Run TestBot" button, expected_result: New popup window "Choose Agent and Device" should be displayed
description: Choose agent name and choose the browser and click on Run, expected_result: It should traverse to the "Bot Result" tab
description: User able to view the "Result" in the Bot Result Tab, expected_result: User should view the "Result" line as failed and the invalid regular expression should not match
description: Click on "Run TestBot" button, expected_result: New popup window "Choose Agent and Device" should be displayed
description: Choose agent name and choose the browser and click on Run, expected_result: It should traverse to the "Bot Result" tab
description: User able to view the "Result" in the Bot Result Tab, expected_result: User should view the "Result" line as failed and the invalid regular expression should not match
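The expected behaviour in the steps above (an invalid regular expression must produce a failed result, never a match) can be sketched in a few lines. This is an illustrative check only; the function name and failure convention are assumptions, not Tenjin's implementation:

```python
import re

def regex_and_compare(pattern: str, actual: str) -> bool:
    """Return True only when the pattern is valid AND matches the value.

    An invalid regular expression is treated as a failed comparison
    rather than raising or matching anything, mirroring the expected
    "invalid regular expression should not match" result.
    """
    try:
        compiled = re.compile(pattern)
    except re.error:
        return False  # invalid pattern -> step result is "failed"
    return compiled.search(actual) is not None

print(regex_and_compare(r"\d{4}", "Run 2024"))     # True: valid pattern, match found
print(regex_and_compare("[unclosed", "Run 2024"))  # False: invalid expression
```

Compiling the pattern before use is what separates "no match" from "invalid expression"; both end as a failed step, but only the latter is caught via `re.error`.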
|
Module: Automation, Requirement: Regex and compare - Web Bot | Test Cases:
description: As user, I must not be able to do Regex and compare for "Build your element" with invalid regular expression, pre_condition: 1)Application should be created for Web 2)Bot should be created for Build your element method 3)Ensure Tenjin agent is running
Test Steps:
description: Login into TenjinOnline, expected_result: Project Details should be shown
description: Click on Automation from side Navigation bar, expected_result: Testbot list page will be displayed
description: Click on bot name, expected_result: Testbuild page will be displayed
description: User able to view the available screens of the bot, expected_result: Should view the available screens of the bot
description: Add the required available actions in the bot screens and click on add to flow, expected_result: Should add the required available actions and check the message "block created successfully"
description: Click on 3 dots and select the "Regex and compare" action, expected_result: Enter the invalid regular expression and click on update flow
|
Module: Automation, Requirement: Get substring | Test Cases:
description: As user, I must be able to get the sub string by inputting invalid syntax for scanned element, pre_condition: 1) Configure the mobile device by enabling the developer options and
usb debugging mode and connected to PC. 2) Tenjin Agent should be running 3)Project should be created 4)Application should be created and mapped to the project
5)Bot should be created
Test Steps:
description: Login into TenjinOnline, expected_result: Project Details should be shown
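The "Get substring" behaviour exercised in this case (invalid index syntax for a scanned element's text must be rejected rather than silently returning partial text) can be sketched as follows. The range convention and function name are assumptions for illustration, not Tenjin's actual API:

```python
def get_substring(element_text: str, start: int, end: int) -> str:
    """Extract element_text[start:end], rejecting reversed or
    out-of-range indices instead of returning a partial value."""
    if not (0 <= start <= end <= len(element_text)):
        raise ValueError(f"invalid substring range {start}:{end}")
    return element_text[start:end]

print(get_substring("Tenjin Online", 0, 6))  # Tenjin
try:
    get_substring("Tenjin Online", 7, 3)     # reversed range -> rejected
except ValueError as exc:
    print("rejected:", exc)
```

Explicit validation is the point here: plain Python slicing would quietly return an empty string for a reversed range, whereas a test step for invalid syntax needs a visible failure.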
|
Module: Execution Automation, Requirement: Run level for Android /iOS | Test Cases:
description: User should do execution at Run level for Android /iOS, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: User able to view the "Include Devices and Cloud?" toggle button, expected_result: User should view the "Include Devices and Cloud?" toggle button
description: If user selects the "Include Devices and Cloud?" toggle button, expected_result: Application can be launched in Browser Stack
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
|
Module: Execution Automation, Requirement: Run level for Web | Test Cases:
description: User should do execution at Run level for Web, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Browser popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Select the browser, expected_result: User can choose any one browser from Google Chrome/ Mozilla Firefox/Microsoft edge
description: Click on Start Button, expected_result: User should view the Execution at the case run level
|
Module: Execution Automation, Requirement: Run level for API | Test Cases:
description: User should do execution at Run level for API bot, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Browser popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
|
Module: Execution Automation, Requirement: Run level for Android and web | Test Cases:
description: User should do execution at Run level for Android and web, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device popup window should be displayed
description: User able to view the "Include Devices and Cloud?" toggle button, expected_result: User should view the "Include Devices and Cloud?" toggle button
description: If user selects the "Include Devices and Cloud?" toggle button, expected_result: Application can be launched in Browser Stack
description: Select Agent Name , expected_result: nan
description: Select the browser, expected_result: User can choose any one browser from Google Chrome/ Mozilla Firefox/Microsoft edge
description: Select the Device Name, expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
|
Module: Execution Automation, Requirement: Run level for Android /iOS/Web | Test Cases:
description: User should not be able to do execution at Run level for Android/iOS/web, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: User able to view the "Include Devices and Cloud?" toggle button, expected_result: User should view the "Include Devices and Cloud?" toggle button
description: If user selects the "Include Devices and Cloud?" toggle button, expected_result: Application can be launched in Browser Stack
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Cancel Button, expected_result: Redirect to test run summary page
|
Module: Execution Automation, Requirement: Execution for passed run | Test Cases:
description: User should be able to view the passed execution at Run level, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Click on "Cases in Run " tab, select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
description: Click case ID , expected_result: It will traverse to Test cases Summary page
description: Click on Step name, expected_result: It will traverse to Test Step Summary page
description: If the execution is passed in the Test case summary page, expected_result: User can view the Result of the Run as passed along with the percentage
description: If the execution is passed in the Test step summary page, expected_result: User can view the Result of the Run as passed along with the percentage and the status of each action as passed in the Test step summary page
|
Module: Execution Automation, Requirement: Execution for failed run | Test Cases:
description: User should be able to view the failed execution at Run level, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
description: Click case ID , expected_result: It will traverse to Test cases Summary page
description: Click on Step name, expected_result: It will traverse to Test Step Summary page
description: If the execution is failed in the Test case summary page, expected_result: User can view the Result of the Run as failed along with the percentage
description: If the execution is failed in the Test step summary page, expected_result: User can view the Result of the Run as failed along with the percentage and the status of the action as failed in the Test step summary page
|
Module: Execution Automation, Requirement: Execution for abort run | Test Cases:
description: User should be able to view the Aborted execution at Run level, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
description: Click case ID , expected_result: It will traverse to Test cases Summary page
description: Click on Step name, expected_result: It will traverse to Test Step Summary page
description: Click on Abort, expected_result: The particular action will be aborted and the remaining actions will show the status as No Result
description: If the execution is Aborted in the Test case summary page, expected_result: User can view the Result of the Run as No Result along with Percentage
description: If the execution is Aborted in the Test step summary page, expected_result: User can view the Result of the Run as No Result along with Percentage
description: If the execution is Aborted in the Test step summary page, expected_result: User can view the status of the action as Aborted and the remaining actions will be displayed as No Result in the Test step summary page
|
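The abort behaviour described above (the aborted action is marked Aborted and every remaining action falls through to No Result) can be sketched as a tiny status model. All names here are illustrative stand-ins, not Tenjin's actual API:

```python
# Minimal sketch of the abort semantics: once one action is aborted,
# every remaining action in the step is marked "No Result".
# Function and status names are illustrative assumptions.

def resolve_statuses(actions, abort_index):
    """Return per-action statuses after an abort at abort_index."""
    statuses = []
    for i, _action in enumerate(actions):
        if i < abort_index:
            statuses.append("Passed")     # actions that ran before the abort
        elif i == abort_index:
            statuses.append("Aborted")    # the action the user aborted
        else:
            statuses.append("No Result")  # remaining actions never run
    return statuses

if __name__ == "__main__":
    steps = ["Launch app", "Enter credentials", "Tap Login", "Verify home"]
    print(resolve_statuses(steps, abort_index=2))
```

This matches the Test step summary behaviour in the record above: one Aborted action, the rest shown as No Result.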
Module: Execution Automation, Requirement: Copy the created run | Test Cases:
description: User should be able to copy the created Run, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on existing Run, expected_result: Copy Run popup window will be displayed
description: Click on copy icon, expected_result: nan
description: Enter the name for the new Run, expected_result: User should be able to enter the new Run name
description: Click on copy button, expected_result: Copied Run will be displayed in Test Run page
|
Module: Execution Automation, Requirement: Copy the run with same name | Test Cases:
description: User able to copy the Run with same name, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on existing Run, expected_result: nan
description: Click on copy icon, expected_result: nan
description: Enter the same name for the new Run, expected_result: nan
description: Click on copy button, expected_result: Check the error message "Run exists with the same name, Please use different name"
|
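The duplicate-name rule exercised in the record above can be sketched as a small validation helper. The in-memory list stands in for whatever store Tenjin actually uses, and the case-insensitive comparison is an assumption:

```python
# Hedged sketch of the duplicate-name check behind the error message
# "Run exists with the same name, Please use different name".
# The list of names and the case-insensitive match are assumptions.

class DuplicateRunError(ValueError):
    pass

def copy_run(existing_names, new_name):
    """Register a copied run under new_name, rejecting duplicate names."""
    if new_name.lower() in {n.lower() for n in existing_names}:
        raise DuplicateRunError(
            "Run exists with the same name, Please use different name")
    existing_names.append(new_name)
    return new_name

if __name__ == "__main__":
    runs = ["Smoke run"]
    print(copy_run(runs, "Regression run"))  # accepted
```

Clicking Copy with a fresh name succeeds; reusing an existing name surfaces the error message from the expected result above.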
Module: Execution Automation, Requirement: Copy the bot | Test Cases:
description: User not able to copy the bot, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on existing Run, expected_result: nan
description: Click on copy icon, expected_result: nan
description: Enter the same name for the new Run, expected_result: nan
description: Click on cancel button, expected_result: Redirect to test run summary page
|
Module: Execution Automation, Requirement: Delete newly created run | Test Cases:
description: User able to Delete the newly created run , pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on existing Run, expected_result: nan
description: Click on Delete icon , expected_result: Check the message "Test run deleted successfully"
|
Module: Execution Automation, Requirement: Delete the executed run | Test Cases:
description: User not able to delete the executed run , pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on existing Run, expected_result: nan
description: Click on Delete icon , expected_result: Check the error message "Some of the cases or steps in Run are executed, run cannot be deleted"
|
Module: Execution Automation, Requirement: view the "Start Run" for newly created run | Test Cases:
description: User able to view the "Start Run" for newly created run , pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
|
Module: Execution Automation, Requirement: view the "Run Again" for existing run | Test Cases:
description: User able to view the "Run Again" for existing run , pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on "Run Again", expected_result: nan
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on "Start " button, expected_result: User should view the Execution at the case run level
|
Module: Execution Automation, Requirement: download the Test Run report | Test Cases:
description: User should download the Test Run report, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: Select Agent Name , expected_result: nan
description: Select Device name , expected_result: nan
description: Click on Start Button, expected_result: User should view the Execution at the case run level
description: Click case ID , expected_result: It will traverse to Test cases Summary page
description: Click on Step name, expected_result: It will traverse to Test Step Summary page
description: If the execution is passed/failed/Aborted in the Test step summary page, expected_result: User can view the Result of the Run along with Percentage
description: Go to Run summary page, expected_result: nan
description: Click on Download icon, expected_result: Test Run report will be downloaded for Passed/Failed/Aborted
|
Module: Execution Automation, Requirement: Execution Status in the Run Summary page | Test Cases:
description: User should see the Execution Status in the Run Summary page, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: In the Run Summary page, expected_result: User should view the Execution status in the Run Summary page
description: User should view the total number of Test cases, expected_result: nan
description: User should view the total Percentage , expected_result: nan
description: User should view the status in progress bar, expected_result: nan
description: User can view the status as Completed, expected_result: nan
description: User can view the status as Passed, expected_result: nan
description: User can view the status as Failed, expected_result: nan
description: User can view the status as No Result, expected_result: nan
description: User should view the number of test cases Completed, expected_result: nan
description: User should view the number of test cases Passed, expected_result: nan
description: User should view the number of test cases Failed, expected_result: nan
description: User should view the number of test cases No Result, expected_result: nan
|
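The Run Summary figures listed above (total cases, per-status counts, percentage, progress) can be sketched as a simple aggregation. Status names follow the document; the rounding rule is an assumption:

```python
# Illustrative computation of the Run Summary page figures:
# totals per status plus a completion percentage. The percentage
# formula (passed / total) is an assumption about how Tenjin computes it.

from collections import Counter

def run_summary(case_statuses):
    """Aggregate case statuses into the counts shown on the Run Summary page."""
    counts = Counter(case_statuses)
    total = len(case_statuses)
    completed = total - counts.get("No Result", 0)  # cases that finished
    return {
        "total": total,
        "passed": counts.get("Passed", 0),
        "failed": counts.get("Failed", 0),
        "no_result": counts.get("No Result", 0),
        "completed": completed,
        "percentage": round(100 * counts.get("Passed", 0) / total) if total else 0,
    }

if __name__ == "__main__":
    print(run_summary(["Passed", "Passed", "Failed", "No Result"]))
```

With two passed, one failed and one No Result case, the summary shows 3 completed out of 4 and a 50% pass percentage, mirroring the progress-bar and count fields in the record above.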
Module: Execution Automation, Requirement: view the Device details for Android | Test Cases:
description: User should view the Device details for Android in the Run Summary page, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Go to Run summary page, expected_result: nan
description: User should view the Device details in the Run Summary page, expected_result: nan
description: User should view the Device name for Android platform, expected_result: nan
description: User should view the Model and Manufacturer , expected_result: nan
description: User should view the Platform name and version , expected_result: nan
description: User should view the Total Memory (RAM), expected_result: nan
description: User should view the Monitor Start Date and Time, expected_result: nan
description: User should view the Monitor End Date and Time, expected_result: nan
|
Module: Execution Automation, Requirement: view the Device Performance Metrics for Android | Test Cases:
description: User should view the Device Performance Metrics for Android in the Run Summary page, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Go to Run summary page, expected_result: nan
description: User can view the Device performance Metrics details once the execution is completed, expected_result: nan
description: User can view the CPU Load, expected_result: nan
description: User can view the Memory Usage, expected_result: nan
description: User Can view the Battery Percentage, expected_result: nan
description: When user clicks on CPU Load , expected_result: User can view the CPU Load popup window with Time metrics Graph
description: When user clicks on Memory Usage, expected_result: User can view the Memory Usage popup window with Time metrics Graph
description: When user clicks on Battery percentage, expected_result: User can view the Battery Performance popup window with Time metrics Graph
description: When user clicks on the Graphical view icon , expected_result: User can view a graphical representation of device performance metrics with Time metrics graphs for Battery performance, CPU Load and Memory Usage
|
Module: Execution Automation, Requirement: view the Run type | Test Cases:
description: User should view the Run type, pre_condition: nan
Test Steps:
description: User should view the Run type as Automated in the Run summary page, expected_result: nan
description: User should view the Run type as Automated in the Test cases summary page, expected_result: nan
description: User should view the Run type as Automated in the Test steps summary page, expected_result: nan
|
Module: Execution Automation, Requirement: view the Start Date and Time | Test Cases:
description: User should view the Start Date and Time, pre_condition: nan
Test Steps:
description: User should view the Start Date and Time in the Run Summary page, expected_result: nan
description: User should view the Start Date and Time in the Test cases Summary page, expected_result: nan
description: User should view the Start Date and Time in the Test Steps Summary page, expected_result: nan
|
Module: Execution Automation, Requirement: view the Elapsed Time | Test Cases:
description: User should view the Elapsed Time, pre_condition: nan
Test Steps:
description: User should view the Elapsed time in the Run Summary page , expected_result: nan
description: User should view the Elapsed time in the Test cases Summary page , expected_result: nan
description: User should view the Elapsed time in the Test Steps Summary page , expected_result: nan
|
Module: Execution Automation, Requirement: view the Run ID | Test Cases:
description: User should view the Run ID , pre_condition: nan
Test Steps:
description: User should view the Run ID in the Run Summary page , expected_result: nan
description: User should view the Run ID in the Test cases Summary page , expected_result: nan
description: User should view the Run ID in the Test Steps Summary page , expected_result: nan
|
Module: Execution Automation, Requirement: view the Project owner name | Test Cases:
description: User should view the Project owner name, pre_condition: nan
Test Steps:
description: User should view the Project Owner name in the Run Summary page , expected_result: nan
description: User should view the Project Owner name in the Test cases Summary page , expected_result: nan
description: User should view the Project Owner name in the Test Steps Summary page , expected_result: nan
|
Module: Execution Automation, Requirement: view the Refresh icon | Test Cases:
description: User should view the Refresh icon, pre_condition: nan
Test Steps:
description: User should view the Refresh icon in the Run Summary page , expected_result: nan
description: User should view the Refresh icon in the Test cases Summary page , expected_result: nan
description: User should view the Refresh icon in the Test Steps Summary page , expected_result: nan
|
Module: Execution Automation, Requirement: view the posted defects ID in the Defects column | Test Cases:
description: User should view the posted defects ID in the Defects column, pre_condition: nan
Test Steps:
description: User should view the posted defects ID in the Defects column of Run Summary page , expected_result: nan
description: User should view the posted defects ID in the Defects column in the Test cases Summary page , expected_result: nan
description: User should view the posted defects ID in the Defects column in the Test step Summary page , expected_result: nan
|
Module: Execution Automation, Requirement: view the details in the "Steps in cases" card | Test Cases:
description: User able to view the details in the "Steps in cases" card, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Go to Test Cases Summary page, expected_result: nan
description: User should view the Step name, expected_result: nan
description: User should view the Step ID, expected_result: nan
description: User should view the Bot name, expected_result: nan
description: User should view the Actions along with actions number, expected_result: nan
description: User should view the number of screenshots attached for that step, expected_result: nan
description: User should view the Defects and Attachments , expected_result: nan
description: User should view the status of the Steps, expected_result: nan
|
Module: Execution Automation, Requirement: view the step summary in the Test Step summary page | Test Cases:
description: User should view the step summary in the Test Step summary page, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Go to Test case summary page , expected_result: nan
description: Go to Test step summary page, expected_result: nan
description: User should view the actions in the Testbot Flow in the Test step summary page , expected_result: nan
description: User can view the sequence in the Testbot flow, expected_result: nan
description: User can view the screenshots in the Screenshots tab, expected_result: nan
description: User can view the test data in the Test Data tab, expected_result: nan
description: User can view the compare values in the Verifications tab , expected_result: nan
description: User can view the output values in the Output values tab , expected_result: nan
description: User can view the attachments in the Attachments tab, expected_result: nan
description: User can view the posted defects in the defects column, expected_result: nan
|
Module: Execution Automation, Requirement: post the attachments for the particular test steps | Test Cases:
description: User able to post the attachments for the particular test steps, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Go to Test Cases Summary page, expected_result: nan
description: Click on Attachments plus icon , expected_result: Attachments popup window is displayed
description: User can drag and drop the files or Browse, expected_result: nan
description: Once user uploaded the attachments, expected_result: User can view the attached file name along with size of the file
description: Click on Proceed, expected_result: Attachments successfully uploaded confirmation message is displayed and user can view the attached files in the Attachments tab along with the number of files attached
|
Module: Execution Automation, Requirement: view the encrypted data in the Database for Run(Android/iOS) | Test Cases:
description: Able to view the encrypted data in the Database for Run (Android/iOS), pre_condition: 1)Login into Tenjin Online 2)Test cases should be created 3)Test cycle should be created 4)Test cases should be added to the test cycle 5)Ensure Tenjin agent is running 6)Configure dBeaver for DB Testing
Test Steps:
description: Go to execution on the side navigation bar, expected_result: Execution page will be displayed
description: Create a run, expected_result: nan
description: Add test cases to run, expected_result: nan
description: Execute the run, expected_result: nan
description: Go to dBeaver, expected_result: nan
description: Enter the Query for dataset level, expected_result: The encrypted Records will be displayed
|
Module: Execution Automation, Requirement: view the encrypted data in the Database for Run (web) | Test Cases:
description: Able to view the encrypted data in the Database for Run (web), pre_condition: 1)Login into Tenjin Online 2)Test cases should be created 3)Test cycle should be created 4)Test cases should be added to the test cycle 5)Ensure Tenjin agent is running 6)Configure dBeaver for DB Testing
Test Steps:
description: Go to execution on the side navigation bar, expected_result: Execution page will be displayed
description: Create a run, expected_result: nan
description: Add test cases to run, expected_result: nan
description: Execute the run, expected_result: nan
description: Go to dBeaver, expected_result: nan
description: Enter the Query for Run level, expected_result: The encrypted Records will be displayed
|
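The two database checks above verify that a direct query (as one would run in dBeaver) returns ciphertext rather than plain test data. A minimal sketch of that property, using stdlib `sqlite3` and `base64` as stand-ins for the real database and cipher (both are assumptions, not Tenjin's actual storage):

```python
# Sketch of the "encrypted data in the Database" check: test data is
# stored encrypted, so a raw SELECT returns ciphertext, not plain values.
# sqlite3 and base64 are placeholders for the real DB engine and cipher.

import base64
import sqlite3

def encrypt(value: str) -> str:
    """Placeholder cipher (assumption): base64-encode the value."""
    return base64.b64encode(value.encode()).decode()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE run_data (field TEXT, value TEXT)")
conn.execute("INSERT INTO run_data VALUES (?, ?)",
             ("password", encrypt("s3cret")))

# The query a tester would run at dataset/run level in dBeaver:
rows = conn.execute("SELECT value FROM run_data").fetchall()
print(rows)  # ciphertext, never the plain value
```

The assertion a tester makes by eye in dBeaver is that no row equals the plaintext that was entered during the run.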
Module: Execution Automation, Requirement: view the summary page | Test Cases:
description: User able to view the summary page, pre_condition: nan
Test Steps:
description: User has not added test cases to the Test Cycle and not started the test cycle, expected_result: User able to view "Test Cycle is not started, Please add some test cases and click on 'Start Test Cycle' Button to proceed!"
description: User adding the test cases to test cycle, expected_result: User able to view "Start Test Cycle" button
description: User able to view the Testcases card: "Total cases", "Tested" and "To be executed" in summary page, expected_result: nan
description: User able to view the Execution status card: Passed, Failed, No result, Blocked, expected_result: nan
description: User able to view the Defects recorded card: Defects, Resolved, In process, expected_result: nan
description: User able to view the Run statistics card: Completed, In progress, Yet to start, No result, expected_result: nan
description: User able to view the Recent runs card: Run name, Status, expected_result: nan
description: User able to view the Time frame card: Completed, Remaining, Start date and date, expected_result: nan
description: User able to view the Team View card:If testcases are assigned to user, expected_result: User able to view the project owner name, cases allocated with no. of cases, tested and defects
description: User able to view the Team View card:If no testcases are assigned to user, expected_result: User able to view "No data available to display"
|
Module: Execution Automation, Requirement: view the testcases page | Test Cases:
description: User able to view the testcases page, pre_condition: nan
Test Steps:
description: User not added the testcases to test cycle, expected_result: User able to view "Currently there are no test cases added to Test Cycle, Please add"
description: When user click on add cases to run, expected_result: It will navigate to Test design page and add the cases to cycle
description: When user checks the checkbox, expected_result: User can Assign testcases and Remove cases from cycle
description: User able to sort by Summary(A-Z) and Summary(Z-A), expected_result: Alphabetical order sorting should happen by sorting "Summary (A-Z)" and Reverse-Alphabetical order sorting should happen by sorting "Summary (Z-A)"
description: User able to sort by card view / Table View , expected_result: User should be able to view by card view / Table View
description: User able to view the Show filter button, expected_result: Filter drawer should be displayed
description: In filter drawer user able to input/search by summary, key, description, expected_result: nan
description: User able to select any one option from the Executed dropdown - Any, Last 30 days, Last 15 days, expected_result: nan
description: User able to select any one option from the Run Type dropdown - Any, Automated, Manual, expected_result: nan
description: User able to select any one option from the Assignee dropdown - Any, Project owner, User, expected_result: nan
description: User able to select any one option from the Priority dropdown - Any, Low, Medium, High, Highest, expected_result: nan
description: User able to select any one option from the Complexity dropdown - Any, Low, Medium, High, Highest and Highly complex, expected_result: nan
description: When user clicks on Apply button, expected_result: User able to filter based on the search criteria
description: When user clicks on Clear button, expected_result: The applied filter criteria will be cleared
description: User able to view the Hide filter button, expected_result: Filter drawer will be closed
description: User able to traverse to the other page by using pagination, expected_result: User should traverse to the other page by using pagination
|
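The sorting and filtering behaviour on the test-cases page above can be sketched as two small helpers. Field names and the "Any" sentinel are illustrative; the real page filters on more criteria (executed window, assignee, complexity, and so on):

```python
# Minimal model of the test-cases page: Summary (A-Z)/(Z-A) sorting
# plus a filter over run type and priority. Field names are assumptions.

def sort_by_summary(cases, descending=False):
    """Summary (A-Z) when descending=False, Summary (Z-A) otherwise."""
    return sorted(cases, key=lambda c: c["summary"].lower(), reverse=descending)

def apply_filter(cases, run_type="Any", priority="Any"):
    """Keep cases matching every dropdown; "Any" leaves a criterion open."""
    return [c for c in cases
            if (run_type == "Any" or c["run_type"] == run_type)
            and (priority == "Any" or c["priority"] == priority)]

if __name__ == "__main__":
    cases = [
        {"summary": "Verify logout", "run_type": "Manual", "priority": "Low"},
        {"summary": "Add new bot", "run_type": "Automated", "priority": "High"},
    ]
    print([c["summary"] for c in sort_by_summary(cases)])
    print(apply_filter(cases, run_type="Manual"))
```

Clicking Apply corresponds to calling `apply_filter` with the selected dropdown values; Clear corresponds to calling it with every argument back at "Any".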
Module: Manual Execution, Requirement: Manual Test case creation | Test Cases:
description: User able to create the manual test cases with steps, pre_condition: nan
Test Steps:
description: User able to click on the test cases or test design icon on the window, expected_result: Test cases page will get displayed
description: User able to click on add test cases button on the test cases page, expected_result: Create test case page will get displayed
description: User able to fill all the mandatory details and select The Run Type as manual on the test case creation page, expected_result: User should fill all the mandatory details and select The Run Type as manual on the test case creation page
description: User able to click on the create button, expected_result: Test case details page will be displayed
description: User able to add the test step by clicking on the add step button, expected_result: User should add the test step by clicking on the add step button
description: User able to enter all the mandatory details and save the step, expected_result: User should enter all the mandatory details and save the step
|
Module: Manual Execution, Requirement: Import Manual test case | Test Cases:
description: User able to import the manual test cases with steps, pre_condition: nan
Test Steps:
description: User able to click on the test cases or test design icon on the window, expected_result: Test cases page will get displayed
description: User able to click on the Import Test Case from CSV , expected_result: Upload template page will get displayed
description: User able to download the template, expected_result: User should download the template
description: User able to enter all the mandatory details and give Run type as "Manual", expected_result: User should enter all the mandatory details and give Run type as "Manual"
description: User able to import the test case, expected_result: User should import the test case
|
Module: Manual Execution, Requirement: Add test case to cycle | Test Cases:
description: User able to add the created manual test cases to Cycle, pre_condition: nan
Test Steps:
description: User able to add the created manual test cases to Cycle, expected_result: User should add the created manual test cases to the created Cycle
|
Module: Manual Execution, Requirement: Add test case to run | Test Cases:
description: User able to add the created manual test cases to Run, pre_condition: nan
Test Steps:
description: User able to add the created manual test cases to Run, expected_result: User should add the created manual test cases to Run
|
Module: Manual Execution, Requirement: Execute the run | Test Cases:
description: User able to execute the run and download the report, pre_condition: nan
Test Steps:
description: User able to click on the start run button, expected_result: User should view the manual execution summary page
description: User checks whether all the displayed data is correct, expected_result: User should check whether all the displayed data is correct
description: User able to enter the Actual result, expected_result: User should enter the Actual result
description: User able to click on the save results button, expected_result: User should click on the save results button
description: User able to click on Pass button, expected_result: User should click on Pass button
description: User able to click on Failed button, expected_result: User should click on Failed button
description: User able to click on Not Run button, expected_result: User should click on Not Run button
description: User able to click on the All cases / Current cases button, expected_result: User should click on the All cases / Current cases button
description: User able to click on the All steps / Current steps, expected_result: User should click on the All steps / Current steps
description: User able to click on next button, expected_result: User should click on next button
description: User able to pass or fail or not run all the steps by checking the checkbox on the all steps window, expected_result: User should pass or fail or not run all the steps by checking the checkbox on the all steps window
description: User able to view all the test cases status after the passing or failing execution in the showing test card, expected_result: User should view all the test cases status after the passing or failing execution in the showing test card
description: User able to attach the attachments, expected_result: User should attach the attachments
description: User able to log the defects, expected_result: User should log the defects
description: User able to complete the run, expected_result: User should click on the complete run button and the manual run summary page will be displayed
description: User able to access the entire manual run summary page and view all the necessary details, expected_result: User should access the entire manual run summary page and view all the necessary details
|
Module: Execution Automation, Requirement: view the passed execution at Run level for manual run | Test Cases:
description: User should be able to view the passed execution at Run level for manual run, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Click on "Cases in Run " tab, select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: User checks whether all the displayed data is correct, expected_result: nan
description: User able to enter the Actual result, expected_result: nan
description: User able to click on the save results button, expected_result: nan
description: User able to click on Pass button, expected_result: nan
description: User able to click on complete run, expected_result: User able to view the Execution Summary page with status as "Passed"
description: If the execution is passed in the Test case summary page, expected_result: User can view the Result of the Run as passed along with Percentage
|
Module: Execution Automation, Requirement: view the Failed execution at Run level for manual run | Test Cases:
description: User should be able to view the Failed execution at Run level for manual run, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: User checks whether all the displayed data is correct, expected_result: nan
description: User able to enter the Actual result, expected_result: nan
description: User able to click on the save results button, expected_result: nan
description: User able to click on FAIL button, expected_result: nan
description: User able to click on complete run, expected_result: User able to view the Execution Summary page with status as "failed"
description: If the execution is failed in the Test case summary page, expected_result: User can view the Result of the Run as failed along with Percentage
|
Module: Execution Automation, Requirement: view the blocked execution at Run level for manual run | Test Cases:
description: User should be able to view the Blocked execution at Run level for manual run, pre_condition: nan
Test Steps:
description: Go to Execution Page, expected_result: nan
description: Click on Runs tab, expected_result: nan
description: Click on Create a run, expected_result: Create run popup window should be displayed
description: Enter Run name, expected_result: nan
description: Enter Run Type, expected_result: nan
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: nan
description: Select the test cases, expected_result: nan
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Select the added test cases, expected_result: nan
description: Click on "Start Run", expected_result: Select Agent and Device name popup window should be displayed
description: User checks whether all the displayed data is correct, expected_result: nan
description: User able to enter the Actual result, expected_result: nan
description: User able to click on the save results button, expected_result: nan
description: User able to click on NO RESULT button, expected_result: nan
description: User able to click on complete run, expected_result: User able to view the Execution Summary page with status as "blocked"
description: If the execution is blocked in the Test case summary page, expected_result: User can view the Result of the Run as blocked along with the Percentage
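Across the passed, failed, and blocked flows above, the run-level Result and Percentage follow from the per-case outcomes. A minimal sketch of that aggregation, assuming (since Tenjin's internal result model is not public) that the FAIL button maps a case to `failed`, NO RESULT leaves a case that blocks the run, and the Percentage is the share of passed cases:

```python
# Illustrative sketch only: the status names, the run-level status rule,
# and the percentage calculation are assumptions inferred from the
# passed/failed/blocked behavior described in the test steps above.

def summarize_run(case_results):
    """Aggregate per-case results into a run status and pass percentage."""
    if not case_results:
        return "no result", 0.0
    passed = sum(1 for r in case_results if r == "passed")
    if all(r == "passed" for r in case_results):
        status = "passed"
    elif any(r == "failed" for r in case_results):
        status = "failed"
    else:
        status = "blocked"  # at least one case saved with NO RESULT
    percentage = round(100.0 * passed / len(case_results), 2)
    return status, percentage

print(summarize_run(["passed", "passed"]))     # ('passed', 100.0)
print(summarize_run(["passed", "failed"]))     # ('failed', 50.0)
print(summarize_run(["passed", "no result"]))  # ('blocked', 50.0)
```

This mirrors how the Execution Summary page described above could report "failed" or "blocked" alongside a percentage even when some cases passed.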
|
Module: Automation- Enable/Disable, Requirement: Enable/Disable | Test Cases:
description: User able to execute multiple disable and enable actions at run level for web/Android/iOS, pre_condition: 1)Tenjin Agent should be running 2)Project should be created 3)Application should be created and mapped to the project 4)Bot should be created 5)Testcases should be created 6)Test cycle should be created 7)Testcases should be added to the Test cycle
Test Steps:
description: Log in to Tenjin Online, expected_result: Project Details should be shown
description: Go to execution page, expected_result: Click on run tab
description: Create the run, expected_result: Enter the run name and select the run type
description: Click on Create a run, expected_result: It will traverse to the Cases in Run tab
description: Click on All Cases tab, expected_result: Select the test cases
description: Click on "Add cases to Run" button, expected_result: Cases will be added to the run
description: Click on start run, expected_result: Select Agent and Device name from the drop down
description: Click on Start run button, expected_result: User able to view the execution of enabled action
description: If user enables the "show all actions" toggle button, expected_result: User able to view the disabled actions highlighted in grey color during execution
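The "show all actions" toggle described in the last step can be sketched as a simple visibility filter. This is an assumption-based illustration, not Tenjin's actual API: the `Action` type, the `enabled` flag, and the filtering rule are hypothetical names modeling the toggle behavior in the steps above.

```python
# Hypothetical model: hidden by default, disabled actions become visible
# (greyed out in the UI) only when "show all actions" is toggled on.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    enabled: bool

def visible_actions(actions, show_all_actions):
    """Only enabled actions are executed and shown by default; with the
    toggle on, disabled actions are also listed during execution."""
    if show_all_actions:
        return list(actions)
    return [a for a in actions if a.enabled]

steps = [Action("login", True), Action("search", False), Action("logout", True)]
print([a.name for a in visible_actions(steps, show_all_actions=False)])  # ['login', 'logout']
print([a.name for a in visible_actions(steps, show_all_actions=True)])   # ['login', 'search', 'logout']
```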
|