Python | Subtraction of dictionaries
27 Aug, 2021

Sometimes, while working with dictionaries, we might have a utility problem in which we need to perform elementary operations among the common keys of two dictionaries. This can be extended to any operation. Let's discuss subtraction of like key values and the ways to solve it in this article.

Method #1: Using dictionary comprehension + keys()

The combination of these two can be used to perform this particular task. It is just a shorthand for the longer loop-based method and performs the task in one line.

Python3

# Python3 code to demonstrate working of
# Subtraction of dictionaries
# Using dictionary comprehension + keys()

# Initialize dictionaries
test_dict1 = {'gfg' : 6, 'is' : 4, 'best' : 7}
test_dict2 = {'gfg' : 10, 'is' : 6, 'best' : 10}

# printing original dictionaries
print("The original dictionary 1 : " + str(test_dict1))
print("The original dictionary 2 : " + str(test_dict2))

# Using dictionary comprehension + keys()
# Subtraction of dictionaries
res = {key: test_dict2[key] - test_dict1.get(key, 0)
       for key in test_dict2.keys()}

# printing result
print("The difference dictionary is : " + str(res))

Output:

The original dictionary 1 : {'gfg': 6, 'is': 4, 'best': 7}
The original dictionary 2 : {'gfg': 10, 'is': 6, 'best': 10}
The difference dictionary is : {'gfg': 4, 'is': 2, 'best': 3}

Method #2: Using Counter() + "-" operator

The combination of these methods can also perform this task. Here, Counter converts each dictionary into a form on which the minus operator can perform subtraction. Counter also exposes a subtract() method, which does the same job in place.

Python3

# Python3 code to demonstrate working of
# Subtraction of dictionaries
# Using Counter() + "-" operator
from collections import Counter

# Initialize dictionaries
test_dict1 = {'gfg' : 6, 'is' : 4, 'best' : 7}
test_dict2 = {'gfg' : 10, 'is' : 6, 'best' : 10}

# printing original dictionaries
print("The original dictionary 1 : " + str(test_dict1))
print("The original dictionary 2 : " + str(test_dict2))

# Using Counter() + "-" operator
# Subtraction of dictionaries
temp1 = Counter(test_dict1)
temp2 = Counter(test_dict2)
res = temp2 - temp1

# printing result
print("The difference dictionary is : " + str(dict(res)))

# Using the subtract() method of Counter
# (subtracts in place, modifying temp2)
temp2.subtract(temp1)

# printing result
print("The difference dictionary is : " + str(dict(temp2)))

Output:

The original dictionary 1 : {'gfg': 6, 'is': 4, 'best': 7}
The original dictionary 2 : {'gfg': 10, 'is': 6, 'best': 10}
The difference dictionary is : {'gfg': 4, 'is': 2, 'best': 3}
The difference dictionary is : {'gfg': 4, 'is': 2, 'best': 3}
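One caveat worth noting as an aside: the - operator on Counter drops keys whose difference is zero or negative, while Counter.subtract() keeps them, so the two approaches above only agree when every value in the second dictionary strictly exceeds its counterpart in the first. A small illustration with hypothetical values (not from the listing above):

# Hypothetical values chosen so that one difference is zero and one is negative
from collections import Counter

d1 = Counter({'gfg': 6, 'is': 6, 'best': 7})
d2 = Counter({'gfg': 10, 'is': 6, 'best': 5})

print(dict(d2 - d1))   # {'gfg': 4} -- zero/negative results are dropped

d2.subtract(d1)        # in-place; keeps zero and negative values
print(dict(d2))        # {'gfg': 4, 'is': 0, 'best': -2}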
[ { "code": null, "e": 28, "s": 0, "text": "\n27 Aug, 2021" }, { "code": null, "e": 332, "s": 28, "text": "Sometimes, while working with dictionaries, we might have a utility problem in which we need to perform elementary operations among the common keys of dictionaries. This can be extended to any operation to be performed. Let’s discuss subtraction of like key values and ways to solve it in this article. " }, { "code": null, "e": 516, "s": 332, "text": "The combination of the above two can be used to perform this particular task. This is just a shorthand to the longer method of loops and can be used to perform this task in one line. " }, { "code": null, "e": 524, "s": 516, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Subtraction of dictionaries# Using dictionary comprehension + keys() # Initialize dictionariestest_dict1 = {'gfg' : 6, 'is' : 4, 'best' : 7}test_dict2 = {'gfg' : 10, 'is' : 6, 'best' : 10} # printing original dictionariesprint(\"The original dictionary 1 : \" + str(test_dict1))print(\"The original dictionary 2 : \" + str(test_dict2)) # Using dictionary comprehension + keys()# Subtraction of dictionariesres = {key: test_dict2[key] - test_dict1.get(key, 0) for key in test_dict2.keys()} # printing resultprint(\"The difference dictionary is : \" + str(res))", "e": 1144, "s": 524, "text": null }, { "code": null, "e": 1326, "s": 1144, "text": "The original dictionary 1 : {'gfg': 6, 'is': 4, 'best': 7}\nThe original dictionary 2 : {'gfg': 10, 'is': 6, 'best': 10}\nThe difference dictionary is : {'gfg': 4, 'is': 2, 'best': 3}" }, { "code": null, "e": 1542, "s": 1328, "text": "The combination of the above methods can be used to perform this particular task. In this, the Counter function converts the dictionary in the form in which the minus operator can perform the task of subtraction. " }, { "code": null, "e": 1550, "s": 1542, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of# Subtraction of dictionaries# Using Counter() + \"-\" operatorfrom collections import Counterfrom collections import subtract # Initialize dictionariestest_dict1 = {'gfg' : 6, 'is' : 4, 'best' : 7}test_dict2 = {'gfg' : 10, 'is' : 6, 'best' : 10} # printing original dictionariesprint(\"The original dictionary 1 : \" + str(test_dict1))print(\"The original dictionary 2 : \" + str(test_dict2)) # Using Counter() + \"-\" operator# Subtraction of dictionariestemp1 = Counter(test_dict1)temp2 = Counter(test_dict2)res = temp2 - temp1 # printing resultprint(\"The difference dictionary is : \" + str(dict(res))) # Using the subtract method of test_dict2.subtract(test_dict1) # printing resultprint(\"The difference dictionary is : \" + str(dict(test_dict2)))", "e": 2335, "s": 1550, "text": null }, { "code": null, "e": 2589, "s": 2335, "text": " The original dictionary 1 : {'gfg': 6, 'is': 4, 'best': 7}\n The original dictionary 2 : {'gfg': 10, 'is': 6, 'best': 10}\n The difference dictionary is : {'gfg': 4, 'is': 2, 'best': 3}" }, { "code": null, "e": 2632, "s": 2591, "text": "wvuq4qtrpgnnizfujqrdqfojlf6091nhax5bv35c" }, { "code": null, "e": 2641, "s": 2632, "text": "sweetyty" }, { "code": null, "e": 2668, "s": 2641, "text": "Python dictionary-programs" }, { "code": null, "e": 2675, "s": 2668, "text": "Python" }, { "code": null, "e": 2691, "s": 2675, "text": "Python Programs" } ]
Multiply two numbers represented by Linked Lists
22 Jun, 2022

Given two numbers represented by linked lists, write a function that returns the product of these two numbers.

Examples:

Input : 9->4->6
        8->4
Output : 79464

Input : 3->2->1
        1->2
Output : 3852

Solution: Traverse both lists to generate the two numbers to be multiplied, then return their product.

Algorithm to generate the number from a linked list representation:

1) Initialize a variable to zero.
2) Start traversing the linked list.
3) Add the value of the first node to this variable.
4) From the second node onward, multiply the variable by 10, take the modulus of this value by 10^9+7, and then add the value of the node to this variable.
5) Repeat step 4 until we reach the last node of the list.

Use the above algorithm on both linked lists to generate the two numbers.

Below is the program for multiplying two numbers represented as linked lists:

C++

// C++ program to Multiply two numbers
// represented as linked lists
#include <bits/stdc++.h>
using namespace std;

// Linked list node
struct Node {
    int data;
    struct Node* next;
};

// Function to create a new node
// with given data
struct Node* newNode(int data)
{
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));
    new_node->data = data;
    new_node->next = NULL;
    return new_node;
}

// Function to insert a node at the
// beginning of the Linked List
void push(struct Node** head_ref, int new_data)
{
    // allocate node
    struct Node* new_node = newNode(new_data);

    // link the old list off the new node
    new_node->next = (*head_ref);

    // move the head to point to the new node
    (*head_ref) = new_node;
}

// Multiply contents of two linked lists
long long multiplyTwoLists(Node* first, Node* second)
{
    long long N = 1000000007;
    long long num1 = 0, num2 = 0;
    while (first || second) {
        if (first) {
            num1 = ((num1)*10) % N + first->data;
            first = first->next;
        }
        if (second) {
            num2 = ((num2)*10) % N + second->data;
            second = second->next;
        }
    }
    return ((num1 % N) * (num2 % N)) % N;
}

// A utility function to print a linked list
void printList(struct Node* node)
{
    while (node != NULL) {
        cout << node->data;
        if (node->next)
            cout << "->";
        node = node->next;
    }
    cout << "\n";
}

// Driver program to test above function
int main()
{
    struct Node* first = NULL;
    struct Node* second = NULL;

    // create first list 9->4->6
    push(&first, 6);
    push(&first, 4);
    push(&first, 9);
    printf("First List is: ");
    printList(first);

    // create second list 8->4
    push(&second, 4);
    push(&second, 8);
    printf("Second List is: ");
    printList(second);

    // Multiply the two lists and see result
    cout << "Result is: ";
    cout << multiplyTwoLists(first, second);
    return 0;
}

// This code is contributed by Sania Kumari Gupta (kriSania804)

C

// C program to Multiply two numbers
// represented as linked lists
#include <stdio.h>
#include <stdlib.h>

// Linked list node
typedef struct Node {
    int data;
    struct Node* next;
} Node;

// Function to create a new node
// with given data
struct Node* newNode(int data)
{
    Node* new_node = (Node*)malloc(sizeof(Node));
    new_node->data = data;
    new_node->next = NULL;
    return new_node;
}

// Function to insert a node at the
// beginning of the Linked List
void push(struct Node** head_ref, int new_data)
{
    // allocate node
    struct Node* new_node = newNode(new_data);

    // link the old list off the new node
    new_node->next = (*head_ref);

    // move the head to point to the new node
    (*head_ref) = new_node;
}

// Multiply contents of two linked lists
long long multiplyTwoLists(Node* first, Node* second)
{
    long long N = 1000000007;
    long long num1 = 0, num2 = 0;
    while (first || second) {
        if (first) {
            num1 = ((num1)*10) % N + first->data;
            first = first->next;
        }
        if (second) {
            num2 = ((num2)*10) % N + second->data;
            second = second->next;
        }
    }
    return ((num1 % N) * (num2 % N)) % N;
}

// A utility function to print a linked list
void printList(struct Node* node)
{
    while (node != NULL) {
        printf("%d", node->data);
        if (node->next)
            printf("->");
        node = node->next;
    }
    printf("\n");
}

// Driver program to test above function
int main()
{
    struct Node* first = NULL;
    struct Node* second = NULL;

    // create first list 9->4->6
    push(&first, 6);
    push(&first, 4);
    push(&first, 9);
    printf("First List is: ");
    printList(first);

    // create second list 8->4
    push(&second, 4);
    push(&second, 8);
    printf("Second List is: ");
    printList(second);

    // Multiply the two lists and see result
    printf("Result is: ");
    printf("%lld", multiplyTwoLists(first, second));
    return 0;
}

// This code is contributed by Sania Kumari Gupta
// (kriSania804)

Java

// Java program to Multiply two numbers
// represented as linked lists
import java.util.*;

public class GFG {

    // Linked list node
    static class Node {
        int data;
        Node next;
        Node(int data) {
            this.data = data;
            next = null;
        }
    }

    // Multiply contents of two linked lists
    static long multiplyTwoLists(Node first, Node second)
    {
        long N = 1000000007;
        long num1 = 0, num2 = 0;
        while (first != null || second != null) {
            if (first != null) {
                num1 = ((num1)*10) % N + first.data;
                first = first.next;
            }
            if (second != null) {
                num2 = ((num2)*10) % N + second.data;
                second = second.next;
            }
        }
        return ((num1 % N) * (num2 % N)) % N;
    }

    // A utility function to print a linked list
    static void printList(Node node)
    {
        while (node != null) {
            System.out.print(node.data);
            if (node.next != null)
                System.out.print("->");
            node = node.next;
        }
        System.out.println();
    }

    // Driver program to test above function
    public static void main(String args[])
    {
        // create first list 9->4->6
        Node first = new Node(9);
        first.next = new Node(4);
        first.next.next = new Node(6);
        System.out.print("First List is: ");
        printList(first);

        // create second list 8->4
        Node second = new Node(8);
        second.next = new Node(4);
        System.out.print("Second List is: ");
        printList(second);

        // Multiply the two lists and see result
        System.out.print("Result is: ");
        System.out.println(multiplyTwoLists(first, second));
    }
}

// This code is contributed by adityapande88

Python3

# Python3 to multiply two numbers
# represented as Linked Lists

# Linked list node class
class Node:

    # Function to initialize the node
    def __init__(self, data):
        self.data = data
        self.next = None

# Linked List Class
class LinkedList:

    # Function to initialize the
    # LinkedList class.
    def __init__(self):
        # Initialize head as None
        self.head = None

    # Function to insert a node at the
    # beginning of the Linked List
    def push(self, new_data):
        # Create a new Node
        new_node = Node(new_data)
        # Make next of the new Node as head
        new_node.next = self.head
        # Move the head to point to new Node
        self.head = new_node

    # Function to print the Linked List
    def printList(self):
        ptr = self.head
        while (ptr != None):
            print(ptr.data, end = '')
            if ptr.next != None:
                print('->', end = '')
            ptr = ptr.next
        print()

# Multiply contents of two Linked Lists
def multiplyTwoLists(first, second):
    num1 = 0
    num2 = 0
    first_ptr = first.head
    second_ptr = second.head

    while first_ptr != None or second_ptr != None:
        if first_ptr != None:
            num1 = (num1 * 10) + first_ptr.data
            first_ptr = first_ptr.next
        if second_ptr != None:
            num2 = (num2 * 10) + second_ptr.data
            second_ptr = second_ptr.next

    return num1 * num2

# Driver code
if __name__ == '__main__':
    first = LinkedList()
    second = LinkedList()

    # Create first Linked List 9->4->6
    first.push(6)
    first.push(4)
    first.push(9)

    # Printing first Linked List
    print("First list is: ", end = '')
    first.printList()

    # Create second Linked List 8->4
    second.push(4)
    second.push(8)

    # Printing second Linked List
    print("Second List is: ", end = '')
    second.printList()

    # Multiply two linked lists and
    # print the result
    result = multiplyTwoLists(first, second)
    print("Result is: ", result)

# This code is contributed by kirtishsurangalikar

C#

// C# program to Multiply two numbers
// represented as linked lists
using System;

public class GFG {

    // Linked list node
    public class Node
    {
        public int data;
        public Node next;
        public Node(int data)
        {
            this.data = data;
            this.next = null;
        }
    }

    // Multiply contents of two linked lists
    public static long multiplyTwoLists(Node first, Node second)
    {
        long N = 1000000007;
        long num1 = 0;
        long num2 = 0;
        while (first != null || second != null)
        {
            if (first != null)
            {
                num1 = ((num1)*10) % N + first.data;
                first = first.next;
            }
            if (second != null)
            {
                num2 = ((num2)*10) % N + second.data;
                second = second.next;
            }
        }
        return ((num1 % N) * (num2 % N)) % N;
    }

    // A utility function to print a linked list
    public static void printList(Node node)
    {
        while (node != null)
        {
            Console.Write(node.data);
            if (node.next != null)
            {
                Console.Write("->");
            }
            node = node.next;
        }
        Console.WriteLine();
    }

    // Driver program to test above function
    public static void Main(String[] args)
    {
        // create first list 9->4->6
        var first = new Node(9);
        first.next = new Node(4);
        first.next.next = new Node(6);
        Console.Write("First List is: ");
        GFG.printList(first);

        // create second list 8->4
        var second = new Node(8);
        second.next = new Node(4);
        Console.Write("Second List is: ");
        GFG.printList(second);

        // Multiply the two lists and see result
        Console.Write("Result is: ");
        Console.WriteLine(GFG.multiplyTwoLists(first, second));
    }
}

// This code is contributed by mukulsomukesh

Javascript

<script>
// Javascript program to Multiply two numbers
// represented as linked lists

// Linked list node
class Node {
    constructor(data)
    {
        this.data = data;
        this.next = null;
    }
}

// Multiply contents of two linked lists
function multiplyTwoLists(first, second)
{
    let N = 1000000007;
    let num1 = 0, num2 = 0;
    while (first != null || second != null) {
        if (first != null) {
            num1 = ((num1)*10) % N + first.data;
            first = first.next;
        }
        if (second != null) {
            num2 = ((num2)*10) % N + second.data;
            second = second.next;
        }
    }
    return ((num1 % N) * (num2 % N)) % N;
}

// A utility function to print a linked list
function printList(node)
{
    while (node != null) {
        document.write(node.data);
        if (node.next != null)
            document.write("->");
        node = node.next;
    }
    document.write("<br>");
}

// Driver program to test above function
// create first list 9->4->6
let first = new Node(9);
first.next = new Node(4);
first.next.next = new Node(6);
document.write("First List is: ");
printList(first);

// create second list 8->4
let second = new Node(8);
second.next = new Node(4);
document.write("Second List is: ");
printList(second);

// Multiply the two lists and see result
document.write("Result is: ");
document.write(multiplyTwoLists(first, second) + "<br>");

// This code is contributed by avanitrachhadiya2155
</script>

Output:

First List is: 9->4->6
Second List is: 8->4
Result is: 79464

Time Complexity: O(max(n1, n2)), where n1 and n2 represent the number of nodes in the first and second linked lists respectively.
Auxiliary Space: O(1), as no extra space is required.

This article is contributed by Harsh Agarwal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
[ { "code": null, "e": 52, "s": 24, "text": "\n22 Jun, 2022" }, { "code": null, "e": 175, "s": 52, "text": "Given two numbers represented by linked lists, write a function that returns the multiplication of these two linked lists." }, { "code": null, "e": 186, "s": 175, "text": "Examples: " }, { "code": null, "e": 274, "s": 186, "text": "Input : 9->4->6\n 8->4\nOutput : 79464\n\nInput : 3->2->1\n 1->2\nOutput : 3852" }, { "code": null, "e": 480, "s": 274, "text": "Solution: Traverse both lists and generate the required numbers to be multiplied and then return the multiplied values of the two numbers. Algorithm to generate the number from linked list representation: " }, { "code": null, "e": 819, "s": 480, "text": "1) Initialize a variable to zero\n2) Start traversing the linked list\n3) Add the value of first node to this variable\n4) From the second node, multiply the variable by 10\n and also take modulus of this value by 10^9+7\n and then add the value of the node to this \n variable.\n5) Repeat step 4 until we reach the last node of the list. " }, { "code": null, "e": 895, "s": 819, "text": "Use the above algorithm with both of linked lists to generate the numbers. " }, { "code": null, "e": 975, "s": 895, "text": "Below is the program for multiplying two numbers represented as linked lists: " }, { "code": null, "e": 984, "s": 975, "text": "Chapters" }, { "code": null, "e": 1011, "s": 984, "text": "descriptions off, selected" }, { "code": null, "e": 1061, "s": 1011, "text": "captions settings, opens captions settings dialog" }, { "code": null, "e": 1084, "s": 1061, "text": "captions off, selected" }, { "code": null, "e": 1092, "s": 1084, "text": "English" }, { "code": null, "e": 1116, "s": 1092, "text": "This is a modal window." }, { "code": null, "e": 1185, "s": 1116, "text": "Beginning of dialog window. Escape will cancel and close the window." }, { "code": null, "e": 1207, "s": 1185, "text": "End of dialog window." 
}, { "code": null, "e": 1211, "s": 1207, "text": "C++" }, { "code": null, "e": 1213, "s": 1211, "text": "C" }, { "code": null, "e": 1218, "s": 1213, "text": "Java" }, { "code": null, "e": 1226, "s": 1218, "text": "Python3" }, { "code": null, "e": 1229, "s": 1226, "text": "C#" }, { "code": null, "e": 1240, "s": 1229, "text": "Javascript" }, { "code": "// C++ program to Multiply two numbers// represented as linked lists#include<bits/stdc++.h>#include<stdio.h>using namespace std; // Linked list nodestruct Node{ int data; struct Node* next;}; // Function to create a new node // with given datastruct Node *newNode(int data){ struct Node *new_node = (struct Node *) malloc(sizeof(struct Node)); new_node->data = data; new_node->next = NULL; return new_node;} // Function to insert a node at the // beginning of the Linked Listvoid push(struct Node** head_ref, int new_data){ // allocate node struct Node* new_node = newNode(new_data); // link the old list off the new node new_node->next = (*head_ref); // move the head to point to the new node (*head_ref) = new_node;} // Multiply contents of two linked listslong long multiplyTwoLists (Node* first, Node* second){ long long N= 1000000007; long long num1 = 0, num2 = 0; while (first || second){ if(first){ num1 = ((num1)*10)%N + first->data; first = first->next; } if(second) { num2 = ((num2)*10)%N + second->data; second = second->next; } } return ((num1%N)*(num2%N))%N;} // A utility function to print a linked listvoid printList(struct Node *node){ while(node != NULL) { cout<<node->data; if(node->next) cout<<\"->\"; node = node->next; } cout<<\"\\n\";} // Driver program to test above functionint main(){ struct Node* first = NULL; struct Node* second = NULL; // create first list 9->4->6 push(&first, 6); push(&first, 4); push(&first, 9); printf(\"First List is: \"); printList(first); // create second list 8->4 push(&second, 4); push(&second, 8); printf(\"Second List is: \"); printList(second); // Multiply the two lists and see result cout<<\"Result is: \"; cout<<multiplyTwoLists(first, second); return 0;} // This code is contributed by Sania Kumari Gupta (kriSania804)", "e": 3312, "s": 1240, "text": null }, { "code": "// C program to Multiply two numbers// represented as linked lists#include <stdio.h>#include <stdlib.h>// Linked list nodetypedef struct Node { int data; struct Node* next;} Node; // Function to create a new node// with given datastruct Node* newNode(int data){ Node* new_node = (Node*)malloc(sizeof(Node)); new_node->data = data; new_node->next = NULL; return new_node;} // Function to insert a node at the// beginning of the Linked Listvoid push(struct Node** head_ref, int new_data){ // allocate node struct Node* new_node = newNode(new_data); // link the old list off the new node new_node->next = (*head_ref); // move the head to point to the new node (*head_ref) = new_node;} // Multiply contents of two linked listslong long multiplyTwoLists(Node* first, Node* second){ long long N = 1000000007; long long num1 = 0, num2 = 0; while (first || second) { if (first) { num1 = ((num1)*10) % N + first->data; first = first->next; } if (second) { num2 = ((num2)*10) % N + second->data; second = second->next; } } return ((num1 % N) * (num2 % N)) % N;} // A utility function to print a linked listvoid printList(struct Node* node){ while (node != NULL) { printf(\"%d\", node->data); if (node->next) printf(\"->\"); node = node->next; } printf(\"\\n\");} // Driver program to test above functionint main(){ struct Node* first = NULL; struct Node* second = NULL; // 
create first list 9->4->6 push(&first, 6); push(&first, 4); push(&first, 9); printf(\"First List is: \"); printList(first); // create second list 8->4 push(&second, 4); push(&second, 8); printf(\"Second List is: \"); printList(second); // Multiply the two lists and see result printf(\"Result is: \"); printf(\"%lld\", multiplyTwoLists(first, second)); return 0;} // This code is contributed by Sania Kumari Gupta// (kriSania804)", "e": 5327, "s": 3312, "text": null }, { "code": "// Java program to Multiply two numbers// represented as linked listsimport java.util.*; public class GFG{ // Linked list node static class Node { int data; Node next; Node(int data){ this.data = data; next = null; } } // Multiply contents of two linked lists static long multiplyTwoLists(Node first, Node second) { long N = 1000000007; long num1 = 0, num2 = 0; while (first != null || second != null){ if(first != null){ num1 = ((num1)*10)%N + first.data; first = first.next; } if(second != null) { num2 = ((num2)*10)%N + second.data; second = second.next; } } return ((num1%N)*(num2%N))%N; } // A utility function to print a linked list static void printList(Node node) { while(node != null) { System.out.print(node.data); if(node.next != null) System.out.print(\"->\"); node = node.next; } System.out.println(); } // Driver program to test above function public static void main(String args[]) { // create first list 9->4->6 Node first = new Node(9); first.next = new Node(4); first.next.next = new Node(6); System.out.print(\"First List is: \"); printList(first); // create second list 8->4 Node second = new Node(8); second.next = new Node(4); System.out.print(\"Second List is: \"); printList(second); // Multiply the two lists and see result System.out.print(\"Result is: \"); System.out.println(multiplyTwoLists(first, second)); }} // This code is contributed by adityapande88", "e": 7195, "s": 5327, "text": null }, { "code": "# Python3 to multiply two numbers# represented as Linked Lists # Linked list node class class Node: # Function to initialize the node def __init__(self, data): self.data = data self.next = None # Linked List Classclass LinkedList: # Function to initialize the # LinkedList class. 
def __init__(self): # Initialize head as None self.head = None # Function to insert a node at the # beginning of the Linked List def push(self, new_data): # Create a new Node new_node = Node(new_data) # Make next of the new Node as head new_node.next = self.head # Move the head to point to new Node self.head = new_node # Function to print the Linked List def printList(self): ptr = self.head while (ptr != None): print(ptr.data, end = '') if ptr.next != None: print('->', end = '') ptr = ptr.next print() # Multiply contents of two Linked Listsdef multiplyTwoLists(first, second): num1 = 0 num2 = 0 first_ptr = first.head second_ptr = second.head while first_ptr != None or second_ptr != None: if first_ptr != None: num1 = (num1 * 10) + first_ptr.data first_ptr = first_ptr.next if second_ptr != None: num2 = (num2 * 10) + second_ptr.data second_ptr = second_ptr.next return num1 * num2 # Driver codeif __name__=='__main__': first = LinkedList() second = LinkedList() # Create first Linked List 9->4->6 first.push(6) first.push(4) first.push(9) # Printing first Linked List print(\"First list is: \", end = '') first.printList() # Create second Linked List 8->4 second.push(4) second.push(8) # Printing second Linked List print(\"Second List is: \", end = '') second.printList() # Multiply two linked list and # print the result result = multiplyTwoLists(first, second) print(\"Result is: \", result) # This code is contributed by kirtishsurangalikar", "e": 9402, "s": 7195, "text": null }, { "code": "// C++ program to Multiply two numbers// represented as linked listsusing System;public class GFG{ // Linked list node public class Node { public int data; public Node next; public Node(int data) { this.data = data; this.next = null; } } // Multiply contents of two linked lists public static long multiplyTwoLists(Node first, Node second) { var N = 1000000007; var num1 = 0; var num2 = 0; while (first != null || second != null) { if (first != null) { num1 = ((num1)*10) % N + first.data; first = first.next; } if (second != null) { num2 = ((num2)*10) % N + second.data; second = second.next; } } return ((num1 % N) * (num2 % N)) % N; } // A utility function to print a linked list public static void printList(Node node) { while (node != null) { Console.Write(node.data); if (node.next != null) { Console.Write(\"->\"); } node = node.next; } Console.WriteLine(); } // Driver program to test above function public static void Main(String[] args) { // create first list 9->4->6 var first = new Node(9); first.next = new Node(4); first.next.next = new Node(6); Console.Write(\"First List is: \"); GFG.printList(first); // create second list 8->4 var second = new Node(8); second.next = new Node(4); Console.Write(\"Second List is: \"); GFG.printList(second); // Multiply the two lists and see result Console.Write(\"Result is: \"); Console.WriteLine( GFG.multiplyTwoLists(first, second)); }} // This code is contributed by mukulsomukesh", "e": 11327, "s": 9402, "text": null }, { "code": "<script>// Javascript program to Multiply two numbers// represented as linked lists // Linked list nodeclass Node{ constructor(data) { this.data=data; this.next = null; }} // Multiply contents of two linked listsfunction multiplyTwoLists(first,second){ let N = 1000000007; let num1 = 0, num2 = 0; while (first != null || second != null){ if(first != null){ num1 = ((num1)*10)%N + first.data; first = first.next; } if(second != null) { num2 = ((num2)*10)%N + second.data; second = second.next; } } return ((num1%N)*(num2%N))%N; } // A utility function to print a 
linked list function printList(node) { while(node != null) { document.write(node.data); if(node.next != null) document.write(\"->\"); node = node.next; } document.write(\"<br>\"); } // Driver program to test above function// create first list 9->4->6let first = new Node(9);first.next = new Node(4);first.next.next = new Node(6);document.write(\"First List is: \");printList(first); // create second list 8->4let second = new Node(8);second.next = new Node(4);document.write(\"Second List is: \");printList(second); // Multiply the two lists and see resultdocument.write(\"Result is: \");document.write(multiplyTwoLists(first, second)+\"<br>\"); // This code is contributed by avanitrachhadiya2155</script>", "e": 12917, "s": 11327, "text": null }, { "code": null, "e": 12978, "s": 12917, "text": "First List is: 9->4->6\nSecond List is: 8->4\nResult is: 79464" }, { "code": null, "e": 13187, "s": 12978, "text": "Time Complexity: O(max(n1, n2)), where n1 and n2 represents the number of nodes present in the first and second linked list respectively.Auxiliary Space: O(1), no extra space is required, so it is a constant." }, { "code": null, "e": 13609, "s": 13187, "text": "This article is contributed by Harsh Agarwal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 13623, "s": 13609, "text": "harishkumar88" }, { "code": null, "e": 13634, "s": 13623, "text": "andrew1234" }, { "code": null, "e": 13648, "s": 13634, "text": "princiraj1992" }, { "code": null, "e": 13666, "s": 13648, "text": "daljeetsingh22sk9" }, { "code": null, "e": 13680, "s": 13666, "text": "adityapande88" }, { "code": null, "e": 13700, "s": 13680, "text": "kirtishsurangalikar" }, { "code": null, "e": 13721, "s": 13700, "text": "avanitrachhadiya2155" }, { "code": null, "e": 13735, "s": 13721, "text": "mukulsomukesh" }, { "code": null, "e": 13747, "s": 13735, "text": "krisania804" }, { "code": null, "e": 13757, "s": 13747, "text": "samim2000" }, { "code": null, "e": 13764, "s": 13757, "text": "Amazon" }, { "code": null, "e": 13783, "s": 13764, "text": "Modular Arithmetic" }, { "code": null, "e": 13795, "s": 13783, "text": "Linked List" }, { "code": null, "e": 13802, "s": 13795, "text": "Amazon" }, { "code": null, "e": 13814, "s": 13802, "text": "Linked List" }, { "code": null, "e": 13833, "s": 13814, "text": "Modular Arithmetic" }, { "code": null, "e": 13931, "s": 13833, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 13950, "s": 13931, "text": "LinkedList in Java" }, { "code": null, "e": 13982, "s": 13950, "text": "Introduction to Data Structures" }, { "code": null, "e": 14046, "s": 13982, "text": "What is Data Structure: Types, Classifications and Applications" }, { "code": null, "e": 14067, "s": 14046, "text": "Linked List vs Array" }, { "code": null, "e": 14114, "s": 14067, "text": "Implementing a Linked List in Java using Class" }, { "code": null, "e": 14169, "s": 14114, "text": "Find Length of a Linked List (Iterative and Recursive)" }, { "code": null, "e": 14225, "s": 14169, "text": "Function to check if a singly linked list is palindrome" }, { "code": null, "e": 14260, "s": 14225, "text": "Queue - Linked List Implementation" }, { "code": null, "e": 14327, "s": 14260, "text": "Write a function to get the intersection point of two Linked Lists" } ]
Difference between the COPY and ADD commands in a Dockerfile
10 May, 2022

When creating Dockerfiles, it's often necessary to transfer files from the host system into the Docker image. These could be property files, native libraries, or other static content that our applications will require at runtime.

The Dockerfile specification provides two ways to copy files from the source system into an image: the COPY and ADD directives. Here we will look at the difference between them and when it makes sense to use each one.

Sometimes you see COPY or ADD being used in a Dockerfile, but 99% of the time you should be using COPY. Here's why.

COPY and ADD are both Dockerfile instructions that serve similar purposes. They let you copy files from a specific location into a Docker image. COPY takes in a src and a destination. It only lets you copy in a local file or directory from your host (the machine building the Docker image) into the Docker image itself.

COPY <src> <dest>

ADD lets you do that too, but it also supports 2 other sources. First, you can use a URL instead of a local file/directory. Second, you can extract a tar file from the source directly into the destination.

ADD <src> <dest>

In most cases, if you're using a URL, you download a zip file and then use the RUN command to extract it. However, you might as well just use RUN and curl instead of ADD here, so you can chain everything into one RUN command to make a smaller Docker image. A valid use case for ADD is when you want to extract a local tar file into a specific directory in your Docker image. This is exactly what the Alpine image does with ADD rootfs.tar.gz /.

If you are copying local files to your Docker image, always use COPY, because it's more explicit.

While the functionality is similar, the ADD directive is more powerful in two ways:

1. It can handle remote URLs.
2. It can also auto-extract tar files.

Let's look at these more closely.

First, the ADD directive can accept a remote URL for its source argument. The COPY directive, on the other hand, can only accept local files.

Note: Using ADD to fetch and copy remote files is not typically ideal. This is because the file will increase the overall Docker image size. Instead, we should use curl or wget to fetch remote files and remove them when no longer needed.

Second, the ADD directive will automatically expand tar files into the image file system. While this can reduce the number of Dockerfile steps required to build an image, it may not be desired in all cases.

Note: The auto-expansion only occurs when the source file is local to the host system.

When to use ADD or COPY: According to the Dockerfile best practices guide, we should always prefer COPY over ADD unless we specifically need one of the two additional features of ADD. As noted above, the ADD command automatically expands tar files and certain compressed formats, which can lead to unexpected files being written to the file system in our images.

Conclusion: Here we have seen the two primary ways to copy files into a Docker image: ADD and COPY. While functionally similar, the COPY directive is preferred for most cases. This is because the ADD directive provides additional functionality that should be used with caution and only when needed.

Let us see the differences in a tabular form:

COPY                                         | ADD
---------------------------------------------|---------------------------------------------
Copies local files/directories only          | Copies local files/directories and remote URLs
No archive handling                          | Auto-extracts local tar archives into the destination
Preferred for most use cases                 | Use only when URL fetching or tar extraction is needed
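To make the two directives concrete, here is a minimal Dockerfile sketch; the file names and the URL are hypothetical placeholders, not taken from the article:

FROM alpine:3.18

# COPY: local files or directories from the build context only
COPY app.properties /app/config/

# ADD: same syntax, but a local tar archive is auto-extracted
# into the destination directory
ADD vendor-libs.tar.gz /app/libs/

# Preferred over ADD with a URL: fetch, extract, and clean up in a
# single RUN layer so the downloaded archive never bloats the image
RUN wget -O /tmp/tool.tar.gz https://example.com/tool.tar.gz && \
    tar -xzf /tmp/tool.tar.gz -C /usr/local/bin && \
    rm /tmp/tool.tar.gz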
How to close all activities at once in android?
This example demonstrates how to close all activities at once in an Android app.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">
    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_above="@id/button"
        android:text="Activity_One"
        android:gravity="center"
        android:layout_marginBottom="20sp" />
    <Button
        android:id="@+id/button"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Click here to start Second Activity!"
        android:layout_centerInParent="true" />
</RelativeLayout>

Step 3 − Add the following code to src/MainActivity.java

import android.content.Intent;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
public class MainActivity extends AppCompatActivity {
    Button button;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        button = (Button)findViewById(R.id.button);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent intent = new Intent(MainActivity.this, SecondActivity.class);
                startActivity(intent);
            }
        });
    }
}

Step 4 − Create a new Activity (SecondActivity) and add the following code to res/layout/activity_second.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".SecondActivity">
    <Button
        android:id="@+id/terminateButton"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Terminate all the activities"
        android:layout_centerInParent="true"/>
</RelativeLayout>

Step 5 − Add the following code to src/SecondActivity.java

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.os.Process;
public class SecondActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_second);
        Button button = (Button)findViewById(R.id.terminateButton);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // finishing this activity triggers onDestroy() below
                finish();
            }
        });
    }
    protected void onDestroy(){
        // killing the process takes down every activity of the app at once
        Process.killProcess(Process.myPid());
        super.onDestroy();
    }
}

Step 6 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".SecondActivity"></activity>
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −

Click here to download the project code.
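As a closing aside that is not part of the original tutorial: on API level 16 and above, Activity.finishAffinity() offers a lighter-weight alternative that finishes the current activity and all activities directly below it in the same task, without killing the process. The click handler above could be swapped for a variant like this:

// Hypothetical alternative to the onClick() handler in SecondActivity:
// finishAffinity() (API 16+) closes this activity and every activity
// below it in the same task, with no call to Process.killProcess().
button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        finishAffinity();
    }
});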
[ { "code": null, "e": 1142, "s": 1062, "text": "This example demonstrates how do I close all activities at once in android app." }, { "code": null, "e": 1271, "s": 1142, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1335, "s": 1271, "text": "Step 2 − Add the following code to res/layout/activity_main.xml" }, { "code": null, "e": 2111, "s": 1335, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <TextView\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_above=\"@id/button\"\n android:text=\"Activity_One\"\n android:gravity=\"center\"\n android:layout_marginBottom=\"20sp\" />\n <Button\n android:id=\"@+id/button\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:text=\"Click here to start Second Activity!\"\n android:layout_centerInParent=\"true\" />\n</RelativeLayout>" }, { "code": null, "e": 2168, "s": 2111, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 2872, "s": 2168, "text": "import android.content.Intent;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Button;\npublic class MainActivity extends AppCompatActivity {\n Button button;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n button = (Button)findViewById(R.id.button);\n button.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n Intent intent = new Intent(MainActivity.this, SecondActivity.class);\n startActivity(intent);\n }\n });\n }\n}" }, { "code": null, "e": 2981, "s": 2872, "text": "Step 4 − Create a new Activity(SecondActivity) and add the following code to res/layout/activity_second.xml." 
}, { "code": null, "e": 3512, "s": 2981, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".SecondActivity\">\n <Button\n android:id=\"@+id/terminateButton\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:text=\"Terminate all the activities\"\n android:layout_centerInParent=\"true\"/>\n</RelativeLayout>" }, { "code": null, "e": 3571, "s": 3512, "text": "Step 5 − Add the following code to src/SecondActivity.java" }, { "code": null, "e": 4284, "s": 3571, "text": "import android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Button;\nimport android.os.Process;\npublic class SecondActivity extends AppCompatActivity {\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_second);\n Button button = (Button)findViewById(R.id.terminateButton);\n button.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n finish();\n }\n });\n }\n protected void onDestroy(){\n Process.killProcess(Process.myPid());\n super.onDestroy();\n }\n}" }, { "code": null, "e": 4339, "s": 4284, "text": "Step 6 − Add the following code to androidManifest.xml" }, { "code": null, "e": 5068, "s": 4339, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".SecondActivity\"></activity>\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 5415, "s": 5068, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 5456, "s": 5415, "text": "Click here to download the project code." } ]
Simulating Tennis Matches with Python or Moneyball for Tennis | by Osho Jha | Towards Data Science
Over the past few months, I've thought a lot about sports betting. Sure, the regulatory changes help, but mostly it's hard for me to look at a problem with so much widely available data and not attempt it.

I grew up both playing and watching tennis. When I started really getting into the sport, Pete Sampras had more or less retired, Agassi hadn't quite had his renaissance but was still a competitor, and newer guys like Safin (fresh off a victory against Sampras in the US Open final) had established reputations as dangerous and talented players. Then came Roger Federer. He turned everything on its head — forget the world records, he was putting up some of the most amazing match stats I had ever seen. It was textbook play in terms of winners, winners-to-errors ratio, first serve percentage, and break points converted. You didn't need to see the stats though; you could watch more or less any of his matches and see a near perfect execution of any shot. Later on, when I stopped watching his matches due to the intense anxiety I had (especially when he was playing Rafa), I noticed that you could decipher a lot of how his match was going based on his first serve percentage. It's still a metric I use to track his performance through tournaments. So anyways, this is some personal background for the simulation system.

Our Objective: We're interested in predicting tennis matches. While there are several approaches to do this, we want to take a point-by-point approach. Assume that a player's serve is iid. Let ps1 be the probability that player 1 will win a point on his serve and let ps2 be the probability that player 2 will win a point on his serve. If one player wins a point, the other player has to lose a point, so the probability of winning a return for player 1 is just pr1 = 1 − ps2. We can therefore forget about return probabilities.

While you can work out the probability of winning a match with recursive equations and a bit of combinatorics, we want to simulate this instead. We will assume that all sets at 6 games each end in tiebreaks and we play best of 3 sets. For added complexity, let's assume that the probability of winning the point changes on 'big points'. A big point is defined as a point that can win you a game or set (so this includes set points in tiebreaks). So we add the probabilities ps1,B and ps2,B as the probability that a player will win a point on serve on a big point. Otherwise the probabilities remain ps1 and ps2.

We'll aim to format our scores in the following manner: 40–15 | 6:4 6:7 3:3, i.e. for games in progress and tiebreaks, list the player serving first. For sets (whether in progress or completed), list player 1 first, then player 2. This will help us track the game point-by-point and will make things a bit easier when debugging, as any odd score combos will stand out and clue us in to possible issues in our logic.
General Outline: For a quick first pass, we'll approach this problem by setting up a series of functions:

- a function simulate_set, which tracks games in the set and determines whether or not the set is going to a tiebreak
- a function called player_server, which increments points for the server and returner and keeps track of big points
- a function isBigPoint, which is more of a helper function that determines whether the current point is a big point, and an accompanying function getBigPointProbability, which also serves as a helper and returns the new probabilities if a point is determined to be a big point
- a function simulate_tiebreak, which plays tiebreakers
- a function getScore, which functions as a scoreboard
- two functions printSetMatchSummary and pointsMatchSummary, which print helpful updates such as "Player 1 wins set 1 6 games to 2" and "Player 1 wins the match 3 sets to 1". These functions aren't explicitly necessary, but I think they offer a nice aesthetic touch.

Build out the functions: After completing a simulation, or any code structure for that matter, going back and writing about it never captures the whole process — all the written and deleted lines, all the "yes!" moments of cracking logic that ultimately lead to "damn! back to the drawing board" run results. So what's presented below is a full copy of the necessary functions:
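A minimal sketch of those functions, written to be consistent with the control-flow code further below (note the control flow uses camelCase names such as simulateSet and simulateTiebreaker, which the sketch follows). The internals here are one plausible implementation matching the signatures the control flow expects, not a definitive one: they assume the globals p1, p2, p1_big_point, and p2_big_point that the control flow defines, fold the per-game serving logic into simulateGame rather than a separate player_server function, and accept completed_sets only for signature compatibility.

import random

def isBigPoint(server_points, returner_points, tiebreak=False):
    # the server can close out the game (or the tiebreak) with the next point
    needed = 6 if tiebreak else 3
    return server_points >= needed and server_points >= returner_points + 1

def getBigPointProbability(server):
    # swap in the big-point serve probability for whoever is serving
    return p1_big_point if server == p1 else p2_big_point

def getScore(server_points, returner_points):
    # scoreboard helper: translate raw point counts into tennis calls
    if server_points >= 3 and returner_points >= 3:
        if server_points == returner_points:
            return "40-40"
        return "Ad-40" if server_points > returner_points else "40-Ad"
    calls = ["0", "15", "30", "40"]
    return calls[server_points] + "-" + calls[returner_points]

def simulateGame(serve_prob, server, games_server, games_returner):
    # plays one service game point by point; returns (server pts, returner pts)
    points_server, points_returner = 0, 0
    while max(points_server, points_returner) < 4 or abs(points_server - points_returner) < 2:
        prob = serve_prob
        if isBigPoint(points_server, points_returner):
            print("game point")
            prob = getBigPointProbability(server)
        print("%s %s|[%d-%d]" % (server, getScore(points_server, points_returner),
                                 games_server, games_returner))
        if random.random() < prob:
            points_server += 1
        else:
            points_returner += 1
    return points_server, points_returner

def simulateSet(a, b, gamesMatch, S, pointsMatch1, pointsMatch2, completed_sets):
    gamesSet1, gamesSet2 = 0, 0
    while (max(gamesSet1, gamesSet2) < 6 or abs(gamesSet1 - gamesSet2) < 2) \
            and not (gamesSet1 == 6 and gamesSet2 == 6):
        if gamesMatch % 2 == 0:  # player 1 serves the even-numbered games
            s_pts, r_pts = simulateGame(a, p1, gamesSet1, gamesSet2)
            pointsMatch1, pointsMatch2 = pointsMatch1 + s_pts, pointsMatch2 + r_pts
            gamesSet1, gamesSet2 = gamesSet1 + (s_pts > r_pts), gamesSet2 + (s_pts < r_pts)
            print(" %s: %d, %s: %d" % (p1, s_pts, p2, r_pts))
        else:                    # player 2 serves
            s_pts, r_pts = simulateGame(b, p2, gamesSet2, gamesSet1)
            pointsMatch2, pointsMatch1 = pointsMatch2 + s_pts, pointsMatch1 + r_pts
            gamesSet2, gamesSet1 = gamesSet2 + (s_pts > r_pts), gamesSet1 + (s_pts < r_pts)
            print(" %s: %d, %s: %d" % (p2, s_pts, p1, r_pts))
        gamesMatch += 1
    S += 1  # one more set in the books (a 6-6 set is settled by the tiebreak)
    return gamesSet1, gamesSet2, gamesMatch, S, pointsMatch1, pointsMatch2

def simulateTiebreaker(p1, p2, a, b, gamesMatch, pointsMatch1, pointsMatch2, completed_sets):
    pointsTie1, pointsTie2 = 0, 0
    point = 0
    while max(pointsTie1, pointsTie2) < 7 or abs(pointsTie1 - pointsTie2) < 2:
        # serve rotation: one point for the opening server, then two points each
        p1_serving = (gamesMatch + (point + 1) // 2) % 2 == 0
        prob = a if p1_serving else b
        srv, ret = (pointsTie1, pointsTie2) if p1_serving else (pointsTie2, pointsTie1)
        if isBigPoint(srv, ret, tiebreak=True):
            prob = getBigPointProbability(p1 if p1_serving else p2)
        server_won = random.random() < prob
        if server_won == p1_serving:
            pointsTie1 += 1
        else:
            pointsTie2 += 1
        point += 1
    pointsMatch1 += pointsTie1
    pointsMatch2 += pointsTie2
    gamesMatch += 1  # the tiebreak counts as the 13th game of the set
    return pointsTie1, pointsTie2, gamesMatch, pointsMatch1, pointsMatch2

def printSetMatchSummary(p1, p2, gamesSet1, gamesSet2, S, pointsTie1, pointsTie2,
                         setsMatch1, setsMatch2):
    if gamesSet1 == 6 and gamesSet2 == 6:  # the set was decided by the tiebreak
        if pointsTie1 > pointsTie2:
            setsMatch1 += 1
            print("%s wins set %d, 7 games to 6" % (p1, S))
        else:
            setsMatch2 += 1
            print("%s wins set %d, 7 games to 6" % (p2, S))
    elif gamesSet1 > gamesSet2:
        setsMatch1 += 1
        print("%s wins set %d, %d games to %d" % (p1, S, gamesSet1, gamesSet2))
    else:
        setsMatch2 += 1
        print("%s wins set %d, %d games to %d" % (p2, S, gamesSet2, gamesSet1))
    return setsMatch1, setsMatch2

def pointsMatchSummary(p1, p2, setsMatch1, setsMatch2, pointsMatch1, pointsMatch2):
    w = p1 if setsMatch1 > setsMatch2 else p2
    print("%s wins the match %d sets to %d (points: %s %d, %s %d)"
          % (w, max(setsMatch1, setsMatch2), min(setsMatch1, setsMatch2),
             p1, pointsMatch1, p2, pointsMatch2))
    winner.append(w)  # records the result for the multi-run control flow below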
If we wanted to run 10, 100, 1000, or more simulations of this match in order to understand confidence intervals, we can change our control code as follows:

#this control flow module runs 1000 simulations
#and stores the winner of each simulation in winner = []
winner = []
p1 = "A"
p2 = "B"
a = 0.64
b = 0.62
p1_big_point = 0.70
p2_big_point = 0.68

#run 1000 runs of the simulation
for ii in range(0, 1000):
    completed_sets = []
    S = 0
    gamesMatch = 0
    pointsMatch1, pointsMatch2 = 0, 0
    setsMatch1, setsMatch2 = 0, 0
    pointsTie1, pointsTie2 = 0, 0
    pointsGame1, pointsGame2 = 0, 0

    while S < 5 and max(setsMatch1, setsMatch2) < 3:
        #simulateSet and the other helpers are defined in the gist above
        gamesSet1, gamesSet2, gamesMatch, S, pointsMatch1, pointsMatch2 = simulateSet(
            a, b, gamesMatch, S, pointsMatch1, pointsMatch2, completed_sets)
        print()
        if gamesSet1 == 6 and gamesSet2 == 6:
            pointsTie1, pointsTie2, gamesMatch, pointsMatch1, pointsMatch2 = simulateTiebreaker(
                p1, p2, a, b, gamesMatch, pointsMatch1, pointsMatch2, completed_sets)
        setsMatch1, setsMatch2 = printSetMatchSummary(
            p1, p2, gamesSet1, gamesSet2, S,
            pointsTie1, pointsTie2, setsMatch1, setsMatch2)
        if gamesSet1 == 6 and gamesSet2 == 6:
            if pointsTie1 > pointsTie2:
                completed_sets.append([gamesSet1 + 1, gamesSet2])
            else:
                completed_sets.append([gamesSet1, gamesSet2 + 1])
        else:
            completed_sets.append([gamesSet1, gamesSet2])

    pointsMatchSummary(p1, p2, setsMatch1, setsMatch2, pointsMatch1, pointsMatch2)
    #record the winner of this run, as the comment at the top promises
    winner.append(p1 if setsMatch1 > setsMatch2 else p2)

When our code is run the output looks as follows (just a small snippet). Note, at the end of a game the code displays the number of points won by player A vs. player B:

A 0-0|[0-0]
A 15-0|[0-0]
A 15-15|[0-0]
A 30-15|[0-0]
game point
A 40-15|[0-0]
game point
A 40-30|[0-0] A: 4, B: 2
B 0-0|[0-1]
B 0-15|[0-1]
B 0-30|[0-1]
B 15-30|[0-1]
B 15-40|[0-1]
B 30-40|[0-1] B: 2, A: 4 -- A broke
A 0-0|[2-0]
A 0-15|[2-0]
A 15-15|[2-0]
A 15-30|[2-0]
A 15-40|[2-0] A: 1, B: 4 -- B broke
B 0-0|[1-2]
B 15-0|[1-2]
B 30-0|[1-2]
game point
B 40-0|[1-2]
game point
B 40-15|[1-2] B: 4, A: 1

How to use this for betting: Now, I'm not going to give away all my secrets, but tennis betting is a fairly good target for statistical methods. I haven't used this yet, but instead of betting on full matches (or maybe in addition to them) I would use this to bet on the next point won. Given the output, we could parse many runs to see how often a player wins on 40–15 or 30–30.
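To make that concrete, here's a minimal sketch of how the winner list the loop above fills in could be turned into a win-probability estimate with a rough confidence interval. The normal-approximation (Wald) interval is an assumption on my part, not something from the original code:

import math

def win_probability(winner, player="A", z=1.96):
    # Share of simulated matches won by `player`, with an
    # approximate 95% normal-approximation confidence interval.
    n = len(winner)
    p_hat = sum(1 for w in winner if w == player) / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, (p_hat - half_width, p_hat + half_width)

p_hat, ci = win_probability(winner)
print(f"P(A wins match) ~ {p_hat:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")

The same pattern extends to conditional events, such as filtering the point log for games that reached 30–30 and counting how often the server still held.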
Isn't this a bit simplistic?

Yes! Most simulations are. In this case, our focus was to get some skeleton of game flow going, and we narrowed down a whole match into tracking ps1 and ps2.

For improvements on the simulation, I had a few thoughts that can be categorized as statistical/model improvements and coding/software engineering improvements:

Statistical:

Originally, we thought we could make a more accurate simulation by incorporating more data instead of just a singular probability for winning on serve. However, after more thought, we believe we can still work with a singular probability. In order to set an appropriate starting probability we can take a player's stats from completed matches; specifically, the metrics of importance would be: first serve percent, first serve points won (%), second serve percent, second serve points won (%), break points converted (%), and break points defended (%). We can average them to get a comprehensive metric or — continuing with our serving-focused simulation — construct a metric that would look something like first serve percent/L2_norm(metrics_listed_above). We can run this for other metrics as the numerator and check against historical matches to see which metric has the most predictive power. The use of the L2 norm puts the chosen numerator in context of the other metrics. We should also implement big points such that break points against a server are recognized as big points.

This brings up another point about our simulation. It's focused around players' points won on serve, but in a real match that fluctuates. We are using a fixed probability, and even the best methods of computing that fixed probability don't really encompass the fact that during a match there are ebbs and flows in performance. A more accurate way to model this problem is to use a dynamically calculated probability of winning on serve. A tennis match can be represented as a discrete state space. This allows us to model a match with a Markov chain: each point in a game is a state, there is a transition probability for entering the next state, and those probabilities can be encoded into a transition matrix. As with most Markov models, the implementation would probably be a bit tedious, but it is worth pointing out that it's doable and probably a better way to utilize the data collected on the metrics we listed above.

Stepping back from the idea of a simulation, if our goal is to forecast the winner, we suspect tennis would be a great domain for classification algorithms. We believe that a classification algorithm could appropriately utilize the rich set of features that can be attributed to players and matches. For example, the metrics of importance listed above are only a subset of all the statistics created and generated during a match. Likewise, each match has its own features, the simplest of which is weather conditions. We could align these values on a match level, i.e. stats computed at the end of a match, or even down to the point level in order to have a substantially larger mass of data. Looking at historical matches for each player we could compute a probability for beating a certain type of opponent.

However, all this assumes that player A and player B are real players with match history and not just constructs for the sake of simulation.

Engineering:

The code that was put together for this problem has a few separate functions and a main loop that controls the flow between those functions. While this is functional, we believe the solution can be better synthesized when approached in an object-oriented way. Tennis lends itself to this kind of implementation, as each player has distinct properties, and matches, sets, and games all have shared properties.

To start, we would want to create a Player class, which would hold attributes such as probability of winning a point, probability of winning a big point, sets won, and games won. Each player would be an object of this class. We could implement an addPoint function in this class so that each player object could track its own points. This would help us implement an event-log type trigger where events such as tiebreaks or big points would occur if a Player object's point count hits certain numbers.
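As a rough illustration of that idea (a sketch only; the names and attributes are my guesses at the design described above, not code from the project):

class Player:
    def __init__(self, name, p_serve, p_serve_big):
        self.name = name
        self.p_serve = p_serve            # probability of winning a point on serve
        self.p_serve_big = p_serve_big    # probability on a big point
        self.points = 0
        self.games_won = 0
        self.sets_won = 0

    def addPoint(self):
        # Each player tracks its own points; callers can inspect the
        # count afterwards to trigger game/tiebreak/big-point events.
        self.points += 1

player_a = Player("A", 0.64, 0.70)
player_a.addPoint()
print(player_a.name, player_a.points)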
Next we would want to create a class Match and classes Set and Game that would extend Match and Set, respectively. Similar to the Player class, there would be many getter and setter functions to track properties of each Match, Set, and Game. The most important of these would be getScore, where Match, Set, and Game would each have their own copy through inheritance, and since this function would live within its own class, no function parameters would be needed. Another interesting advantage of inheritance would be isTiebreak and playTiebreak functions in both Set and Game. isTiebreak would just return a boolean indicating whether a game or set has entered a tiebreak, and playTiebreak would carry through the logic. The reason this works is because in tennis a game is more or less an atomic unit out of which sets are made. So when two players have a score of 40–40 they are really playing a mini tiebreak, except the score oscillates between Deuce (D) and Advantage (A) as opposed to the numeric values used in a tiebreak at the end of a set. We could theoretically make Game the base class, with Set extending Game and Match extending Set, but this hasn't been fully thought out yet.

We believe the above is the best layout for this problem on a production level. This allows for greater flexibility going forward, as we can easily add more attributes to each class. The code submitted, however, is a proof of concept. We can run simulations and ultimately understand whether or not there is value to running this regularly. Outside of the scope of this problem, if we were considering testing models for efficacy, the function-based script approach allows for a quick proof of concept. Once we have found validity in our model we can set up production-level code in the object-oriented way described above, while still carrying over the core program flow logic (in this case, parameters for determining a tiebreak, a big point, and scoring changes with respect to Deuce and Advantage) and optimizing our functions.

Conclusion: As the complexity of the simulation expands, I think it will necessitate the move to the object-oriented approach. If you're not one for sports betting but are interested in coding, this could be a great opportunity to take a functional approach to a simulation and turn it into an object-oriented program to better learn the tenets of polymorphism. If you are into sports betting, then get your hands on the publicly available match data and improve the statistical analysis. Tennis is a great sport for algorithmic betting because of the plethora of data and the nature of the sport, i.e. 1 vs 1 instead of modeling a team. Have fun, and best of luck!
RESTful Web Services - First Application
Let us start writing the actual RESTful web services with the Jersey Framework. Before you start writing your first example using the Jersey Framework, you have to make sure that you have set up your Jersey environment properly as explained in the RESTful Web Services - Environment Setup chapter. Here, I am also assuming that you have a little working knowledge of Eclipse IDE.

So, let us proceed to write a simple Jersey application which will expose a web service method to display the list of users.

The first step is to create a Dynamic Web Project using Eclipse IDE. Follow the option File → New → Project and finally select the Dynamic Web Project wizard from the wizard list. Now name your project as UserManagement using the wizard window as shown in the following screenshot −

Once your project is created successfully, you will have the following content in your Project Explorer −

As a second step let us add the Jersey Framework and its dependencies (libraries) to our project. Copy all the jars from the following directories of the downloaded Jersey zip folder into the WEB-INF/lib directory of the project.

\jaxrs-ri-2.17\jaxrs-ri\api

\jaxrs-ri-2.17\jaxrs-ri\ext

\jaxrs-ri-2.17\jaxrs-ri\lib

Now, right click on your project name UserManagement and then follow the option available in the context menu − Build Path → Configure Build Path to display the Java Build Path window.

Now use the Add JARs button available under the Libraries tab to add the JARs present in the WEB-INF/lib directory.

Now let us create the actual source files under the UserManagement project. First we need to create a package called com.tutorialspoint. To do this, right click on src in the package explorer section and follow the option − New → Package.

Next we will create UserService.java, User.java, and UserDao.java files under the com.tutorialspoint package.
User.java

package com.tutorialspoint;

import java.io.Serializable;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// JAXB root element so a User (and lists of them) can be marshalled to XML
@XmlRootElement(name = "user")
public class User implements Serializable {
   private static final long serialVersionUID = 1L;
   private int id;
   private String name;
   private String profession;

   public User(){}

   public User(int id, String name, String profession){
      this.id = id;
      this.name = name;
      this.profession = profession;
   }
   public int getId() {
      return id;
   }
   @XmlElement
   public void setId(int id) {
      this.id = id;
   }
   public String getName() {
      return name;
   }
   @XmlElement
   public void setName(String name) {
      this.name = name;
   }
   public String getProfession() {
      return profession;
   }
   @XmlElement
   public void setProfession(String profession) {
      this.profession = profession;
   }
}

UserDao.java

package com.tutorialspoint;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

public class UserDao {
   public List<User> getAllUsers(){
      List<User> userList = null;
      try {
         // Users.dat acts as a simple file-based store
         File file = new File("Users.dat");
         if (!file.exists()) {
            // Seed the store with one default user on first run
            User user = new User(1, "Mahesh", "Teacher");
            userList = new ArrayList<User>();
            userList.add(user);
            saveUserList(userList);
         } else {
            FileInputStream fis = new FileInputStream(file);
            ObjectInputStream ois = new ObjectInputStream(fis);
            userList = (List<User>) ois.readObject();
            ois.close();
         }
      } catch (IOException e) {
         e.printStackTrace();
      } catch (ClassNotFoundException e) {
         e.printStackTrace();
      }
      return userList;
   }
   private void saveUserList(List<User> userList){
      try {
         File file = new File("Users.dat");
         FileOutputStream fos = new FileOutputStream(file);
         ObjectOutputStream oos = new ObjectOutputStream(fos);
         oos.writeObject(userList);
         oos.close();
      } catch (FileNotFoundException e) {
         e.printStackTrace();
      } catch (IOException e) {
         e.printStackTrace();
      }
   }
}

UserService.java

package com.tutorialspoint;

import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/UserService")
public class UserService {
   UserDao userDao = new UserDao();

   @GET
   @Path("/users")
   @Produces(MediaType.APPLICATION_XML)
   public List<User> getUsers(){
      return userDao.getAllUsers();
   }
}

There are two important points to be noted about the main program:

The first step is to specify a path for the web service using the @Path annotation on the UserService class.

The second step is to specify a path for the particular web service method using the @Path annotation on the method of UserService.

You need to create a web.xml configuration file, which is an XML file used to register the Jersey framework servlet for our application.
web.xml

<?xml version = "1.0" encoding = "UTF-8"?>
<web-app xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xmlns = "http://java.sun.com/xml/ns/javaee"
   xsi:schemaLocation = "http://java.sun.com/xml/ns/javaee
   http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
   id = "WebApp_ID" version = "3.0">
   <display-name>User Management</display-name>
   <servlet>
      <servlet-name>Jersey RESTful Application</servlet-name>
      <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
      <init-param>
         <param-name>jersey.config.server.provider.packages</param-name>
         <param-value>com.tutorialspoint</param-value>
      </init-param>
   </servlet>
   <servlet-mapping>
      <servlet-name>Jersey RESTful Application</servlet-name>
      <url-pattern>/rest/*</url-pattern>
   </servlet-mapping>
</web-app>

Once you are done with creating the source and web configuration files, you are ready for this step, which is compiling and running your program. To do this, using Eclipse, export your application as a WAR file and deploy it in Tomcat.

To create a WAR file using Eclipse, follow the option File → Export → Web → WAR File and finally select the project UserManagement and the destination folder. To deploy the WAR file in Tomcat, place UserManagement.war in the Tomcat Installation Directory → webapps directory and start Tomcat.

We are using Postman, a Chrome extension, to test our web services.

Make a request to UserManagement to get a list of all the users. Put http://localhost:8080/UserManagement/rest/UserService/users in Postman with a GET request and see the following result.

Congratulations, you have created your first RESTful Application successfully.
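If you'd rather script the check than use Postman, a quick sketch with Python's requests library would also work (my own addition for illustration, not part of the original tutorial; adjust host and port if your Tomcat runs elsewhere):

import requests

# Hit the endpoint the tutorial deploys on the local Tomcat
response = requests.get(
    "http://localhost:8080/UserManagement/rest/UserService/users")
print(response.status_code)   # expect 200
print(response.text)          # the XML list of users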
SAP ABAP - Polymorphism
The term polymorphism literally means 'many forms'. From an object-oriented perspective, polymorphism works in conjunction with inheritance to make it possible for various types within an inheritance tree to be used interchangeably. That is, polymorphism occurs when there is a hierarchy of classes and they are related by inheritance. ABAP polymorphism means that a call to a method will cause a different method to be executed depending on the type of object that invokes the method.

The following program contains an abstract class 'class_prgm', two subclasses (class_procedural and class_OO), and a test driver class 'class_type_approach'. In this implementation, the class method 'start' allows us to display the type of programming and its approach. If you look closely at the signature of method 'start', you will observe that it receives an importing parameter of type class_prgm. However, in the Start-Of-Selection event, this method has been called at run-time with objects of type class_procedural and class_OO.

Report ZPolymorphism1.

CLASS class_prgm Definition Abstract.
   PUBLIC Section.
   Methods: prgm_type Abstract,
            approach1 Abstract.
ENDCLASS.

CLASS class_procedural Definition Inheriting From class_prgm.
   PUBLIC Section.
   Methods: prgm_type Redefinition,
            approach1 Redefinition.
ENDCLASS.

CLASS class_procedural Implementation.
   Method prgm_type.
      Write: 'Procedural programming'.
   EndMethod.
   Method approach1.
      Write: 'top-down approach'.
   EndMethod.
ENDCLASS.

CLASS class_OO Definition Inheriting From class_prgm.
   PUBLIC Section.
   Methods: prgm_type Redefinition,
            approach1 Redefinition.
ENDCLASS.

CLASS class_OO Implementation.
   Method prgm_type.
      Write: 'Object oriented programming'.
   EndMethod.
   Method approach1.
      Write: 'bottom-up approach'.
   EndMethod.
ENDCLASS.

CLASS class_type_approach Definition.
   PUBLIC Section.
   CLASS-METHODS:
      start Importing class1_prgm Type Ref To class_prgm.
ENDCLASS.

CLASS class_type_approach IMPLEMENTATION.
   Method start.
      CALL Method class1_prgm->prgm_type.
      Write: 'follows'.
      CALL Method class1_prgm->approach1.
   EndMethod.
ENDCLASS.

Start-Of-Selection.
Data: class_1 Type Ref To class_procedural,
      class_2 Type Ref To class_OO.

Create Object class_1.
Create Object class_2.
CALL Method class_type_approach=>start
   Exporting
      class1_prgm = class_1.
New-Line.
CALL Method class_type_approach=>start
   Exporting
      class1_prgm = class_2.

The above code produces the following output −

Procedural programming follows top-down approach
Object oriented programming follows bottom-up approach

The ABAP run-time environment performs an implicit narrowing cast during the assignment of the importing parameter class1_prgm. This feature helps the 'start' method to be implemented generically. The dynamic type information associated with an object reference variable allows the ABAP run-time environment to dynamically bind a method call with the implementation defined in the object pointed to by the object reference variable. For instance, the importing parameter 'class1_prgm' for method 'start' in the 'class_type_approach' class refers to an abstract type that could never be instantiated on its own.

Whenever the method is called with a concrete subclass implementation such as class_procedural or class_OO, the dynamic type of the class1_prgm reference parameter is bound to one of these concrete types. Therefore, the calls to methods 'prgm_type' and 'approach1' refer to the implementations provided in the class_procedural or class_OO subclasses rather than the undefined abstract implementations provided in class 'class_prgm'.
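For readers who don't write ABAP, the same dynamic-dispatch behavior can be sketched in Python. This analogy is my own addition for illustration; the class names simply mirror the ABAP example above:

from abc import ABC, abstractmethod

class ClassPrgm(ABC):
    # Plays the role of the abstract class 'class_prgm'
    @abstractmethod
    def prgm_type(self): ...
    @abstractmethod
    def approach(self): ...

class ClassProcedural(ClassPrgm):
    def prgm_type(self): return "Procedural programming"
    def approach(self): return "top-down approach"

class ClassOO(ClassPrgm):
    def prgm_type(self): return "Object oriented programming"
    def approach(self): return "bottom-up approach"

def start(prgm: ClassPrgm):
    # The parameter is typed against the abstract class; each call is
    # dispatched to whichever concrete subclass was passed in.
    print(prgm.prgm_type(), "follows", prgm.approach())

start(ClassProcedural())   # Procedural programming follows top-down approach
start(ClassOO())           # Object oriented programming follows bottom-up approach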
Teradata - SET Operators
SET operators combine results from multiple SELECT statements. This may look similar to joins, but joins combine columns from multiple tables, whereas SET operators combine rows from multiple SELECTs.

The number of columns from each SELECT statement should be the same.

The data types from each SELECT must be compatible.

ORDER BY should be included only in the final SELECT statement.

The UNION statement is used to combine results from multiple SELECT statements. It ignores duplicates.

Following is the basic syntax of the UNION statement.

SELECT col1, col2, col3...
FROM
<table 1>
[WHERE condition]
UNION

SELECT col1, col2, col3...
FROM
<table 2>
[WHERE condition];

Consider the following employee table and salary table.

The following UNION query combines the EmployeeNo value from both the Employee and Salary tables.

SELECT EmployeeNo
FROM
Employee
UNION

SELECT EmployeeNo
FROM
Salary;

When the query is executed, it produces the following output.

EmployeeNo
-----------
   101
   102
   103
   104
   105

The UNION ALL statement is similar to UNION; it combines results from multiple tables including duplicate rows.

Following is the basic syntax of the UNION ALL statement.

SELECT col1, col2, col3...
FROM
<table 1>
[WHERE condition]
UNION ALL

SELECT col1, col2, col3...
FROM
<table 2>
[WHERE condition];

Following is an example of the UNION ALL statement.

SELECT EmployeeNo
FROM
Employee
UNION ALL

SELECT EmployeeNo
FROM
Salary;

When the above query is executed, it produces the following output. You can see that it returns the duplicates also.

EmployeeNo
-----------
   101
   104
   102
   105
   103
   101
   104
   102
   103

The INTERSECT command is also used to combine results from multiple SELECT statements. It returns the rows from the first SELECT statement that have a corresponding match in the second SELECT statement. In other words, it returns the rows that exist in both SELECT statements.

Following is the basic syntax of the INTERSECT statement.

SELECT col1, col2, col3...
FROM
<table 1>
[WHERE condition]
INTERSECT

SELECT col1, col2, col3...
FROM
<table 2>
[WHERE condition];

Following is an example of the INTERSECT statement. It returns the EmployeeNo values that exist in both tables.

SELECT EmployeeNo
FROM
Employee
INTERSECT

SELECT EmployeeNo
FROM
Salary;

When the above query is executed, it returns the following records. EmployeeNo 105 is excluded since it doesn't exist in the SALARY table.

EmployeeNo
-----------
   101
   104
   102
   103

MINUS/EXCEPT commands combine rows from multiple tables and return the rows which are in the first SELECT but not in the second SELECT. They both return the same results.

Following is the basic syntax of the MINUS statement.

SELECT col1, col2, col3...
FROM
<table 1>
[WHERE condition]
MINUS

SELECT col1, col2, col3...
FROM
<table 2>
[WHERE condition];

Following is an example of the MINUS statement.

SELECT EmployeeNo
FROM
Employee
MINUS

SELECT EmployeeNo
FROM
Salary;

When this query is executed, it returns the following record.

EmployeeNo
-----------
   105
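The duplicate-handling rules are easy to mix up, so here is a small Python sketch (my own illustration, not Teradata code) that mirrors the four operators on the EmployeeNo values above. Plain set operations model the default de-duplicating behavior of UNION, INTERSECT, and MINUS, while UNION ALL is simple concatenation:

employee = [101, 104, 102, 105, 103]   # EmployeeNo values in the Employee table
salary = [101, 104, 102, 103]          # EmployeeNo values in the Salary table

print(sorted(set(employee) | set(salary)))   # UNION: [101, 102, 103, 104, 105]
print(employee + salary)                     # UNION ALL keeps duplicates
print(sorted(set(employee) & set(salary)))   # INTERSECT: [101, 102, 103, 104]
print(sorted(set(employee) - set(salary)))   # MINUS/EXCEPT: [105]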
How to get top values of a numerical column of an R data frame in decreasing order?
To get the top values of a column in an R data frame, we can use the head function, and if we want the values in decreasing order then the sort function will be required. Therefore, we need to combine the head and sort functions to find the top values in decreasing order. For example, if we have a data frame df that contains a column x, then we can find the top 20 values of x in decreasing order by using head(sort(df$x,decreasing=TRUE),n=20).
Consider the CO2 data frame in base R −
> str(CO2)
Classes ‘nfnGroupedData’, ‘nfGroupedData’, ‘groupedData’ and 'data.frame': 84 obs. of 5 variables:
$ Plant : Ord.factor w/ 12 levels "Qn1"<"Qn2"<"Qn3"<..: 1 1 1 1 1 1 1 2 2 2 ...
$ Type : Factor w/ 2 levels "Quebec","Mississippi": 1 1 1 1 1 1 1 1 1 1 ...
$ Treatment: Factor w/ 2 levels "nonchilled","chilled": 1 1 1 1 1 1 1 1 1 1 ...
$ conc : num 95 175 250 350 500 675 1000 95 175 250 ...
$ uptake : num 16 30.4 34.8 37.2 35.3 39.2 39.7 13.6 27.3 37.1 ...
- attr(*, "formula")=Class 'formula' language uptake ~ conc | Plant
.. ..- attr(*, ".Environment")=
- attr(*, "outer")=Class 'formula' language ~Treatment * Type
.. ..- attr(*, ".Environment")=
- attr(*, "labels")=List of 2
..$ x: chr "Ambient carbon dioxide concentration"
..$ y: chr "CO2 uptake rate"
- attr(*, "units")=List of 2
..$ x: chr "(uL/L)"
..$ y: chr "(umol/m^2 s)"
> head(CO2,20)
Plant Type Treatment conc uptake
1 Qn1 Quebec nonchilled 95 16.0
2 Qn1 Quebec nonchilled 175 30.4
3 Qn1 Quebec nonchilled 250 34.8
4 Qn1 Quebec nonchilled 350 37.2
5 Qn1 Quebec nonchilled 500 35.3
6 Qn1 Quebec nonchilled 675 39.2
7 Qn1 Quebec nonchilled 1000 39.7
8 Qn2 Quebec nonchilled 95 13.6
9 Qn2 Quebec nonchilled 175 27.3
10 Qn2 Quebec nonchilled 250 37.1
11 Qn2 Quebec nonchilled 350 41.8
12 Qn2 Quebec nonchilled 500 40.6
13 Qn2 Quebec nonchilled 675 41.4
14 Qn2 Quebec nonchilled 1000 44.3
15 Qn3 Quebec nonchilled 95 16.2
16 Qn3 Quebec nonchilled 175 32.4
17 Qn3 Quebec nonchilled 250 40.3
18 Qn3 Quebec nonchilled 350 42.1
19 Qn3 Quebec nonchilled 500 42.9
20 Qn3 Quebec nonchilled 675 43.9
Extracting top 20 values of conc −
> head(sort(CO2$conc,decreasing=TRUE),n=20)
[1] 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000 675 675 675
[16] 675 675 675 675 675
Extracting top 20 values of uptake −
> head(sort(CO2$uptake,decreasing=TRUE),n=20)
[1] 45.5 44.3 43.9 42.9 42.4 42.1 41.8 41.4 41.4 40.6 40.3 39.7 39.6 39.2 38.9
[16] 38.8 38.7 38.6 38.1 37.5
Consider the iris data frame in base R −
> str(iris)
'data.frame': 150 obs. of 5 variables:
$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
> head(iris,20)
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
5 5.0 3.6 1.4 0.2 setosa
6 5.4 3.9 1.7 0.4 setosa
7 4.6 3.4 1.4 0.3 setosa
8 5.0 3.4 1.5 0.2 setosa
9 4.4 2.9 1.4 0.2 setosa
10 4.9 3.1 1.5 0.1 setosa
11 5.4 3.7 1.5 0.2 setosa
12 4.8 3.4 1.6 0.2 setosa
13 4.8 3.0 1.4 0.1 setosa
14 4.3 3.0 1.1 0.1 setosa
15 5.8 4.0 1.2 0.2 setosa
16 5.7 4.4 1.5 0.4 setosa
17 5.4 3.9 1.3 0.4 setosa
18 5.1 3.5 1.4 0.3 setosa
19 5.7 3.8 1.7 0.3 setosa
20 5.1 3.8 1.5 0.3 setosa
> head(sort(iris$Sepal.Length,decreasing=TRUE),n=50)
[1] 7.9 7.7 7.7 7.7 7.7 7.6 7.4 7.3 7.2 7.2 7.2 7.1 7.0 6.9 6.9 6.9 6.9 6.8 6.8
[20] 6.8 6.7 6.7 6.7 6.7 6.7 6.7 6.7 6.7 6.6 6.6 6.5 6.5 6.5 6.5 6.5 6.4 6.4 6.4
[39] 6.4 6.4 6.4 6.4 6.3 6.3 6.3 6.3 6.3 6.3 6.3 6.3
> head(sort(iris$Petal.Length,decreasing=TRUE),n=50)
[1] 6.9 6.7 6.7 6.6 6.4 6.3 6.1 6.1 6.1 6.0 6.0 5.9 5.9 5.8 5.8 5.8 5.7 5.7 5.7
[20] 5.6 5.6 5.6 5.6 5.6 5.6 5.5 5.5 5.5 5.4 5.4 5.3 5.3 5.2 5.2 5.1 5.1 5.1 5.1
[39] 5.1 5.1 5.1 5.1 5.0 5.0 5.0 5.0 4.9 4.9 4.9 4.9
Consider the mtcars data in base R −
> str(mtcars)
'data.frame': 32 obs. of 11 variables:
$ mpg : num 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
$ cyl : num 6 6 4 6 8 6 8 4 4 6 ...
$ disp: num 160 160 108 258 360 ...
$ hp : num 110 110 93 110 175 105 245 62 95 123 ...
$ drat: num 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...
$ wt : num 2.62 2.88 2.32 3.21 3.44 ...
$ qsec: num 16.5 17 18.6 19.4 17 ...
$ vs : num 0 0 1 1 0 1 0 1 1 1 ...
$ am : num 1 1 1 0 0 0 0 0 0 0 ...
$ gear: num 4 4 4 3 3 3 3 4 4 4 ...
$ carb: num 4 4 1 1 2 1 4 2 2 4 ...
> head(mtcars,20)
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
> head(sort(mtcars$wt,decreasing=TRUE),n=20)
[1] 5.424 5.345 5.250 4.070 3.845 3.840 3.780 3.730 3.570 3.570 3.520 3.460
[13] 3.440 3.440 3.440 3.435 3.215 3.190 3.170 3.150
[ { "code": null, "e": 1497, "s": 1062, "text": "To get the top values in an R data frame, we can use the head function and if we want the values in decreasing order then sort function will be required. Therefore, we need to use the combination of head and sort function to find the top values in decreasing order. For example, if we have a data frame df that contains a column x then we can find top 20 values of x in decreasing order by using head(sort(df$x,decreasing=TRUE),n=20)." }, { "code": null, "e": 1537, "s": 1497, "text": "Consider the CO2 data frame in base R −" }, { "code": null, "e": 1547, "s": 1537, "text": "Live Demo" }, { "code": null, "e": 1558, "s": 1547, "text": "> str(CO2)" }, { "code": null, "e": 2394, "s": 1558, "text": "Classes ‘nfnGroupedData’, ‘nfGroupedData’, ‘groupedData’ and 'data.frame': 84 obs. of 5 variables:\n$ Plant : Ord.factor w/ 12 levels \"Qn1\"<\"Qn2\"<\"Qn3\"<..: 1 1 1 1 1 1 1 2 2 2 ...\n$ Type : Factor w/ 2 levels \"Quebec\",\"Mississippi\": 1 1 1 1 1 1 1 1 1 1 ...\n$ Treatment: Factor w/ 2 levels \"nonchilled\",\"chilled\": 1 1 1 1 1 1 1 1 1 1 ...\n$ conc : num 95 175 250 350 500 675 1000 95 175 250 ...\n$ uptake : num 16 30.4 34.8 37.2 35.3 39.2 39.7 13.6 27.3 37.1 ...\n- attr(*, \"formula\")=Class 'formula' language uptake ~ conc | Plant\n.. ..- attr(*, \".Environment\")=\n- attr(*, \"outer\")=Class 'formula' language ~Treatment * Type\n.. ..- attr(*, \".Environment\")=\n- attr(*, \"labels\")=List of 2\n..$ x: chr \"Ambient carbon dioxide concentration\"\n..$ y: chr \"CO2 uptake rate\"\n- attr(*, \"units\")=List of 2\n..$ x: chr \"(uL/L)\"\n..$ y: chr \"(umol/m^2 s)\"" }, { "code": null, "e": 2404, "s": 2394, "text": "Live Demo" }, { "code": null, "e": 2419, "s": 2404, "text": "> head(CO2,20)" }, { "code": null, "e": 3122, "s": 2419, "text": "Plant Type Treatment conc uptake\n1 Qn1 Quebec nonchilled 95 16.0\n2 Qn1 Quebec nonchilled 175 30.4\n3 Qn1 Quebec nonchilled 250 34.8\n4 Qn1 Quebec nonchilled 350 37.2\n5 Qn1 Quebec nonchilled 500 35.3\n6 Qn1 Quebec nonchilled 675 39.2\n7 Qn1 Quebec nonchilled 1000 39.7\n8 Qn2 Quebec nonchilled 95 13.6\n9 Qn2 Quebec nonchilled 175 27.3\n10 Qn2 Quebec nonchilled 250 37.1\n11 Qn2 Quebec nonchilled 350 41.8\n12 Qn2 Quebec nonchilled 500 40.6\n13 Qn2 Quebec nonchilled 675 41.4\n14 Qn2 Quebec nonchilled 1000 44.3\n15 Qn3 Quebec nonchilled 95 16.2\n16 Qn3 Quebec nonchilled 175 32.4\n17 Qn3 Quebec nonchilled 250 40.3\n18 Qn3 Quebec nonchilled 350 42.1\n19 Qn3 Quebec nonchilled 500 42.9\n20 Qn3 Quebec nonchilled 675 43.9" }, { "code": null, "e": 3157, "s": 3122, "text": "Extracting top 20 values of conc −" }, { "code": null, "e": 3167, "s": 3157, "text": "Live Demo" }, { "code": null, "e": 3211, "s": 3167, "text": "> head(sort(CO2$conc,decreasing=TRUE),n=20)" }, { "code": null, "e": 3312, "s": 3211, "text": "[1] 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000 1000 675 675 675\n[16] 675 675 675 675 675" }, { "code": null, "e": 3349, "s": 3312, "text": "Extracting top 20 values of uptake −" }, { "code": null, "e": 3359, "s": 3349, "text": "Live Demo" }, { "code": null, "e": 3405, "s": 3359, "text": "> head(sort(CO2$uptake,decreasing=TRUE),n=20)" }, { "code": null, "e": 3514, "s": 3405, "text": "[1] 45.5 44.3 43.9 42.9 42.4 42.1 41.8 41.4 41.4 40.6 40.3 39.7 39.6 39.2 38.9\n[16] 38.8 38.7 38.6 38.1 37.5" }, { "code": null, "e": 3555, "s": 3514, "text": "Consider the iris data frame in base R −" }, { "code": null, "e": 3565, "s": 3555, "text": "Live Demo" }, { "code": null, "e": 3577, "s": 3565, "text": "> str(iris)" 
}, { "code": null, "e": 3947, "s": 3577, "text": "'data.frame': 150 obs. of 5 variables:\n$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...\n$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...\n$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...\n$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...\n$ Species : Factor w/ 3 levels \"setosa\",\"versicolor\",..: 1 1 1 1 1 1 1 1 1 1 ..." }, { "code": null, "e": 3957, "s": 3947, "text": "Live Demo" }, { "code": null, "e": 3973, "s": 3957, "text": "> head(iris,20)" }, { "code": null, "e": 4543, "s": 3973, "text": "Sepal.Length Sepal.Width Petal.Length Petal.Width Species\n\n1 5.1 3.5 1.4 0.2 setosa\n2 4.9 3.0 1.4 0.2 setosa\n3 4.7 3.2 1.3 0.2 setosa\n4 4.6 3.1 1.5 0.2 setosa\n5 5.0 3.6 1.4 0.2 setosa\n6 5.4 3.9 1.7 0.4 setosa\n7 4.6 3.4 1.4 0.3 setosa\n8 5.0 3.4 1.5 0.2 setosa\n9 4.4 2.9 1.4 0.2 setosa\n10 4.9 3.1 1.5 0.1 setosa\n11 5.4 3.7 1.5 0.2 setosa\n12 4.8 3.4 1.6 0.2 setosa\n13 4.8 3.0 1.4 0.1 setosa\n14 4.3 3.0 1.1 0.1 setosa\n15 5.8 4.0 1.2 0.2 setosa\n16 5.7 4.4 1.5 0.4 setosa\n17 5.4 3.9 1.3 0.4 setosa\n18 5.1 3.5 1.4 0.3 setosa\n19 5.7 3.8 1.7 0.3 setosa\n20 5.1 3.8 1.5 0.3 setosa" }, { "code": null, "e": 4553, "s": 4543, "text": "Live Demo" }, { "code": null, "e": 4606, "s": 4553, "text": "> head(sort(iris$Sepal.Length,decreasing=TRUE),n=50)" }, { "code": null, "e": 4820, "s": 4606, "text": "[1] 7.9 7.7 7.7 7.7 7.7 7.6 7.4 7.3 7.2 7.2 7.2 7.1 7.0 6.9 6.9 6.9 6.9 6.8 6.8\n[20] 6.8 6.7 6.7 6.7 6.7 6.7 6.7 6.7 6.7 6.6 6.6 6.5 6.5 6.5 6.5 6.5 6.4 6.4 6.4\n[39] 6.4 6.4 6.4 6.4 6.3 6.3 6.3 6.3 6.3 6.3 6.3 6.3" }, { "code": null, "e": 4830, "s": 4820, "text": "Live Demo" }, { "code": null, "e": 4883, "s": 4830, "text": "> head(sort(iris$Petal.Length,decreasing=TRUE),n=50)" }, { "code": null, "e": 5097, "s": 4883, "text": "[1] 6.9 6.7 6.7 6.6 6.4 6.3 6.1 6.1 6.1 6.0 6.0 5.9 5.9 5.8 5.8 5.8 5.7 5.7 5.7\n[20] 5.6 5.6 5.6 5.6 5.6 5.6 5.5 5.5 5.5 5.4 5.4 5.3 5.3 5.2 5.2 5.1 5.1 5.1 5.1\n[39] 5.1 5.1 5.1 5.1 5.0 5.0 5.0 5.0 4.9 4.9 4.9 4.9" }, { "code": null, "e": 5134, "s": 5097, "text": "Consider the mtcars data in base R −" }, { "code": null, "e": 5144, "s": 5134, "text": "Live Demo" }, { "code": null, "e": 5158, "s": 5144, "text": "> str(mtcars)" }, { "code": null, "e": 5666, "s": 5158, "text": "'data.frame': 32 obs. of 11 variables:\n$ mpg : num 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...\n$ cyl : num 6 6 4 6 8 6 8 4 4 6 ...\n$ disp: num 160 160 108 258 360 ...\n$ hp : num 110 110 93 110 175 105 245 62 95 123 ...\n$ drat: num 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...\n$ wt : num 2.62 2.88 2.32 3.21 3.44 ...\n$ qsec: num 16.5 17 18.6 19.4 17 ...\n$ vs : num 0 0 1 1 0 1 0 1 1 1 ...\n$ am : num 1 1 1 0 0 0 0 0 0 0 ...\n$ gear: num 4 4 4 3 3 3 3 4 4 4 ...\n$ carb: num 4 4 1 1 2 1 4 2 2 4 ..." 
}, { "code": null, "e": 5676, "s": 5666, "text": "Live Demo" }, { "code": null, "e": 5694, "s": 5676, "text": "> head(mtcars,20)" }, { "code": null, "e": 6822, "s": 5694, "text": "mpg cyl disp hp drat wt qsec vs am gear carb\nMazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4\nMazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4\nDatsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1\nHornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1\nHornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2\nValiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1\nDuster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4\nMerc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2\nMerc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2\nMerc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4\nMerc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4\nMerc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3\nMerc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3\nMerc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3\nCadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4\nLincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4\nChrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4\nFiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1\nHonda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2\nToyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1" }, { "code": null, "e": 6832, "s": 6822, "text": "Live Demo" }, { "code": null, "e": 6877, "s": 6832, "text": "> head(sort(mtcars$wt,decreasing=TRUE),n=20)" }, { "code": null, "e": 7006, "s": 6877, "text": "[1] 5.424 5.345 5.250 4.070 3.845 3.840 3.780 3.730 3.570 3.570 3.520 3.460\n[13] 3.440 3.440 3.440 3.435 3.215 3.190 3.170 3.150" } ]
How to Analyze a PDF with the layout-parser package. | by Brendan Ferris | Towards Data Science
I was recently involved with a project that required parsing a PDF in order to identify the regions of the page and return the text from those regions. The text regions would then be fed to a Q/A model (farm-haystack), which would return extracted data from the PDF. Essentially, we wanted the computer to read PDFs for us and tell us what it found. Currently, there are a few popular modules that perform this task with varying effectiveness, namely pdfminer and py2pdf. The problem is that table data is very hard to parse/detect. The solution? Take out the tables and figures, and return only the text blocks.

pip install layoutparser

We need to convert each page of the PDF to an image in order to perform OCR on it and extract the text blocks. There are many different ways to do this. You could convert the PDF and save the image on your local machine. But for our purposes we want to save the image of the PDF page in memory temporarily -> extract text -> discard image, because after we perform OCR we no longer need the image (we would still have the original pdf file). To solve this problem, we will use the pdf2image package:

pip install pdf2image

This package will allow us to input a PDF file and output each page as an image. We can choose to save the images on a storage medium, or process the PDF as a list of PIL images temporarily and then discard them when we are done.

from pdf2image import convert_from_bytes
images = convert_from_bytes(open('FILE PATH', 'rb').read())

Now, you will have a list of images that you can loop through.

In order for these images to be readable by the layout-parser package, you need to convert them to an array of pixel values, which can be achieved easily with numpy.

import numpy as np
image = np.array(image)

Currently, there are two OCR tools that you can use with this package: Google Cloud Vision (GCV) and Tesseract. We will use Tesseract. To detect the regions of the page, there are pre-trained deep learning models available for various use cases (tables, magazine publications, scholarly journals); we will use a model called PubLayNet, which is specific to scholarly journals. Keep in mind that there are ways to train custom models for your specific use case.

import layoutparser as lp

model = lp.Detectron2LayoutModel(
    config_path='lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config',  # In model catalog
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},  # In model `label_map`
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8]  # Optional
)

# loop through each page
for image in images:
    ocr_agent = lp.ocr.TesseractAgent()
    image = np.array(image)
    layout = model.detect(image)
    text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])
    # loop through each text box on the page
    for block in text_blocks:
        segment_image = (block
                         .pad(left=5, right=5, top=5, bottom=5)
                         .crop_image(image))
        text = ocr_agent.detect(segment_image)
        block.set(text=text, inplace=True)
    for i, txt in enumerate(text_blocks.get_texts()):
        my_file = open("OUTPUT FILE PATH/FILENAME.TXT", "a+")
        my_file.write(txt)

After running the above code, you can pick out the regions of each page that you are interested in using the following syntax:

text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])
title_blocks = lp.Layout([b for b in layout if b.type == 'Title'])
list_blocks = lp.Layout([b for b in layout if b.type == 'List'])
table_blocks = lp.Layout([b for b in layout if b.type == 'Table'])
figure_blocks = lp.Layout([b for b in layout if b.type == 'Figure'])

Now, you will be able to extract text from the portions of the page you are interested in, and disregard the portions that you don't need.

Thus far, the layout-parser package has proved to be the most reliable and easiest tool for analyzing the structure of a page. In this short tutorial we focused on taking in a whole (multi-page) PDF and extracting the machine-readable portions of each page, which can then be fed into an NLP model for analysis. For more information, refer to the documentation!
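One last practical note: the blocks returned by the model are not guaranteed to come back in reading order. Below is a minimal sketch, not part of the original workflow, that sorts the detected text blocks top-to-bottom by their top-edge (y_1) coordinate before OCR; multi-column layouts may need a more elaborate sorting key:

import layoutparser as lp

# Keep only the text regions, then sort them by the top-edge coordinate.
# coordinates is the (x_1, y_1, x_2, y_2) tuple of each detected block.
text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])
text_blocks = lp.Layout(sorted(text_blocks, key=lambda b: b.coordinates[1]))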
[ { "code": null, "e": 772, "s": 172, "text": "I recently was involved with a project that required parsing of a PDF in order to identify the regions of page and return the text from those regions. The text regions would then be fed to a Q/A model (farm-haystack), and return extracted data from the PDF. Essentially, we wanted the computer to read PDF’s for us and tell us what it found. Currently, there are a few popular modules that perform this task with varying effectiveness, namely, pdfminer and py2pdf. The problem is that table data is very hard to parse/detect. The solution? Take out the tables a figures, return only the text blocks." }, { "code": null, "e": 797, "s": 772, "text": "pip install layoutparser" }, { "code": null, "e": 1297, "s": 797, "text": "We need to convert each page of the PDF to an image in order to perform OCR on it and extract the text blocks. There are many different ways to do this. You could convert the PDF and save the image on your local machine. But for our purposes we want to save the image of the PDF page in memory temporarily -> extract text -> discard image, because after we perform OCR we no longer need the image (we would still have the original pdf file). To solve this problem, we will use the pdf2image package:" }, { "code": null, "e": 1320, "s": 1297, "text": "pip install pdf2image " }, { "code": null, "e": 1545, "s": 1320, "text": "This package will allow us to input a PDF file, and output each page a an image. We can choose to save the image on a storage medium, or process the PDF as a list of PIL images temporarily then discard them when we are done." }, { "code": null, "e": 1605, "s": 1545, "text": "images = convert_from_bytes(open('FILE PATH', 'rb').read())" }, { "code": null, "e": 1668, "s": 1605, "text": "Now, you will have a list of images that you can loop through." }, { "code": null, "e": 1834, "s": 1668, "text": "In order for these images to be readable by the layout-parser package, you need to convert them to an array of pixel values, which can be achieved easily with numpy." }, { "code": null, "e": 1858, "s": 1834, "text": "image = np.array(image)" }, { "code": null, "e": 2326, "s": 1858, "text": "Currently, there are two OCR tools that you can use with this package: Google Cloud Vision (GCV) and Tesseract. We will use Tesseract. To Detect the regions of the page, there are pre-trained deep learning models that are available for various use cases (tables, magazine publications, scholarly journals) we will use a model called PubLayNet which is specific to scholarly journals. Keep in mind that there are ways to train custom models for your specific use case." }, { "code": null, "e": 3311, "s": 2326, "text": "model = lp.Detectron2LayoutModel( config_path ='lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config', # In model catalog label_map = {0: \"Text\", 1: \"Title\", 2: \"List\", 3:\"Table\", 4:\"Figure\"}, # In model`label_map` extra_config=[\"MODEL.ROI_HEADS.SCORE_THRESH_TEST\", 0.8] # Optional )#loop through each pagefor image in images: ocr_agent = lp.ocr.TesseractAgent() image = np.array(image) layout = model.detect(image)text_blocks = lp.Layout([b for b in layout if b.type == 'Text']) #loop through each text box on page. 
for block in text_blocks: segment_image = (block .pad(left=5, right=5, top=5, bottom=5) .crop_image(image)) text = ocr_agent.detect(segment_image) block.set(text=text, inplace=True) for i, txt in enumerate(text_blocks.get_texts()): my_file = open(\"OUTPUT FILE PATH/FILENAME.TXT\",\"a+\") my_file.write(txt)" }, { "code": null, "e": 3438, "s": 3311, "text": "After running the above code, you can pick out the regions of each page that you are interested in using the following syntax:" }, { "code": null, "e": 3767, "s": 3438, "text": "text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])title_blocks = lp.Layout([b for b in layout if b.type == 'Title'])list_blocks = lp.Layout([b for b in layout if b.type == 'List'])table_blocks = lp.Layout([b for b in layout if b.type == 'Table'])figure_blocks = lp.Layout([b for b in layout if b.type == 'Figure'])" }, { "code": null, "e": 3910, "s": 3767, "text": "Now, you can will be able to extract text from the portions of the page you are interested in, and disregard the portions that you don’t need." } ]
text preprocessing using scikit-learn and spaCy | Towards Data Science
Text preprocessing is the process of getting raw text into a form which can be vectorized and subsequently consumed by machine learning algorithms for natural language processing (NLP) tasks such as text classification, topic modeling, named entity recognition etc.

Raw text is extensively preprocessed by all text analytics APIs, such as Azure's text analytics APIs or the ones developed by us at Specrom Analytics, although the extent and the type of preprocessing depend on the type of input text. For example, for our historical news APIs, the input consists of scraped HTML pages, and hence it is important for us to strip the unwanted HTML tags from the text before feeding it to the NLP algorithms. However, for some news outlets we get data as JSON from their official REST APIs. In that case, there are no HTML tags at all and it would be a waste of CPU time to run a regex-based preprocessor on such clean text. Hence, it makes sense to preprocess text differently based on the source of the data.

If you want to create word clouds as shown below, then it is generally recommended that you remove stop words. But in cases such as named entity recognition (NER), this is not really required and you can safely throw syntactically complete sentences at the NER of your choice.

There are many good blog posts describing text preprocessing steps, but let us go through them here just for completeness' sake.

The process of converting text contained in paragraphs or sentences into individual words (called tokens) is known as tokenization. This is usually a very important step in text preprocessing before we can convert text into vectors full of numbers.

Intuitively and rather naively, one way to tokenize text is to simply break the string at spaces, and Python already ships with very good string methods which can do it with ease; let's call such a tokenization method "white space tokenization".

However, white space tokenization cannot understand word contractions, such as when we combine two words 'can' and 'not' into "can't", don't (do + not), and I've (I + have). These are non-trivial issues, and if we don't separate "can't" into "can" and "not", then once we strip punctuation we will be left with a single word "cant", which is not really a dictionary word.

The classical library for text processing in Python, NLTK, ships with other tokenizers such as WordPunctTokenizer and TreebankWordTokenizer, which all operate on different conventions to try and solve the word contraction issue. For advanced tokenization strategies, there is also a RegexpTokenizer available, which can split strings according to a regular expression.

All of these approaches are basically rule-based though, and since no real "learning" is happening, you as a user will have to handle all the special cases which might crop up as a result of the tokenization strategy.

The next generation NLP libraries such as Spacy and Apache Spark NLP have largely fixed this issue and deal with common abbreviations through tokenization methods that are part of their language model.

# Create a string input
sample_text = "Gemini Man review: Double Will Smith can't save hackneyed spy flick U.S.A"
from nltk.tokenize import WhitespaceTokenizer
tokenizer_w = WhitespaceTokenizer()
# Use tokenize method
tokenized_list = tokenizer_w.tokenize(sample_text)
tokenized_list
# output
['Gemini', 'Man', 'review:', 'Double', 'Will', 'Smith', "can't", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']
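For comparison, the plain split-at-spaces approach mentioned above needs no library at all and produces the same result on this sentence; a quick sketch:

# Python's built-in str.split() splits on runs of whitespace.
sample_text.split()
# ['Gemini', 'Man', 'review:', 'Double', 'Will', 'Smith', "can't", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']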
WordPunctTokenizer will split on punctuation, as shown below.

from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
tokenized_list = tokenizer.tokenize(sample_text)
tokenized_list
# Output
['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'can', "'", 't', 'save', 'hackneyed', 'spy', 'flick', 'U', '.', 'S', '.', 'A']

And NLTK's TreebankWordTokenizer splits word contractions into two tokens, as shown below.

from nltk.tokenize import TreebankWordTokenizer
tokenizer = TreebankWordTokenizer()
tokenized_list = tokenizer.tokenize(sample_text)
tokenized_list
# Output
['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'ca', "n't", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']

It's pretty simple to perform tokenization in SpaCy too, and in the later section on lemmatization you will notice why tokenization as part of the language model fixes the word contraction issue.

# Spacy tokenization example
sample_text = "Gemini Man review: Double Will Smith can't save hackneyed spy flick U.S.A"
from spacy.lang.en import English
nlp = English()
tokenizer = nlp.Defaults.create_tokenizer(nlp)
tokens = tokenizer(sample_text)
token_list = []
for token in tokens:
    token_list.append(token.text)
token_list
# output
['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'ca', "n't", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']
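As mentioned earlier, NLTK also ships a RegexpTokenizer for custom splitting rules. A minimal sketch that keeps only runs of word characters; note how aggressively it breaks "can't" and "U.S.A" apart:

from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
tokenizer.tokenize(sample_text)
# Output
['Gemini', 'Man', 'review', 'Double', 'Will', 'Smith', 'can', 't', 'save', 'hackneyed', 'spy', 'flick', 'U', 'S', 'A']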
Stemming and lemmatization attempt to get the root word (e.g. rain) for different word inflections (raining, rained etc.). Lemmatization algorithms give you real dictionary words, whereas stemming simply cuts off the last parts of the word, so it is faster but less accurate. Stemming returns words which are not really dictionary words, and hence you will not be able to find pretrained vectors for them in GloVe, Word2Vec etc.; this is a major disadvantage depending on the application.

Nevertheless, it is pretty popular to use stemming algorithms such as Porter and the more advanced Snowball stemmer. Spacy does not ship with any stemming algorithms, so we will be using NLTK for performing stemming; we will show outputs from two stemming algorithms here. For ease of use, we will wrap the whitespace tokenizer into a function. As you can see, both stemmers reduced the verb form (raining) into rain.

sample_text = '''Gemini Man review: Double Will Smith can't save hackneyed spy flick U.S.A raining rained ran'''
from nltk.tokenize import WhitespaceTokenizer
def w_tokenizer(text):
    tokenizer = WhitespaceTokenizer()
    # Use tokenize method
    tokenized_list = tokenizer.tokenize(text)
    return(tokenized_list)

from nltk.stem.snowball import SnowballStemmer
def stemmer_snowball(text_list):
    snowball = SnowballStemmer(language='english')
    return_list = []
    for i in range(len(text_list)):
        return_list.append(snowball.stem(text_list[i]))
    return(return_list)

stemmer_snowball(w_tokenizer(sample_text))
# Output
['gemini', 'man', 'review:', 'doubl', 'will', 'smith', "can't", 'save', 'hackney', 'spi', 'flick', 'u.s.a', 'rain', 'rain', 'ran']

You get the same result with NLTK's Porter Stemmer, and this one too converts words into non-dictionary forms such as spy -> spi and double -> doubl.

from nltk.stem.porter import PorterStemmer
def stemmer_porter(text_list):
    porter = PorterStemmer()
    return_list = []
    for i in range(len(text_list)):
        return_list.append(porter.stem(text_list[i]))
    return(return_list)

stemmer_porter(w_tokenizer(sample_text))
# Output
['gemini', 'man', 'review:', 'doubl', 'will', 'smith', "can't", 'save', 'hackney', 'spi', 'flick', 'u.s.a', 'rain', 'rain', 'ran']

If you use SpaCy for tokenization, then it already stores an attribute called .lemma_ with each token, and you can simply call it to get the lemmatized form of each word. Notice that it's not as aggressive as a stemmer, and it converts word contractions such as "can't" to "can" and "not".

# https://spacy.io/api/tokenizer
from spacy.lang.en import English
nlp = English()
tokenizer = nlp.Defaults.create_tokenizer(nlp)
tokens = tokenizer(sample_text)
lemma_list = []
for token in tokens:
    lemma_list.append(token.lemma_)
lemma_list
# Output
['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'can', 'not', 'save', 'hackneyed', 'spy', 'flick', 'U.S.A', 'rain', 'rain', 'run']
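Note that the examples above use the lightweight English() class. With a full pretrained pipeline the lemmatizer can take part-of-speech into account, which generally gives better lemmas. A minimal sketch, assuming the small English model has been installed (python -m spacy download en_core_web_sm):

import spacy
# Load a full pretrained pipeline (tagger, parser, etc. included).
nlp = spacy.load('en_core_web_sm')
doc = nlp("raining rained ran")
print([token.lemma_ for token in doc])
# expected: ['rain', 'rain', 'run']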
There are certain words above such as "it", "is", "that", "this" etc. which don't contribute much to the meaning of the underlying sentence and are actually quite common across all English documents; these words are known as stop words. There is generally a need to remove these "common" words before vectorizing tokens with a count vectorizer, so that we can reduce the total dimensions of our vectors and mitigate the so-called "curse of dimensionality".

You can remove stop words by essentially three methods:

The first method is the simplest, where you create a list or set of words you want to exclude from your tokens; such a list is already available as part of sklearn's CountVectorizer, NLTK as well as SpaCy. This has been the accepted method to remove stop words for quite a long time; however, there is an awareness among researchers and working professionals that such a one-size-fits-all method is actually quite harmful to learning the overall meaning of the text, and there are papers out there which caution against this approach.

# using a hard-coded stop word list
from spacy.lang.en import English
import spacy
spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS
# spacy_stopwords is a hardcoded set
nlp = English()
tokenizer = nlp.Defaults.create_tokenizer(nlp)
tokens = tokenizer(sample_text)
lemma_list = []
for token in tokens:
    if token.lemma_.lower() not in spacy_stopwords:
        lemma_list.append(token.lemma_)
lemma_list
# Output
['Gemini', 'Man', 'review', ':', 'Double', 'Smith', 'save', 'hackneyed', 'spy', 'flick', 'U.S.A', 'rain', 'rain', 'run']

As expected, the words "will" and "can" etc. are removed, since they were present in the hard-coded set of stop words available in SpaCy. Let us wrap this into a function called remove_stopwords so that we can use it as part of a sklearn pipeline in section 5.

import spacy
def remove_stopwords(text_list):
    spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS
    return_list = []
    for i in range(len(text_list)):
        if text_list[i] not in spacy_stopwords:
            return_list.append(text_list[i])
    return(return_list)

The second approach is where you let the language model figure out if a given token is a stop word or not. Spacy's tokenization already provides an attribute called .is_stop for this purpose. Now, there will be times when common stop words are not being excluded by spacy's flag, but that is still better than a hard-coded list of words to be excluded. Just FYI, there is a well documented bug in some SpaCy models[1][2] which prevents detection of stop words in cases when the first letter is capitalized, so you need to apply the workaround in case it's not detecting stop words properly.

# using the .is_stop flag
from spacy.lang.en import English
nlp = English()
tokenizer = nlp.Defaults.create_tokenizer(nlp)
tokens = tokenizer(sample_text)
lemma_list = []
for token in tokens:
    if token.is_stop is False:
        lemma_list.append(token.lemma_)
lemma_list
# Output
['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'not', 'save', 'hackneyed', 'spy', 'flick', 'U.S.A', 'rain', 'rain', 'run']

This is obviously doing a better job, since it detected that "Will" here is the name of a person and only removed "can" from the sample text. Let's wrap this in a function so that we can use it in the last section.

from spacy.lang.en import English
def spacy_tokenizer_lemmatizer(text):
    nlp = English()
    tokenizer = nlp.Defaults.create_tokenizer(nlp)
    tokens = tokenizer(text)
    lemma_list = []
    for token in tokens:
        if token.is_stop is False:
            lemma_list.append(token.lemma_)
    return(lemma_list)

The third approach to combating stop words is excluding words which appear too frequently in a given corpus; sklearn's CountVectorizer and TfidfVectorizer methods have a parameter called max_df which lets you ignore tokens that have a document frequency strictly higher than the given threshold. You can also cap the vocabulary by specifying the total number of tokens through the max_features parameter, as sketched below. If you are going to use tf-idf after the count vectorizer, then it will automatically assign a much lower weight to stop words compared to words which contribute to the overall meaning of the sentence.
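To make the third approach concrete, here is a minimal sketch of how those two parameters are passed; the 0.9 cutoff and the 1000-feature cap are arbitrary illustration values, not recommendations:

from sklearn.feature_extraction.text import CountVectorizer
# Ignore tokens that appear in more than 90% of the documents and
# keep only the 1000 most frequent tokens overall.
vectorizer = CountVectorizer(max_df=0.9, max_features=1000)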
Once we have tokenized the text and have converted the word contractions, it really isn't useful anymore to have punctuation and special characters in our text. This is of course not true when we are dealing with text likely to have twitter handles, email addresses etc. In those cases, we alter our text processing pipeline to only strip whitespace from tokens, or skip this step altogether.

We can clean out all HTML tags by using the regex '<[^>]*>'; all the non-word characters can be removed by '[\W]+'. You should be careful, though, about not stripping punctuation before word contractions are handled by the lemmatizer. In the code block below, we will modify our SpaCy code to account for stop words and also remove any punctuation from tokens. As shown in the example below, we have successfully removed special-character tokens such as ":" which don't really contribute anything semantically in a bag-of-words vectorization.

import re
def preprocessor(text):
    if type(text) == str:
        text = re.sub('<[^>]*>', '', text)
        text = re.sub('[\W]+', '', text.lower())
        return text

from spacy.lang.en import English
nlp = English()
tokenizer = nlp.Defaults.create_tokenizer(nlp)
tokens = tokenizer(sample_text)
lemma_list = []
for token in tokens:
    if token.is_stop is False:
        token_preprocessed = preprocessor(token.lemma_)
        if token_preprocessed != '':
            lemma_list.append(token_preprocessed)
lemma_list
# Output:
['gemini', 'man', 'review', 'double', 'will', 'smith', 'not', 'save', 'hackneyed', 'spy', 'flick', 'usa', 'rain', 'rain', 'run']

# A more appropriate preprocessor function is below, which can take both a list and a string as input
def preprocessor_final(text):
    if isinstance(text, str):
        text = re.sub('<[^>]*>', '', text)
        text = re.sub('[\W]+', '', text.lower())
        return text
    if isinstance(text, list):
        return_list = []
        for i in range(len(text)):
            temp_text = re.sub('<[^>]*>', '', text[i])
            temp_text = re.sub('[\W]+', '', temp_text.lower())
            return_list.append(temp_text)
        return(return_list)
    else:
        pass

Another common text processing use case is when we are trying to perform document-level sentiment analysis on web data such as social media comments, tweets etc. All of these make extensive use of emoticons, and if we simply strip out all special characters then we may miss out on some very useful tokens which contribute greatly to the semantics and sentiment of the text. If we are planning on using a bag-of-words type of text vectorization, then we can simply find all those emoticons and add them towards the end of the tokenized list. In this case, you might have to run the preprocessor as the first step, before tokenization.

# find emoticons function
import re
def find_emo(text):
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    return emoticons

emo_text = " I loved this movie :) but it was rather sad :( "
find_emo(emo_text)
# output
[':)', ':(']

As you saw above, text preprocessing is rarely one size fits all, and most real-world applications require us to use different preprocessing modules depending on the text source and the further analysis we plan on doing.

There are many ways to create such a custom pipeline, but one simple option is to use sklearn pipelines, which allow us to sequentially assemble several different steps, with the only requirement being that intermediate steps implement the fit and transform methods and the final estimator has at least a fit method.

Now, this might be too onerous a requirement for many small functions, such as the ones for preprocessing text; but thankfully, sklearn also ships with a FunctionTransformer which allows us to wrap any arbitrary function into a sklearn-compatible one. There is one catch though: the function should not operate directly on objects but wrap them into lists, pandas Series or NumPy arrays. This is not a major deterrent though; you can just create a helper function which wraps the output into a list comprehension.
# Adapted from https://ryan-cranfill.github.io/sentiment-pipeline-sklearn-3/
from sklearn.preprocessing import FunctionTransformer
def pipelinize(function, active=True):
    def list_comprehend_a_function(list_or_series, active=True):
        if active:
            return [function(i) for i in list_or_series]
        else:  # if it's not active, just pass it right back
            return list_or_series
    return FunctionTransformer(list_comprehend_a_function, validate=False, kw_args={'active': active})

As a final step, let us compose a sklearn pipeline which uses NLTK's w_tokenizer function and stemmer_snowball from section 2.1 and the preprocessor function from section 4.

from sklearn.pipeline import Pipeline
estimators = [('tokenizer', pipelinize(w_tokenizer)), ('stemmer', pipelinize(stemmer_snowball)), ('stopwordremoval', pipelinize(remove_stopwords)), ('preprocessor', pipelinize(preprocessor_final))]
pipe = Pipeline(estimators)
pipe.transform([sample_text])
# Output:
[['gemini', 'man', 'review', 'doubl', 'smith', 'cant', 'save', 'hackney', 'spi', 'flick', 'usa', 'rain', 'rain', 'ran']]

You can easily change the above pipeline to use the SpaCy functions, as shown below. Note that the tokenization function (spacy_tokenizer_lemmatizer) introduced in section 3 returns lemmatized tokens without any stop words, so those steps are not necessary in our pipeline and we can directly run the preprocessor.

spacy_estimators = [('tokenizer', pipelinize(spacy_tokenizer_lemmatizer)), ('preprocessor', pipelinize(preprocessor_final))]
spacy_pipe = Pipeline(spacy_estimators)
spacy_pipe.transform([sample_text])
# Output:
[['gemini', 'man', 'review', 'double', 'will', 'smith', 'not', 'save', 'hackneyed', 'spy', 'flick', 'usa', 'rain', 'rain', 'run']]

I hope that I have illustrated the advantages of combining sklearn pipelines with a SpaCy-based preprocessing workflow to effectively and efficiently perform preprocessing for almost all NLP tasks.
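As a closing, practical note: if you want the vectorization step to reuse this preprocessing directly, sklearn's CountVectorizer accepts a callable through its analyzer parameter, which bypasses its built-in preprocessing and tokenization. A minimal sketch using the spacy_pipe defined above:

from sklearn.feature_extraction.text import CountVectorizer
# The analyzer callable receives one raw document and must return its tokens.
vectorizer = CountVectorizer(analyzer=lambda doc: spacy_pipe.transform([doc])[0])
X = vectorizer.fit_transform([sample_text])
vectorizer.vocabulary_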
[ { "code": null, "e": 440, "s": 171, "text": "Text preprocessing is the process of getting the raw text into a form which can be vectorized and subsequently consumed by machine learning algorithms for natural language processing (NLP) tasks such as text classification, topic modeling, name entity recognition etc." }, { "code": null, "e": 1177, "s": 440, "text": "Raw text extensively preprocessed by all text analytics APIs such as Azure’s text analytics APIs or ones developed by us at Specrom Analytics, although the extent and the type of preprocessing is dependent on the type of input text. For example, for our historical news APIs, the input consists of scraped HTML pages, and hence it is important for us to strip the unwanted HTML tags from text before feeding it to the NLP algorithms. However, for some news outlets we get data as JSON from their official REST APIs. In that case, there are no HTML tags at all and it will be a waste of CPU time to run a regex based preprocessor to such a clean text. Hence, it makes sense to preprocess text differently based on the source of the data." }, { "code": null, "e": 1456, "s": 1177, "text": "If you want to create word clouds as shown below, than it is generally recommended that you remove stop words. But in cases such as name entity recognition (NER), this is not really required and you can safely throw in syntactically complete sentences to the NER of your choice." }, { "code": null, "e": 1586, "s": 1456, "text": "There are many good blog posts developing a text preprocessing steps but let us go through those here just for completeness sake." }, { "code": null, "e": 1835, "s": 1586, "text": "The process of converting text contained in paragraphs or sentences into individual words (called tokens) is known as tokenization. This is usually a very important step in text preprocessing before we can convert text into vectors full of numbers." }, { "code": null, "e": 2079, "s": 1835, "text": "Intuitively and rather naively, one way to tokenize text is to simply break the string at spaces and python already ships with very good string methods which can do it with ease, lets call such a tokenization method “white space tokenization”." }, { "code": null, "e": 2449, "s": 2079, "text": "However, white space tokenization cannot understand word contractions such as when we combine two words ‘can’ and ‘not’ into “can’t”, don’t (do + not), and I’ve (I + have). These are non-trivial issues, and if we don’t separate “can’t” into “can” and “not” then once we strip punctuations,we will be left with a single word “cant” which is not really a dictionary word." }, { "code": null, "e": 2822, "s": 2449, "text": "The classical library for text processing in Python called NLTK ships with other tokenizers such as WordPunctTokenizer and TreebankWordTokenizer which all operate on different conventions to try and solve the word contractions issue. For advanced tokenization strategies, there is also a RegexpTokenizer available which can split strings according to a regular expression." }, { "code": null, "e": 3036, "s": 2822, "text": "All of these approaches are basically rule-based though, and since no real “learning” is happening, you as a user will have to handle all the special cases which might crop up as a result of tokenization strategy." 
}, { "code": null, "e": 3234, "s": 3036, "text": "The next generation NLP libraries such as Spacy and Apache Spark NLP have largely fixed this issue and deals with common abbreviations with the tokenization methods as part of their language model." }, { "code": null, "e": 3633, "s": 3234, "text": "# Create a string input sample_text = \"Gemini Man review: Double Will Smith can't save hackneyed spy flick U.S.A\"from nltk.tokenize import WhitespaceTokenizertokenizer_w = WhitespaceTokenizer()# Use tokenize method tokenized_list = tokenizer_w.tokenize(sample_text) tokenized_list# output['Gemini', 'Man', 'review:', 'Double', 'Will', 'Smith', \"can't\", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']" }, { "code": null, "e": 3696, "s": 3633, "text": "WordPunct Tokenizer will split on punctuations as shown below." }, { "code": null, "e": 3981, "s": 3696, "text": "from nltk.tokenize import WordPunctTokenizer tokenizer = WordPunctTokenizer()tokenized_list= tokenizer.tokenize(sample_text)tokenized_list# Output['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'can', \"'\", 't', 'save', 'hackneyed', 'spy', 'flick', 'U', '.', 'S', '.', 'A']" }, { "code": null, "e": 4067, "s": 3981, "text": "And NLTK’s treebanktokenizer splits word contractions into two tokens as shown below." }, { "code": null, "e": 4336, "s": 4067, "text": "from nltk.tokenize import TreebankWordTokenizertokenizer = TreebankWordTokenizer()tokenized_list= tokenizer.tokenize(sample_text)tokenized_list#Output['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'ca', \"n't\", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']" }, { "code": null, "e": 4527, "s": 4336, "text": "Its pretty simple to perform tokenization in SpaCy too, and in the later section on lemmatization you will notice why tokenization as part of language model fixes the word contraction issue." }, { "code": null, "e": 4973, "s": 4527, "text": "# Spacy Tokenization examplesample_text = \"Gemini Man review: Double Will Smith can't save hackneyed spy flick U.S.A\"from spacy.lang.en import Englishnlp = English()tokenizer = nlp.Defaults.create_tokenizer(nlp)tokens = tokenizer(sample_text)token_list = []for token in tokens: token_list.append(token.text)token_list#output['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'ca', \"n't\", 'save', 'hackneyed', 'spy', 'flick', 'U.S.A']" }, { "code": null, "e": 5436, "s": 4973, "text": "Stemming and lemmatization attempts to get root word (for eg rain) for different word inflections (raining, rained etc). Lemma algos gives you real dictionary words, whereas stemming simply cuts off last parts of the word so its faster but less accurate. Stemming returns words which are not really dictionary words and hence you will not be able to find pretrained vectors for it in Glove, Word2Vec etc and this is a major disadvantage depending on application." }, { "code": null, "e": 5850, "s": 5436, "text": "Nevertheless, it is pretty popular to use stemming algorithms such as porter and more advanced snowball stemmers. Spacy does not ship with any stemming algorithms so we will be using NLTK for performing stemming; we will show outputs from two stemming algorithms here. For ease of use, we will wrap the whitespace tokenizer into a function. As you can see, both stemmers reduced the verb form (raining) into rain." 
}, { "code": null, "e": 6608, "s": 5850, "text": "sample_text = '''Gemini Man review: Double Will Smith can't save hackneyed spy flick U.S.A raining rained ran'''from nltk.tokenize import WhitespaceTokenizerdef w_tokenizer(text): tokenizer = WhitespaceTokenizer() # Use tokenize method tokenized_list = tokenizer.tokenize(text) return(tokenized_list)from nltk.stem.snowball import SnowballStemmerdef stemmer_snowball(text_list): snowball = SnowballStemmer(language='english') return_list = [] for i in range(len(text_list)): return_list.append(snowball.stem(text_list[i])) return(return_list)stemmer_snowball(w_tokenizer(sample_text))#Output['gemini', 'man', 'review:', 'doubl', 'will', 'smith', \"can't\", 'save', 'hackney', 'spi', 'flick', 'u.s.a', 'rain', 'rain', 'ran']" }, { "code": null, "e": 6748, "s": 6608, "text": "You get the same result with NLTK’s Porter Stemmer, and this one too words into non dictionary forms such as spy -> spi and double -> doubl" }, { "code": null, "e": 7157, "s": 6748, "text": "from nltk.stem.porter import PorterStemmerdef stemmer_porter(text_list): porter = PorterStemmer() return_list = [] for i in range(len(text_list)): return_list.append(porter.stem(text_list[i])) return(return_list)stemmer_porter(w_tokenizer(sample_text))#Output['gemini', 'man', 'review:', 'doubl', 'will', 'smith', \"can't\", 'save', 'hackney', 'spi', 'flick', 'u.s.a', 'rain', 'rain', 'ran']" }, { "code": null, "e": 7446, "s": 7157, "text": "If you use SpaCy for tokenization, then it already stores an attribute called .lemma_ with each tokens, and you can simply call it to get lemmatized forms of each words. Notice that it’s not as aggressive as a stemmer, and it converts word contractions such as “can’t” to “can” and “not”." }, { "code": null, "e": 7894, "s": 7446, "text": "# https://spacy.io/api/tokenizerfrom spacy.lang.en import Englishnlp = English()tokenizer = nlp.Defaults.create_tokenizer(nlp)tokens = tokenizer(sample_text)#token_list = []lemma_list = []for token in tokens: #token_list.append(token.text) lemma_list.append(token.lemma_)#token_listlemma_list#Output['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'can', 'not', 'save', 'hackneyed', 'spy', 'flick', 'U.S.A', 'rain', 'rain', 'run']" }, { "code": null, "e": 8349, "s": 7894, "text": "There are certain words above such as “it”, “is”, “that”, “this” etc. which don’t contribute much to the meaning of the underlying sentence and are actually quite common across all English documents; these words are known as stop words. There is generally a need to remove these “common” words before vectorizing tokens by a count vectorizer so that we can reduce the total dimensions of our vectors, and mitigate the so called “curse of dimensionality”." }, { "code": null, "e": 8405, "s": 8349, "text": "You can remove stop words by essentially three methods:" }, { "code": null, "e": 8932, "s": 8405, "text": "First method is the simplest where you create a list or set of words you want to exclude from your tokens; such as list is already available as part of sklearn’s countvectorizer, NLTK as well as SpaCy. This has been accepted method to remove stop words for quite a long time, however, there is an awareness among researchers and working professionals that such one size fits all method is actually quite harmful in learning about overall meaning of the text; and there are papers out there which caution against this approach." 
}, { "code": null, "e": 9515, "s": 8932, "text": "# using hard coded stop word listfrom spacy.lang.en import Englishimport spacyspacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS# spacy_stopwords is a hardcoded setnlp = English()tokenizer = nlp.Defaults.create_tokenizer(nlp)tokens = tokenizer(sample_text)#token_list = []lemma_list = []for token in tokens: if token.lemma_.lower() not in spacy_stopwords: #token_list.append(token.text) lemma_list.append(token.lemma_)#token_listlemma_list#Output['Gemini', 'Man', 'review', ':', 'Double', 'Smith', 'save', 'hackneyed', 'spy', 'flick', 'U.S.A', 'rain', 'rain', 'run']" }, { "code": null, "e": 9772, "s": 9515, "text": "As expected, the words “will” and “can” etc are removed since they were present in the hard-coded set of stopwords available in SpaCy. Let us wrap this into a function called remove_stop_words so that we can use it as part of sklearn pipeline in section 5." }, { "code": null, "e": 10039, "s": 9772, "text": "import spacydef remove_stopwords(text_list): spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDSreturn_list = [] for i in range(len(text_list)): if text_list[i] not in spacy_stopwords: return_list.append(text_list[i]) return(return_list)" }, { "code": null, "e": 10629, "s": 10039, "text": "The second approach is where you let the language model figure out if a given token is a stop word or not. Spacy’s tokenization already provides an attribute called is .is_stop for this purpose. Now, there will be times when common stop words are not being excluded by spacy's flag, but that is still better than a hard-coded list of words to be excluded. Just FYI, there is a well documented bug in some SpaCy models[1][2] which avoids detection of stop words in cases when the first letter is capitalized so you need to apply the workaround in case its not detecting stop words properly." }, { "code": null, "e": 11040, "s": 10629, "text": "# using the .is_stop flagfrom spacy.lang.en import Englishnlp = English()tokenizer = nlp.Defaults.create_tokenizer(nlp)tokens = tokenizer(sample_text)lemma_list = []for token in tokens: if token.is_stop is False: lemma_list.append(token.lemma_)lemma_list#Output['Gemini', 'Man', 'review', ':', 'Double', 'Will', 'Smith', 'not', 'save', 'hackneyed', 'spy', 'flick', 'U.S.A', 'rain', 'rain', 'run']" }, { "code": null, "e": 11251, "s": 11040, "text": "This is obviously doing a better job, since it detected that “Will” here is the name of a person only removed “can” from the sample text. Let’s wrap this in a function so that we can use it in the last section." }, { "code": null, "e": 11575, "s": 11251, "text": "# from spacy.lang.en import Englishdef spacy_tokenizer_lemmatizer(text): nlp = English() tokenizer = nlp.Defaults.create_tokenizer(nlp) tokens = tokenizer(text) lemma_list = [] for token in tokens: if token.is_stop is False: lemma_list.append(token.lemma_) return(lemma_list)" }, { "code": null, "e": 12167, "s": 11575, "text": "The third approach to combating stop words is excluding words which appear too frequently in a given corpus; sklearn’s countvectoriser and tfidfvectorizer methods has a parameter called `max_df` which lets you ignore tokens that have a document frequency strictly higher than the given threshold. You can also exclude words by specifying total number of tokens through `max_features` parameter. If you are going to use tf-idf after count vectorizer, than it will automatically assign a much lower weightage to stop words compared to words which contribute to overall meaning of the sentence." 
}, { "code": null, "e": 13099, "s": 12167, "text": "Once we have tokenized the text and have converted the word contractions it really isn't useful anymore to have punctuation and special characters in our text. This is of-course not true when we are dealing with text likely to have twitter handles, email addresses etc. In those cases, we alter our text processing pipeline to only strip whitespaces from tokens or skip this step altogether. We can clean out all HTML tags by using the regex ‘<[^>]*>’; All the non word characters can be removed by ‘[\\W]+’. You should be careful though about not stripping punctuations before word contractions are handled by the lemmatizer. In the code block below, we will modify our SpaCy code to account for stop words and also remove any punctuations from tokens. As shown in example below, we have successfully removed special character tokens such as “:” which don’t really contribute anything semantically in a bags of words vectorization." }, { "code": null, "e": 14301, "s": 13099, "text": "import redef preprocessor(text): if type(text) == string: text = re.sub('<[^>]*>', '', text) text = re.sub('[\\W]+', '', text.lower()) return textfrom spacy.lang.en import Englishnlp = English()tokenizer = nlp.Defaults.create_tokenizer(nlp)tokens = tokenizer(sample_text)lemma_list = []for token in tokens: if token.is_stop is False: token_preprocessed = preprocessor(token.lemma_) if token_preprocessed != '': lemma_list.append(token_preprocessed)lemma_list#Output:['gemini', 'man', 'review', 'double', 'will', 'smith', 'not', 'save', 'hackneyed', 'spy', 'flick', 'usa', 'rain', 'rain', 'run']#A more appropriate preprocessor function is below which can take both a list and a string as inputdef preprocessor_final(text): if isinstance((text), (str)): text = re.sub('<[^>]*>', '', text) text = re.sub('[\\W]+', '', text.lower()) return text if isinstance((text), (list)): return_list = [] for i in range(len(text)): temp_text = re.sub('<[^>]*>', '', text[i]) temp_text = re.sub('[\\W]+', '', temp_text.lower()) return_list.append(temp_text) return(return_list) else: pass" }, { "code": null, "e": 14934, "s": 14301, "text": "Another common text processing use case is when we are trying to perform document level sentiment analysis from web data such as social media comments, tweets etc. All of these make extensive use of emoticons, and if we simply strip out all special characters than we may miss out on some very useful tokens which contribute greatly to the semantics and sentiments of the text. If we are planning on using a bags of word type text vectorization than we can simply find all those emoticons and add them towards the end of the tokenized list. In this case, you might have to run the preprocessor as the first step before tokenization." }, { "code": null, "e": 15176, "s": 14934, "text": "# find emoticons functionimport redef find_emo(text): emoticons = re.findall('(?::|;|=)(?:-)?(?:\\)|\\(|D|P)',text) return emoticonssample_text = \" I loved this movie :) but it was rather sad :( \"find_emo(sample_text)# output[':)', ':(']" }, { "code": null, "e": 15393, "s": 15176, "text": "As you saw above, text preprocessing is rarely a one size fits all, and most real world applications require us to use different preprocessing modules as per the text source and the further analysis we plan on doing." 
}, { "code": null, "e": 15722, "s": 15393, "text": "There are many ways to create such a custom pipeline, but one simple option is to use sklearn pipelines which allows us to sequentially assemble several different steps, with only requirement being that intermediate steps should have implemented the fit and transform methods and the final estimator having atleast a fit method." }, { "code": null, "e": 16231, "s": 15722, "text": "Now, this might be too onerous a requirement for many small functions such as ones for preprocessing text; but thankfully, sklearn also ships with a functionTransformer which allows us to wrap any arbitrary function into a sklearn compatible one. There is one catch though: the function should not operate directly on objects but wrap them into lists, pandas series or Numpy arrays. This is not a major deterrent though, you can just create a helper function which wraps the output into a list comprehension." }, { "code": null, "e": 16730, "s": 16231, "text": "# Adapted from https://ryan-cranfill.github.io/sentiment-pipeline-sklearn-3/from sklearn.preprocessing import FunctionTransformerdef pipelinize(function, active=True): def list_comprehend_a_function(list_or_series, active=True): if active: return [function(i) for i in list_or_series] else: # if it's not active, just pass it right back return list_or_series return FunctionTransformer(list_comprehend_a_function, validate=False, kw_args={'active':active})" }, { "code": null, "e": 16909, "s": 16730, "text": "As a final step, let us compose a sklearn pipeline which uses NLTK’s w_tokenizer function and stemmer_snowball from section 2.1 and uses the preprocessor function from section 4." }, { "code": null, "e": 17339, "s": 16909, "text": "from sklearn.pipeline import Pipelineestimators = [('tokenizer', pipelinize(w_tokenizer)), ('stemmer', pipelinize(stemmer_snowball)),('stopwordremoval', pipelinize(remove_stopwords)), ('preprocessor', pipelinize(preprocessor_final))]pipe = Pipeline(estimators)pipe.transform([sample_text])Output:[['gemini', 'man', 'review', 'doubl', 'smith', 'cant', 'save', 'hackney', 'spi', 'flick', 'usa', 'rain', 'rain', 'ran']]" }, { "code": null, "e": 17652, "s": 17339, "text": "You can easily change the above pipeline to use the SpaCy functions as shown below. Note that the tokenization function (spacy_tokenizer_lemmatizer) introduced in section 3 returns lemmatized tokens without any stopwords, so those steps are not necessary in our pipeline and we can directly run the preprocessor." }, { "code": null, "e": 18004, "s": 17652, "text": "spacy_estimators = [('tokenizer', pipelinize(spacy_tokenizer_lemmatizer)), ('preprocessor', pipelinize(preprocessor_final))]spacy_pipe = Pipeline(spacy_estimators)spacy_pipe.transform([sample_text])# Output:[['gemini', 'man', 'review', 'double', 'will', 'smith', 'not', 'save', 'hackneyed', 'spy', 'flick', 'usa', 'rain', 'rain', 'run']]" } ]
Creating and training a U-Net model with PyTorch for 2D & 3D semantic segmentation: Model building [2/4]
In the previous chapter we built a dataloader that picks up our images and performs some transformations and augmentations so that they can be fed in batches to a neural network like the U-Net. In this part, we focus on building a U-Net from scratch with the PyTorch library. The goal is to implement the U-Net in such a way that important model configurations such as the activation function or the depth can be passed as arguments when creating the model. The U-Net is a convolutional neural network architecture that is designed for fast and precise segmentation of images. It has performed extremely well in several challenges and to this day, it is one of the most popular end-to-end architectures in the field of semantic segmentation. We can split the network into two parts: the encoder path (backbone) and the decoder path. The encoder captures features at different scales of the images by using a traditional stack of convolutional and max pooling layers. Concretely speaking, a block in the encoder consists of the repeated use of two convolutional layers (k=3, s=1), each followed by a non-linearity layer, and a max-pooling layer (k=2, s=2). For every convolution block and its associated max pooling operation, the number of feature maps is doubled to ensure that the network can learn the complex structures effectively. The decoder path is a symmetric expanding counterpart that uses transposed convolutions. This type of convolutional layer is an up-sampling method with trainable parameters and performs the reverse of (down)pooling layers such as the max pool. Similar to the encoder, each convolution block is followed by such an up-convolutional layer. The number of feature maps is halved in every block. Because recreating a segmentation mask from a small feature map is a rather difficult task for the network, the output after every up-convolutional layer is concatenated with the feature maps of the corresponding encoder block. The feature maps of the encoder layer are cropped if the dimensions exceed those of the corresponding decoder layers. In the end, the output passes another convolution layer (k=1, s=1) with the number of feature maps being equal to the number of defined labels. The result is a u-shaped convolutional network that offers an elegant solution for good localization and use of context. Let's take a look at the code. This code is based on https://github.com/ELEKTRONN/elektronn3/blob/master/elektronn3/models/unet.py (c) 2017 Martin Drawitsch, released under MIT License, which implements a configurable (2D/3D) U-Net with user-defined network depth and a few other improvements of the original architecture. They themselves actually used the 2D code from Jackson Huang https://github.com/jaxony/unet-pytorch. Here is a simplified version of the code — saved in a file unet.py: I will not go into detail here, but rather just mention important design choices. It can be useful to view the architecture as repeating blocks in both the encoder and the decoder path. As you can see in unet.py the DownBlock and the UpBlock help to build the architecture. Both use smaller helper functions that return the correct layer, depending on what arguments are passed, e.g. if a 2D (dim=2) or 3D (dim=3) network is wanted. The number of blocks is defined by the depth of the network.
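Before walking through those details, here is a rough, simplified 2D sketch of the two block types (a hypothetical illustration added here — it is not the code from unet.py, which is configurable for 2D/3D, normalization and activation):

import torch
import torch.nn as nn

class DownBlock(nn.Module):
    # two conv + ReLU layers followed by max pooling; also returns the
    # pre-pooling feature map needed for the skip connection
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        skip = self.conv(x)
        return self.pool(skip), skip

class UpBlock(nn.Module):
    # transposed convolution for upsampling, concatenation with the encoder
    # feature map, then two conv + ReLU layers
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)  # merge along the channel dimension
        return self.conv(x)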
A DownBlock generally has the following scheme: two convolution layers, each followed by an activation (and optional normalization) layer, and a final max-pooling layer. An UpBlock has the following layers: an up-convolution (transposed convolution), the concatenation with the corresponding encoder feature map, and again two convolution layers, each followed by an activation (and optional normalization) layer. For our Unet class we just need to combine these blocks and make sure that the correct layers from the encoder are concatenated to the decoder (skip pathways). These layers have to be cropped if their sizes do not match the corresponding layers from the decoder. In such cases, the autocrop function is used. For merging, I concatenate along the channel dimension (see Concatenate). Instead of transposed convolutions we could also use upsampling layers (interpolation methods) that are followed by a 1x1 or 3x3 convolution block to reduce the channel dimension. Using interpolation generally gets rid of the checkerboard artifact. For 3D input consider using trilinear interpolation. At the end we just need to think about the parameter initialization. By default, the weights are initialized with torch.nn.init.xavier_uniform_ and the biases are initialized with zeros using torch.nn.init.zeros_. For details and the available parameter options, I encourage you to take a look at the code. Feel free to change the code to your needs or expand e.g. the number of activation functions. Let's create such a model and use it to make a prediction on some random input:

from unet import UNet
model = UNet(in_channels=1,
             out_channels=2,
             n_blocks=4,
             start_filters=32,
             activation='relu',
             normalization='batch',
             conv_mode='same',
             dim=2)
x = torch.randn(size=(1, 1, 512, 512), dtype=torch.float32)
with torch.no_grad():
    out = model(x)
print(f'Out: {out.shape}')

This will give us:

Out: torch.Size([1, 2, 512, 512])

To check whether our model is correct, we can get the model's summary with the package pytorch-summary:

from torchsummary import summary
summary = summary(model, (1, 512, 512))

which prints out a layer-by-layer summary table (not reproduced here). To ensure correct semantic concatenations, it is advised to use input sizes that yield even spatial dimensions in every block but the last in the encoder. For example: An input size of 120² gives intermediate output shapes of [60², 30², 15²] in the encoder path for a U-Net with depth=4. A U-Net with depth=5 with the same input size is not recommended, as a max-pooling operation on odd spatial dimensions (e.g. on a 15² input) should be avoided. To make our lives easier, we can numerically compute the maximum network depth for a given input dimension with a simple function:

shape = 1920
def compute_max_depth(shape, max_depth=10, print_out=True):
    shapes = []
    shapes.append(shape)
    for level in range(1, max_depth):
        if shape % 2 ** level == 0 and shape / 2 ** level > 1:
            shapes.append(shape / 2 ** level)
            if print_out:
                print(f'Level {level}: {shape / 2 ** level}')
        else:
            if print_out:
                print(f'Max-level: {level - 1}')
            break
    return shapes
out = compute_max_depth(shape, print_out=True, max_depth=10)

This will output

Level 1: 960.0
Level 2: 480.0
Level 3: 240.0
Level 4: 120.0
Level 5: 60.0
Level 6: 30.0
Level 7: 15.0
Max-level: 7

which tells us that we can design a U-Net as deep as this without having to worry about semantic mismatches.
Conversely, we can also numerically determine the possible input shape dimensions for a given depth:

low = 10
high = 512
depth = 8
def compute_possible_shapes(low, high, depth):
    possible_shapes = {}
    for shape in range(low, high + 1):
        shapes = compute_max_depth(shape,
                                   max_depth=depth,
                                   print_out=False)
        if len(shapes) == depth:
            possible_shapes[shape] = shapes
    return possible_shapes
possible_shapes = compute_possible_shapes(low, high, depth)

This will output

{256: [256, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0, 2.0], 384: [384, 192.0, 96.0, 48.0, 24.0, 12.0, 6.0, 3.0], 512: [512, 256.0, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0]}

which tells us that we can have 3 different input shapes with such a level-8 U-Net architecture — though I dare say that a network of this depth at these input sizes is probably not useful in practice. In this part we created a configurable UNet model for the purpose of semantic segmentation. Now that we have built our model, it is time to create a training loop in the next chapter.
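As a brief addendum, the default parameter initialization described above could be applied explicitly like this (a hypothetical sketch added here, not the article's unet.py code; it assumes the model created earlier):

import torch.nn as nn

def initialize_parameters(module):
    # apply the described defaults: Xavier-uniform weights, zero biases
    if isinstance(module, (nn.Conv2d, nn.Conv3d,
                           nn.ConvTranspose2d, nn.ConvTranspose3d)):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model.apply(initialize_parameters)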
[ { "code": null, "e": 630, "s": 171, "text": "In the previous chapter we built a dataloader that picks up our images and performs some transformations and augmentations so that they can be fed in batches to a neural network like the U-Net. In this part, we focus on building a U-Net from scratch with the PyTorch library. The goal is to implement the U-Net in such a way, that important model configurations such as the activation function or the depth can be passed as arguments when creating the model." }, { "code": null, "e": 914, "s": 630, "text": "The U-Net is a convolutional neural network architecture that is designed for fast and precise segmentation of images. It has performed extremely well in several challenges and to this day, it is one of the most popular end-to-end architectures in the field of semantic segmentation." }, { "code": null, "e": 1509, "s": 914, "text": "We can split the network into two parts: The encoder path (backbone) and the decoder path. The encoder captures features at different scales of the images by using a traditional stack of convolutional and max pooling layers. Concretely speaking, a block in the encoder consists of the repeated use of two convolutional layers (k=3, s=1), each followed by a non-linearity layer, and a max-pooling layer (k=2, s=2). For every convolution block and its associated max pooling operation, the number of feature maps is doubled to ensure that the network can learn the complex structures effectively." }, { "code": null, "e": 2242, "s": 1509, "text": "The decoder path is a symmetric expanding counterpart that uses transposed convolutions. This type of convolutional layer is an up-sampling method with trainable parameters and performs the reverse of (down)pooling layers such as the max pool. Similar to the encoder, each convolution block is followed by such an up-convolutional layer. The number of feature maps is halved in every block. Because recreating a segmentation mask from a small feature map is a rather difficult task for the network, the output after every up-convolutional layer is appended by the feature maps of the corresponding encoder block. The feature maps of the encoder layer are cropped if the dimensions exceed the one of the corresponding decoder layers." }, { "code": null, "e": 2538, "s": 2242, "text": "In the end, the output passes another convolution layer (k=1, s=1) with the number of feature maps being equal to the number of defined labels. The result is a u-shaped convolutional network that offers an elegant solution for good localization and use of context. Let’s take a look at the code." }, { "code": null, "e": 2931, "s": 2538, "text": "This code is based on https://github.com/ELEKTRONN/elektronn3/blob/master/elektronn3/models/unet.py (c) 2017 Martin Drawitsch, released under MIT License, which implements a configurable (2D/3D) U-Net with user-defined network depth and a few other improvements of the original architecture. They themselves actually used the 2D code from Jackson Huang https://github.com/jaxony/unet-pytorch." }, { "code": null, "e": 2999, "s": 2931, "text": "Here is a simplified version of the code — saved in a file unet.py:" }, { "code": null, "e": 3497, "s": 2999, "text": "I will not go into detail here, but rather just mention important design choices. It can be useful to view the architecture in repeating blocks in the encoder but also in the decoder path. As you can see in unet.py the DownBlock and the UpBlock help to build the architecture. 
Both use smaller helper functions that return the correct layer, depending on what arguments are passed, e.g. if a 2D (dim=2) or 3D (dim=3) network is wanted. The number of blocks is defined by the depth of the network." }, { "code": null, "e": 3545, "s": 3497, "text": "A DownBlock generally has the following scheme:" }, { "code": null, "e": 3581, "s": 3545, "text": "An UpBlock has the following layers:" }, { "code": null, "e": 4271, "s": 3581, "text": "For our Unet class we just need to combine these blocks and make sure that the correct layers from the encoder are concatenated to the decoder (skip pathways). These layers have to be cropped if their sizes do not match the corresponding layers from the decoder. In such cases, the autocrop function is used. For merging, I concatenate along the channel dimension (see Concatenate). Instead of transposed convolutions we could also use upsampling layers (interpolation methods) that are followed by a 1x1 or 3x3 convolution block to reduce the channel dimension. Using interpolation generally gets rid of the checkerboard artifact. For 3D input consider using trilinear interpolation." }, { "code": null, "e": 4485, "s": 4271, "text": "At the end we just need to think about the parameter initialization. By default, the weights are initialized with torch.nn.init.xavier_uniform_ and the biases are initialized with zeros using torch.nn.init.zeros_." }, { "code": null, "e": 4672, "s": 4485, "text": "For details and the available parameter options, I encourage you to take a look at the code. Feel free to change the code to your needs or expand e.g. the number of activation functions." }, { "code": null, "e": 4752, "s": 4672, "text": "Let’s create such a model and use it to make a prediction on some random input:" }, { "code": null, "e": 5122, "s": 4752, "text": "from unet import UNetmodel = UNet(in_channels=1, out_channels=2, n_blocks=4, start_filters=32, activation='relu', normalization='batch', conv_mode='same', dim=2)x = torch.randn(size=(1, 1, 512, 512), dtype=torch.float32)with torch.no_grad(): out = model(x)print(f'Out: {out.shape}')" }, { "code": null, "e": 5141, "s": 5122, "text": "This will give us:" }, { "code": null, "e": 5175, "s": 5141, "text": "Out: torch.Size([1, 2, 512, 512])" }, { "code": null, "e": 5280, "s": 5175, "text": "To check whether our model is correct, we can get the model's summary with this package pytorch-summary:" }, { "code": null, "e": 5352, "s": 5280, "text": "from torchsummary import summarysummary = summary(model, (1, 512, 512))" }, { "code": null, "e": 5390, "s": 5352, "text": "which prints out a summary like this:" }, { "code": null, "e": 5839, "s": 5390, "text": "To ensure correct semantic concatenations, it is advised to use input sizes that return even spatial dimensions in every block but the last in the encoder. For example: An input size of 120² gives intermediate output shapes of [60², 30², 15²] in the encoder path for a U-Net with depth=4. A U-Net with depth=5 with the same input size is not recommended, as a maxpooling operation on odd spatial dimensions (e.g. on a 15² input) should be avoided." 
}, { "code": null, "e": 5970, "s": 5839, "text": "To make our lives easier, we can numerically compute the maximum network depth for a given input dimension with a simple function:" }, { "code": null, "e": 6491, "s": 5970, "text": "shape = 1920def compute_max_depth(shape, max_depth=10, print_out=True): shapes = [] shapes.append(shape) for level in range(1, max_depth): if shape % 2 ** level == 0 and shape / 2 ** level > 1: shapes.append(shape / 2 ** level) if print_out: print(f'Level {level}: {shape / 2 ** level}') else: if print_out: print(f'Max-level: {level - 1}') break return shapesout = compute_max_depth(shape, print_out=True, max_depth=10)" }, { "code": null, "e": 6508, "s": 6491, "text": "This will output" }, { "code": null, "e": 6616, "s": 6508, "text": "Level 1: 960.0Level 2: 480.0Level 3: 240.0Level 4: 120.0Level 5: 60.0Level 6: 30.0Level 7: 15.0Max-level: 7" }, { "code": null, "e": 6832, "s": 6616, "text": "which tells us that that we can design a U-Net as deep as this without having to worry about semantic mismatches. Conversely, we can also numerically determine the possible input shapes dimensions for a given depth:" }, { "code": null, "e": 7271, "s": 6832, "text": "low = 10high = 512depth = 8def compute_possible_shapes(low, high, depth): possible_shapes = {} for shape in range(low, high + 1): shapes = compute_max_depth(shape, max_depth=depth, print_out=False) if len(shapes) == depth: possible_shapes[shape] = shapes return possible_shapespossible_shapes = compute_possible_shapes(low, high, depth)" }, { "code": null, "e": 7288, "s": 7271, "text": "This will output" }, { "code": null, "e": 7448, "s": 7288, "text": "{256: [256, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0, 2.0], 384: [384, 192.0, 96.0, 48.0, 24.0, 12.0, 6.0, 3.0], 512: [512, 256.0, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0]}" }, { "code": null, "e": 7640, "s": 7448, "text": "which tells us that we can have 3 different input shapes with such a level 8 U-Net architecture. But I dare to say that such a network with this input size is probably not useful in practice." } ]
Java String indexOf() Method
❮ String Methods Search a string for the first occurrence of "planet": String myStr = "Hello planet earth, you are a great planet."; System.out.println(myStr.indexOf("planet")); Try it Yourself » The indexOf() method returns the position of the first occurrence of specified character(s) in a string. Tip: Use the lastIndexOf method to return the position of the last occurrence of specified character(s) in a string. There are 4 indexOf() methods: public int indexOf(String str) public int indexOf(String str, int fromIndex) public int indexOf(int char) public int indexOf(int char, int fromIndex) Find the first occurrence of the letter "e" in a string, starting the search at position 5: public class Main { public static void main(String[] args) { String myStr = "Hello planet earth, you are a great planet."; System.out.println(myStr.indexOf("e", 5)); } } Try it Yourself » We just launchedW3Schools videos Get certifiedby completinga course today! If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail: help@w3schools.com Your message has been sent to W3Schools.
[ { "code": null, "e": 19, "s": 0, "text": "\n❮ String Methods\n" }, { "code": null, "e": 73, "s": 19, "text": "Search a string for the first occurrence of \"planet\":" }, { "code": null, "e": 180, "s": 73, "text": "String myStr = \"Hello planet earth, you are a great planet.\";\nSystem.out.println(myStr.indexOf(\"planet\"));" }, { "code": null, "e": 200, "s": 180, "text": "\nTry it Yourself »\n" }, { "code": null, "e": 305, "s": 200, "text": "The indexOf() method returns the position of the first occurrence of specified character(s) in a string." }, { "code": null, "e": 422, "s": 305, "text": "Tip: Use the lastIndexOf method to return the position of the last occurrence of specified character(s) in a string." }, { "code": null, "e": 453, "s": 422, "text": "There are 4 indexOf() methods:" }, { "code": null, "e": 604, "s": 453, "text": "public int indexOf(String str)\npublic int indexOf(String str, int fromIndex)\npublic int indexOf(int char)\npublic int indexOf(int char, int fromIndex)\n" }, { "code": null, "e": 696, "s": 604, "text": "Find the first occurrence of the letter \"e\" in a string, starting the search at position 5:" }, { "code": null, "e": 879, "s": 696, "text": "public class Main {\n public static void main(String[] args) {\n String myStr = \"Hello planet earth, you are a great planet.\";\n System.out.println(myStr.indexOf(\"e\", 5));\n }\n}\n" }, { "code": null, "e": 899, "s": 879, "text": "\nTry it Yourself »\n" }, { "code": null, "e": 932, "s": 899, "text": "We just launchedW3Schools videos" }, { "code": null, "e": 974, "s": 932, "text": "Get certifiedby completinga course today!" }, { "code": null, "e": 1081, "s": 974, "text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:" }, { "code": null, "e": 1100, "s": 1081, "text": "help@w3schools.com" } ]
C# | How to play Beep sound through Console - GeeksforGeeks
29 Jan, 2019 Given a normal Console in C#, the task is to play Beep sound through the Console. Approach: This can be achieved with the help of Beep() method of Console Class in System package of C#. The Beep() method of Console Class is used to play a Beep sound through the Console speaker. Syntax: public static void Beep (); Exceptions: This method throws HostProtectionException if this method was executed on a server, such as SQL Server, that does not permit access to a user interface. Below programs show the use of Console.Beep() method: Program 1: // C# program to illustrate the// Console.Beep Methodusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { // Play beep sound once Console.Beep(); }}} Program 2: Play Beep sound n number of times. // C# program to illustrate the// Console.Beep Methodusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { int n = 5; // Play beep sound n times for (int i = 1; i < n; i++) Console.Beep(); }}} Note: Please run the programs on offline Visual Studio to experience the output. CSharp-Console-Class CSharp-method C# Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments C# | Method Overriding C# Dictionary with examples Difference between Ref and Out keywords in C# C# | Delegates Top 50 C# Interview Questions & Answers C# | Constructors Extension Method in C# Introduction to .NET Framework C# | Abstract Classes C# | Class and Object
[ { "code": null, "e": 24717, "s": 24689, "text": "\n29 Jan, 2019" }, { "code": null, "e": 24799, "s": 24717, "text": "Given a normal Console in C#, the task is to play Beep sound through the Console." }, { "code": null, "e": 24903, "s": 24799, "text": "Approach: This can be achieved with the help of Beep() method of Console Class in System package of C#." }, { "code": null, "e": 24996, "s": 24903, "text": "The Beep() method of Console Class is used to play a Beep sound through the Console speaker." }, { "code": null, "e": 25032, "s": 24996, "text": "Syntax: public static void Beep ();" }, { "code": null, "e": 25197, "s": 25032, "text": "Exceptions: This method throws HostProtectionException if this method was executed on a server, such as SQL Server, that does not permit access to a user interface." }, { "code": null, "e": 25251, "s": 25197, "text": "Below programs show the use of Console.Beep() method:" }, { "code": null, "e": 25262, "s": 25251, "text": "Program 1:" }, { "code": "// C# program to illustrate the// Console.Beep Methodusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { // Play beep sound once Console.Beep(); }}}", "e": 25587, "s": 25262, "text": null }, { "code": null, "e": 25633, "s": 25587, "text": "Program 2: Play Beep sound n number of times." }, { "code": "// C# program to illustrate the// Console.Beep Methodusing System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading;using System.Threading.Tasks; namespace GFG { class Program { static void Main(string[] args) { int n = 5; // Play beep sound n times for (int i = 1; i < n; i++) Console.Beep(); }}}", "e": 26020, "s": 25633, "text": null }, { "code": null, "e": 26101, "s": 26020, "text": "Note: Please run the programs on offline Visual Studio to experience the output." }, { "code": null, "e": 26122, "s": 26101, "text": "CSharp-Console-Class" }, { "code": null, "e": 26136, "s": 26122, "text": "CSharp-method" }, { "code": null, "e": 26139, "s": 26136, "text": "C#" }, { "code": null, "e": 26237, "s": 26139, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26246, "s": 26237, "text": "Comments" }, { "code": null, "e": 26259, "s": 26246, "text": "Old Comments" }, { "code": null, "e": 26282, "s": 26259, "text": "C# | Method Overriding" }, { "code": null, "e": 26310, "s": 26282, "text": "C# Dictionary with examples" }, { "code": null, "e": 26356, "s": 26310, "text": "Difference between Ref and Out keywords in C#" }, { "code": null, "e": 26371, "s": 26356, "text": "C# | Delegates" }, { "code": null, "e": 26411, "s": 26371, "text": "Top 50 C# Interview Questions & Answers" }, { "code": null, "e": 26429, "s": 26411, "text": "C# | Constructors" }, { "code": null, "e": 26452, "s": 26429, "text": "Extension Method in C#" }, { "code": null, "e": 26483, "s": 26452, "text": "Introduction to .NET Framework" }, { "code": null, "e": 26505, "s": 26483, "text": "C# | Abstract Classes" } ]
Python – Split a String by Custom Lengths
11 Oct, 2020 Given a String, perform split of strings on the basis of custom lengths. Input : test_str = ‘geeksforgeeks’, cus_lens = [4, 3, 2, 3, 1] Output : [‘geek’, ‘sfo’, ‘rg’, ‘eek’, ‘s’] Explanation : Strings separated by custom lengths.Input : test_str = ‘geeksforgeeks’, cus_lens = [10, 3] Output : [‘geeksforge’, ‘eks’] Explanation : Strings separated by custom lengths. Method #1 : Using slicing + loop In this, we perform task of slicing to cater custom lengths and loop is used to iterate for all the lengths. Python3 # Python3 code to demonstrate working of # Multilength String Split# Using loop + slicing # initializing stringtest_str = 'geeksforgeeks' # printing original stringprint("The original string is : " + str(test_str)) # initializing length listcus_lens = [5, 3, 2, 3] res = []strt = 0for size in cus_lens: # slicing for particular length res.append(test_str[strt : strt + size]) strt += size # printing result print("Strings after splitting : " + str(res)) The original string is : geeksforgeeks Strings after splitting : ['geeks', 'for', 'ge', 'eks'] Method #2 : Using join() + list comprehension + next() This is yet another way in which this task can be performed. In this, we perform task of getting character till length using next(), iterator method, provides more efficient solution. Lastly, join() is used to convert each character list to string. Python3 # Python3 code to demonstrate working of # Multilength String Split# Using join() + list comprehension + next() # initializing stringtest_str = 'geeksforgeeks' # printing original stringprint("The original string is : " + str(test_str)) # initializing length listcus_lens = [5, 3, 2, 3] # join() performs characters to string conversion# list comprehension provides shorthand to solve problemstritr = iter(test_str)res = ["".join(next(stritr) for idx in range(size)) for size in cus_lens] # printing result print("Strings after splitting : " + str(res)) The original string is : geeksforgeeks Strings after splitting : ['geeks', 'for', 'ge', 'eks'] Python string-programs Python Python Programs Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 28, "s": 0, "text": "\n11 Oct, 2020" }, { "code": null, "e": 101, "s": 28, "text": "Given a String, perform split of strings on the basis of custom lengths." }, { "code": null, "e": 396, "s": 101, "text": "Input : test_str = ‘geeksforgeeks’, cus_lens = [4, 3, 2, 3, 1] Output : [‘geek’, ‘sfo’, ‘rg’, ‘eek’, ‘s’] Explanation : Strings separated by custom lengths.Input : test_str = ‘geeksforgeeks’, cus_lens = [10, 3] Output : [‘geeksforge’, ‘eks’] Explanation : Strings separated by custom lengths. " }, { "code": null, "e": 429, "s": 396, "text": "Method #1 : Using slicing + loop" }, { "code": null, "e": 538, "s": 429, "text": "In this, we perform task of slicing to cater custom lengths and loop is used to iterate for all the lengths." }, { "code": null, "e": 546, "s": 538, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of # Multilength String Split# Using loop + slicing # initializing stringtest_str = 'geeksforgeeks' # printing original stringprint(\"The original string is : \" + str(test_str)) # initializing length listcus_lens = [5, 3, 2, 3] res = []strt = 0for size in cus_lens: # slicing for particular length res.append(test_str[strt : strt + size]) strt += size # printing result print(\"Strings after splitting : \" + str(res)) ", "e": 1025, "s": 546, "text": null }, { "code": null, "e": 1121, "s": 1025, "text": "The original string is : geeksforgeeks\nStrings after splitting : ['geeks', 'for', 'ge', 'eks']\n" }, { "code": null, "e": 1176, "s": 1121, "text": "Method #2 : Using join() + list comprehension + next()" }, { "code": null, "e": 1426, "s": 1176, "text": "This is yet another way in which this task can be performed. In this, we perform task of getting character till length using next(), iterator method, provides more efficient solution. Lastly, join() is used to convert each character list to string." }, { "code": null, "e": 1434, "s": 1426, "text": "Python3" }, { "code": "# Python3 code to demonstrate working of # Multilength String Split# Using join() + list comprehension + next() # initializing stringtest_str = 'geeksforgeeks' # printing original stringprint(\"The original string is : \" + str(test_str)) # initializing length listcus_lens = [5, 3, 2, 3] # join() performs characters to string conversion# list comprehension provides shorthand to solve problemstritr = iter(test_str)res = [\"\".join(next(stritr) for idx in range(size)) for size in cus_lens] # printing result print(\"Strings after splitting : \" + str(res)) ", "e": 1998, "s": 1434, "text": null }, { "code": null, "e": 2094, "s": 1998, "text": "The original string is : geeksforgeeks\nStrings after splitting : ['geeks', 'for', 'ge', 'eks']\n" }, { "code": null, "e": 2117, "s": 2094, "text": "Python string-programs" }, { "code": null, "e": 2124, "s": 2117, "text": "Python" }, { "code": null, "e": 2140, "s": 2124, "text": "Python Programs" } ]
Implementing Photomosaics
27 Dec, 2021 Introduction A photomosaic is an image split into a grid of rectangles, with each replaced by another image that matches the target (the image you ultimately want to appear in the photomosaic). In other words, if you look at a photomosaic from a distance, you see the target image; but if you come closer, you will see that the image actually consists of many smaller images. This works because of how the human eye works. There are two kinds of mosaic, depending on how the matching is done. In simpler kind, each part of the target image is averaged down to a single color. Each of the library images is also reduced to a single color. Each part of the target image is then replaced with one from the library where these colors are as similar as possible. In effect, the target image is reduced in resolution (by downsampling), and then each of the resulting pixels is replaced with an image whose average color matches that pixel. In the more advanced kind of photographic mosaic, the target image is not downsampled, and the matching is done by comparing each pixel in the rectangle to the corresponding pixel from each library image. The rectangle in the target is then replaced with the library image that minimizes the total difference. This requires much more computation than the simple kind, but the results can be much better since the pixel-by-pixel matching can preserve the resolution of the target image. How to create Photomosaics? Read the tile images, which will replace the tiles in the original image.Read the target image and split it into an M×N grid of tiles.For each tile, find the best match from the input images.Create the final mosaic by arranging the selected input images in an M×N grid. Read the tile images, which will replace the tiles in the original image. Read the target image and split it into an M×N grid of tiles. For each tile, find the best match from the input images. Create the final mosaic by arranging the selected input images in an M×N grid. Splitting the images into tiles Now let’s look at how to calculate the coordinates for a single tile from this grid. The tile with index (i, j) has a top-left corner coordinate of (i*w, i*j) and a bottom-right corner coordinate of ((i+1)*w, (j+1)*h), where w and h stand for the width and height of a tile, respectively. These can be used with the PIL to crop and create a tile from this image.Averaging Color ValuesEvery pixel in an image has a color that can be represented by its red, green, and blue values. In this case, you are using 8-bit images, so each of these components has an 8-bit value in the range [0, 255]. Given an image with a total of N pixels, the average RGB is calculated as follows:Matching ImagesFor each tile in the target image, you need to find a matching image from the images in the input folder specified by the user. To determine whether two images match, use the average RGB values. The closest match is the image with the closest average RGB value. The simplest way to do this is to calculate the distance between the RGB values in a pixel to find the best match among the input images. You can use the following distance calculation for 3D points from geometry:Now lets try to code this out Python3 #Importing the required librariesimport os, random, argparsefrom PIL import Imageimport imghdrimport numpy as np def getAverageRGBOld(image): """ Given PIL Image, return average value of color as (r, g, b) """ # no. of pixels in image npixels = image.size[0]*image.size[1] # get colors as [(cnt1, (r1, g1, b1)), ...] 
    cols = image.getcolors(npixels)
    # get [(c1*r1, c1*g1, c1*b1),...]
    sumRGB = [(x[0]*x[1][0], x[0]*x[1][1], x[0]*x[1][2]) for x in cols]
    # calculate (sum(ci*ri)/np, sum(ci*gi)/np, sum(ci*bi)/np)
    # the zip gives us [(c1*r1, c2*r2, ..), (c1*g1, c2*g2,...)...]
    avg = tuple([int(sum(x)/npixels) for x in zip(*sumRGB)])
    return avg

def getAverageRGB(image):
    """
    Given PIL Image, return average value of color as (r, g, b)
    """
    # get image as numpy array
    im = np.array(image)
    # get shape
    w, h, d = im.shape
    # get average
    return tuple(np.average(im.reshape(w*h, d), axis=0))

def splitImage(image, size):
    """
    Given Image and dims (rows, cols) returns an m*n list of Images
    """
    W, H = image.size[0], image.size[1]
    m, n = size
    w, h = int(W/n), int(H/m)
    # image list
    imgs = []
    # generate list of dimensions
    for j in range(m):
        for i in range(n):
            # append cropped image
            imgs.append(image.crop((i*w, j*h, (i+1)*w, (j+1)*h)))
    return imgs

def getImages(imageDir):
    """
    given a directory of images, return a list of Images
    """
    files = os.listdir(imageDir)
    images = []
    for file in files:
        filePath = os.path.abspath(os.path.join(imageDir, file))
        try:
            # explicit load so we don't run into resource crunch
            fp = open(filePath, "rb")
            im = Image.open(fp)
            images.append(im)
            # force loading image data from file
            im.load()
            # close the file
            fp.close()
        except:
            # skip
            print("Invalid image: %s" % (filePath,))
    return images

def getImageFilenames(imageDir):
    """
    given a directory of images, return a list of Image file names
    """
    files = os.listdir(imageDir)
    filenames = []
    for file in files:
        filePath = os.path.abspath(os.path.join(imageDir, file))
        try:
            imgType = imghdr.what(filePath)
            if imgType:
                filenames.append(filePath)
        except:
            # skip
            print("Invalid image: %s" % (filePath,))
    return filenames

def getBestMatchIndex(input_avg, avgs):
    """
    return index of best Image match based on RGB value distance
    """
    # input image average
    avg = input_avg
    # get the closest RGB value to input, based on x/y/z distance
    # (squared distance suffices for finding the minimum, since the
    # square root is monotonic)
    index = 0
    min_index = 0
    min_dist = float("inf")
    for val in avgs:
        dist = ((val[0] - avg[0])*(val[0] - avg[0]) +
                (val[1] - avg[1])*(val[1] - avg[1]) +
                (val[2] - avg[2])*(val[2] - avg[2]))
        if dist < min_dist:
            min_dist = dist
            min_index = index
        index += 1
    return min_index

def createImageGrid(images, dims):
    """
    Given a list of images and a grid size (m, n), create
    a grid of images.
    """
    m, n = dims
    # sanity check
    assert m*n == len(images)
    # get max height and width of images
    # ie, not assuming they are all equal
    width = max([img.size[0] for img in images])
    height = max([img.size[1] for img in images])
    # create output image
    grid_img = Image.new('RGB', (n*width, m*height))
    # paste images
    for index in range(len(images)):
        row = int(index/n)
        col = index - n*row
        grid_img.paste(images[index], (col*width, row*height))
    return grid_img

def createPhotomosaic(target_image, input_images, grid_size,
                      reuse_images=True):
    """
    Creates photomosaic given target and input images.
""" print('splitting input image...') # split target image target_images = splitImage(target_image, grid_size) print('finding image matches...') # for each target image, pick one from input output_images = [] # for user feedback count = 0 batch_size = int(len(target_images)/10) # calculate input image averages avgs = [] for img in input_images: avgs.append(getAverageRGB(img)) for img in target_images: # target sub-image average avg = getAverageRGB(img) # find match index match_index = getBestMatchIndex(avg, avgs) output_images.append(input_images[match_index]) # user feedback if count > 0 and batch_size > 10 and count % batch_size is 0: print('processed %d of %d...' %(count, len(target_images))) count += 1 # remove selected image from input if flag set if not reuse_images: input_images.remove(match) print('creating mosaic...') # draw mosaic to image mosaic_image = createImageGrid(output_images, grid_size) # return mosaic return mosaic_image # Gather our code in a main() functiondef main(): # Command line args are in sys.argv[1], sys.argv[2] .. # sys.argv[0] is the script name itself and can be ignored # parse arguments parser = argparse.ArgumentParser (description='Creates a photomosaic from input images') # add arguments parser.add_argument('--target-image', dest='target_image', required=True) parser.add_argument('--input-folder', dest='input_folder', required=True) parser.add_argument('--grid-size', nargs=2, dest='grid_size', required=True) parser.add_argument('--output-file', dest='outfile', required=False) args = parser.parse_args() ###### INPUTS ###### # target image target_image = Image.open(args.target_image) # input images print('reading input folder...') input_images = getImages(args.input_folder) # check if any valid input images found if input_images == []: print('No input images found in %s. Exiting.' % (args.input_folder, )) exit() # shuffle list - to get a more varied output? random.shuffle(input_images) # size of grid grid_size = (int(args.grid_size[0]), int(args.grid_size[1])) # output output_filename = 'mosaic.png' if args.outfile: output_filename = args.outfile # re-use any image in input reuse_images = True # resize the input to fit original image size? resize_input = True ##### END INPUTS ##### print('starting photomosaic creation...') # if images can't be reused, ensure m*n <= num_of_images if not reuse_images: if grid_size[0]*grid_size[1] > len(input_images): print('grid size less than number of images') exit() # resizing input if resize_input: print('resizing images...') # for given grid size, compute max dims w,h of tiles dims = (int(target_image.size[0]/grid_size[1]), int(target_image.size[1]/grid_size[0])) print("max tile dims: %s" % (dims,)) # resize for img in input_images: img.thumbnail(dims) # create photomosaic mosaic_image = createPhotomosaic(target_image, input_images, grid_size, reuse_images) # write out mosaic mosaic_image.save(output_filename, 'PNG') print("saved output to %s" % (output_filename,)) print('done.') # Standard boilerplate to call the main() function to begin# the program.if __name__ == '__main__': main() python test.py --target-image test-data/a.jpg --input-folder test-data/set1/ --grid-size 128 128 Output: Reference Links:1) Python Playground by Mahesh Venkitachalam. 2) PILLOW docs 3) Wikipedia – PhotomosaicsThis article is contributed by Subhajit Saha. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. 
See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. Akanksha_Rai ManasChhabra2 kalrap615 gulshankumarar231 germanshephered48 Project Python Technical Scripter Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 54, "s": 26, "text": "\n27 Dec, 2021" }, { "code": null, "e": 67, "s": 54, "text": "Introduction" }, { "code": null, "e": 477, "s": 67, "text": "A photomosaic is an image split into a grid of rectangles, with each replaced by another image that matches the target (the image you ultimately want to appear in the photomosaic). In other words, if you look at a photomosaic from a distance, you see the target image; but if you come closer, you will see that the image actually consists of many smaller images. This works because of how the human eye works." }, { "code": null, "e": 988, "s": 477, "text": "There are two kinds of mosaic, depending on how the matching is done. In simpler kind, each part of the target image is averaged down to a single color. Each of the library images is also reduced to a single color. Each part of the target image is then replaced with one from the library where these colors are as similar as possible. In effect, the target image is reduced in resolution (by downsampling), and then each of the resulting pixels is replaced with an image whose average color matches that pixel." }, { "code": null, "e": 1474, "s": 988, "text": "In the more advanced kind of photographic mosaic, the target image is not downsampled, and the matching is done by comparing each pixel in the rectangle to the corresponding pixel from each library image. The rectangle in the target is then replaced with the library image that minimizes the total difference. This requires much more computation than the simple kind, but the results can be much better since the pixel-by-pixel matching can preserve the resolution of the target image." }, { "code": null, "e": 1503, "s": 1474, "text": "How to create Photomosaics? " }, { "code": null, "e": 1773, "s": 1503, "text": "Read the tile images, which will replace the tiles in the original image.Read the target image and split it into an M×N grid of tiles.For each tile, find the best match from the input images.Create the final mosaic by arranging the selected input images in an M×N grid." }, { "code": null, "e": 1847, "s": 1773, "text": "Read the tile images, which will replace the tiles in the original image." }, { "code": null, "e": 1909, "s": 1847, "text": "Read the target image and split it into an M×N grid of tiles." }, { "code": null, "e": 1967, "s": 1909, "text": "For each tile, find the best match from the input images." }, { "code": null, "e": 2046, "s": 1967, "text": "Create the final mosaic by arranging the selected input images in an M×N grid." }, { "code": null, "e": 2078, "s": 2046, "text": "Splitting the images into tiles" }, { "code": null, "e": 3272, "s": 2078, "text": "Now let’s look at how to calculate the coordinates for a single tile from this grid. The tile with index (i, j) has a top-left corner coordinate of (i*w, i*j) and a bottom-right corner coordinate of ((i+1)*w, (j+1)*h), where w and h stand for the width and height of a tile, respectively. These can be used with the PIL to crop and create a tile from this image.Averaging Color ValuesEvery pixel in an image has a color that can be represented by its red, green, and blue values. In this case, you are using 8-bit images, so each of these components has an 8-bit value in the range [0, 255]. Given an image with a total of N pixels, the average RGB is calculated as follows:Matching ImagesFor each tile in the target image, you need to find a matching image from the images in the input folder specified by the user. 
To determine whether two images match, use the average RGB values. The closest match is the image with the closest average RGB value. The simplest way to do this is to calculate the distance between the RGB values in a pixel to find the best match among the input images. You can use the following distance calculation for 3D points from geometry:Now lets try to code this out" }, { "code": null, "e": 3280, "s": 3272, "text": "Python3" }, { "code": "#Importing the required librariesimport os, random, argparsefrom PIL import Imageimport imghdrimport numpy as np def getAverageRGBOld(image): \"\"\" Given PIL Image, return average value of color as (r, g, b) \"\"\" # no. of pixels in image npixels = image.size[0]*image.size[1] # get colors as [(cnt1, (r1, g1, b1)), ...] cols = image.getcolors(npixels) # get [(c1*r1, c1*g1, c1*g2),...] sumRGB = [(x[0]*x[1][0], x[0]*x[1][1], x[0]*x[1][2]) for x in cols] # calculate (sum(ci*ri)/np, sum(ci*gi)/np, sum(ci*bi)/np) # the zip gives us [(c1*r1, c2*r2, ..), (c1*g1, c1*g2,...)...] avg = tuple([int(sum(x)/npixels) for x in zip(*sumRGB)]) return avg def getAverageRGB(image): \"\"\" Given PIL Image, return average value of color as (r, g, b) \"\"\" # get image as numpy array im = np.array(image) # get shape w,h,d = im.shape # get average return tuple(np.average(im.reshape(w*h, d), axis=0)) def splitImage(image, size): \"\"\" Given Image and dims (rows, cols) returns an m*n list of Images \"\"\" W, H = image.size[0], image.size[1] m, n = size w, h = int(W/n), int(H/m) # image list imgs = [] # generate list of dimensions for j in range(m): for i in range(n): # append cropped image imgs.append(image.crop((i*w, j*h, (i+1)*w, (j+1)*h))) return imgs def getImages(imageDir): \"\"\" given a directory of images, return a list of Images \"\"\" files = os.listdir(imageDir) images = [] for file in files: filePath = os.path.abspath(os.path.join(imageDir, file)) try: # explicit load so we don't run into resource crunch fp = open(filePath, \"rb\") im = Image.open(fp) images.append(im) # force loading image data from file im.load() # close the file fp.close() except: # skip print(\"Invalid image: %s\" % (filePath,)) return images def getImageFilenames(imageDir): \"\"\" given a directory of images, return a list of Image file names \"\"\" files = os.listdir(imageDir) filenames = [] for file in files: filePath = os.path.abspath(os.path.join(imageDir, file)) try: imgType = imghdr.what(filePath) if imgType: filenames.append(filePath) except: # skip print(\"Invalid image: %s\" % (filePath,)) return filenames def getBestMatchIndex(input_avg, avgs): \"\"\" return index of best Image match based on RGB value distance \"\"\" # input image average avg = input_avg # get the closest RGB value to input, based on x/y/z distance index = 0 min_index = 0 min_dist = float(\"inf\") for val in avgs: dist = ((val[0] - avg[0])*(val[0] - avg[0]) + (val[1] - avg[1])*(val[1] - avg[1]) + (val[2] - avg[2])*(val[2] - avg[2])) if dist < min_dist: min_dist = dist min_index = index index += 1 return min_index def createImageGrid(images, dims): \"\"\" Given a list of images and a grid size (m, n), create a grid of images. 
\"\"\" m, n = dims # sanity check assert m*n == len(images) # get max height and width of images # ie, not assuming they are all equal width = max([img.size[0] for img in images]) height = max([img.size[1] for img in images]) # create output image grid_img = Image.new('RGB', (n*width, m*height)) # paste images for index in range(len(images)): row = int(index/n) col = index - n*row grid_img.paste(images[index], (col*width, row*height)) return grid_img def createPhotomosaic(target_image, input_images, grid_size, reuse_images=True): \"\"\" Creates photomosaic given target and input images. \"\"\" print('splitting input image...') # split target image target_images = splitImage(target_image, grid_size) print('finding image matches...') # for each target image, pick one from input output_images = [] # for user feedback count = 0 batch_size = int(len(target_images)/10) # calculate input image averages avgs = [] for img in input_images: avgs.append(getAverageRGB(img)) for img in target_images: # target sub-image average avg = getAverageRGB(img) # find match index match_index = getBestMatchIndex(avg, avgs) output_images.append(input_images[match_index]) # user feedback if count > 0 and batch_size > 10 and count % batch_size is 0: print('processed %d of %d...' %(count, len(target_images))) count += 1 # remove selected image from input if flag set if not reuse_images: input_images.remove(match) print('creating mosaic...') # draw mosaic to image mosaic_image = createImageGrid(output_images, grid_size) # return mosaic return mosaic_image # Gather our code in a main() functiondef main(): # Command line args are in sys.argv[1], sys.argv[2] .. # sys.argv[0] is the script name itself and can be ignored # parse arguments parser = argparse.ArgumentParser (description='Creates a photomosaic from input images') # add arguments parser.add_argument('--target-image', dest='target_image', required=True) parser.add_argument('--input-folder', dest='input_folder', required=True) parser.add_argument('--grid-size', nargs=2, dest='grid_size', required=True) parser.add_argument('--output-file', dest='outfile', required=False) args = parser.parse_args() ###### INPUTS ###### # target image target_image = Image.open(args.target_image) # input images print('reading input folder...') input_images = getImages(args.input_folder) # check if any valid input images found if input_images == []: print('No input images found in %s. Exiting.' % (args.input_folder, )) exit() # shuffle list - to get a more varied output? random.shuffle(input_images) # size of grid grid_size = (int(args.grid_size[0]), int(args.grid_size[1])) # output output_filename = 'mosaic.png' if args.outfile: output_filename = args.outfile # re-use any image in input reuse_images = True # resize the input to fit original image size? 
resize_input = True ##### END INPUTS ##### print('starting photomosaic creation...') # if images can't be reused, ensure m*n <= num_of_images if not reuse_images: if grid_size[0]*grid_size[1] > len(input_images): print('grid size less than number of images') exit() # resizing input if resize_input: print('resizing images...') # for given grid size, compute max dims w,h of tiles dims = (int(target_image.size[0]/grid_size[1]), int(target_image.size[1]/grid_size[0])) print(\"max tile dims: %s\" % (dims,)) # resize for img in input_images: img.thumbnail(dims) # create photomosaic mosaic_image = createPhotomosaic(target_image, input_images, grid_size, reuse_images) # write out mosaic mosaic_image.save(output_filename, 'PNG') print(\"saved output to %s\" % (output_filename,)) print('done.') # Standard boilerplate to call the main() function to begin# the program.if __name__ == '__main__': main()", "e": 10147, "s": 3280, "text": null }, { "code": null, "e": 10244, "s": 10147, "text": "python test.py --target-image test-data/a.jpg --input-folder test-data/set1/ --grid-size 128 128" }, { "code": null, "e": 10252, "s": 10244, "text": "Output:" }, { "code": null, "e": 10777, "s": 10252, "text": "Reference Links:1) Python Playground by Mahesh Venkitachalam. 2) PILLOW docs 3) Wikipedia – PhotomosaicsThis article is contributed by Subhajit Saha. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 10790, "s": 10777, "text": "Akanksha_Rai" }, { "code": null, "e": 10804, "s": 10790, "text": "ManasChhabra2" }, { "code": null, "e": 10814, "s": 10804, "text": "kalrap615" }, { "code": null, "e": 10832, "s": 10814, "text": "gulshankumarar231" }, { "code": null, "e": 10850, "s": 10832, "text": "germanshephered48" }, { "code": null, "e": 10858, "s": 10850, "text": "Project" }, { "code": null, "e": 10865, "s": 10858, "text": "Python" }, { "code": null, "e": 10884, "s": 10865, "text": "Technical Scripter" } ]
Difference between Public and Private blockchain
11 May, 2022 1. What is Public Blockchain ? Public blockchains are open networks that allow anyone to participate in the network i.e. public blockchain is permissionless. In this type of blockchain anyone can join the network and read, write, or participate within the blockchain. A public blockchain is decentralized and does not have a single entity which controls the network. Data on a public blockchain are secure as it is not possible to modify or alter data once they have been validated on the blockchain. Some features of public blockchain are : High Security – It is secure Due to Mining (51% rule). Open Environment – The public blockchain is open for all. Anonymous Nature – In public blockchain every one is anonymous. There is no need to use your real name, or real identity, therefore everything would stay hidden, and no one can track you based on that. No Regulations – Public blockchain doesn’t have any regulations that the nodes have to follow. So, there is no limit to how one can use this platform for their betterment Full Transparency – Public blockchain allow you to see the ledger anytime you want. There is no scope for any corruption or any discrepancies and everyone has to maintain the ledger and participate in consensus. True Decentralization – In this type of blockchain, there isn’t a centralized entity. Thus, the responsibility of maintaining the network is solely on the nodes. They are updating the ledger, and it promotes fairness with help from a consensus algorithm . Full User Empowerment – Typically, in any network user has to follow a lot of rules and regulations. In many cases, the rules might not even be a fair one. But not in public blockchain networks. Here, all of the users are empowered as there is no central authority to look over their every move. Immutable – When something is written to the blockchain, it can not be changed. Distributed – The database is not centralized like in a client-server approach, and all nodes in the blockchain participate in the transaction validation. 2. What is Private Blockchain ? A private blockchain is managed by a network administrator and participants need consent to join the network i.e., a private blockchain is a permissioned blockchain. There are one or more entities which control the network and this leads to reliance on third-parties to transact. In this type of blockchain only entity participating in the transaction have knowledge about the transaction performed whereas others will not able to access it i.e. transactions are private. Some of the features of private blockchain are : Full Privacy – It focus on privacy concerns. Private Blockchain are more centralized. High Efficiency and Faster Transactions – When you distribute the nodes locally, but also have much less nodes to participate in the ledger, the performance is faster. Better Scalability – Being able to add nodes and services on demand can provide a great advantage to the enterprise. Difference between Public and Private blockchain : varshagumber28 rkbhola5 Blockchain Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Calculate the Sum of Matrix or Array columns in R Programming – colSums() Function
03 Jun, 2020

colSums() function in R Language is used to compute the sums of matrix or array columns.

Syntax: colSums(x, na.rm = FALSE, dims = 1)

Parameters:
x: matrix or array
dims: an integer value whose dimensions are regarded as 'columns' to sum over. The sum is taken over dimensions 1:dims.

Example 1:

# R program to illustrate
# colSums function

# Initializing a matrix with 3
# rows and 3 columns
x <- matrix(rep(1:9), 3, 3)

# Getting the matrix representation
x

# Calling the colSums() function
colSums(x)

Output:

     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

[1]  6 15 24

Example 2:

# R program to illustrate
# colSums function

# Initializing a 3D array
x <- array(1:12, c(2, 3, 3))

# Getting the array representation
x

# Calling the colSums() function
colSums(x, dims = 1)
colSums(x, dims = 2)

Output:

, , 1
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6

, , 2
     [,1] [,2] [,3]
[1,]    7    9   11
[2,]    8   10   12

, , 3
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6

     [,1] [,2] [,3]
[1,]    3   15    3
[2,]    7   19    7
[3,]   11   23   11

[1] 21 57 21
Check whether a Numpy array contains a specified row
05 Sep, 2020

In this article we will learn how to check whether a specified row is present in a NumPy array. If the given list is present in the NumPy array as a row, the output is True, else False. The list counts as present if any row of the NumPy array matches it, with all elements in the given order. This could be done with a simple loop that compares each row against the given list, but it is more easily understood and implemented using the built-in method numpy.ndarray.tolist().

Syntax: ndarray.tolist()

Parameters: none

Returns: the possibly nested list of array elements.

Examples:

Arr = [[1,2,3,4,5], [6,7,8,9,10], [11,12,13,14,15], [16,17,18,19,20]]

and the given lists are as follows:

lst = [1,2,3,4,5]  -> True, as it matches row 0.
[16,17,20,19,18]   -> False, as it doesn't match any row.
[3,2,5,-4,5]       -> False, as it doesn't match any row.
[11,12,13,14,15]   -> True, as it matches row 2.

Below is the implementation with an example:

Python3

# importing package
import numpy

# create numpy array
arr = numpy.array([[1, 2, 3, 4, 5],
                   [6, 7, 8, 9, 10],
                   [11, 12, 13, 14, 15],
                   [16, 17, 18, 19, 20]])

# view array
print(arr)

# check for some lists
print([1, 2, 3, 4, 5] in arr.tolist())
print([16, 17, 20, 19, 18] in arr.tolist())
print([3, 2, 5, -4, 5] in arr.tolist())
print([11, 12, 13, 14, 15] in arr.tolist())

Output:

[[ 1  2  3  4  5]
 [ 6  7  8  9 10]
 [11 12 13 14 15]
 [16 17 18 19 20]]
True
False
False
True
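As a side note, here is a fully vectorized sketch (not part of the original article; the helper name contains_row is my own) that avoids converting to a Python list. It assumes the candidate list has the same length as the array's rows:

import numpy as np

arr = np.array([[1, 2, 3, 4, 5],
                [6, 7, 8, 9, 10],
                [11, 12, 13, 14, 15],
                [16, 17, 18, 19, 20]])

def contains_row(arr, row):
    # (arr == row) broadcasts the comparison over every row,
    # .all(axis=1) keeps only rows that match in every position,
    # .any() reports whether at least one such row exists
    return bool((np.asarray(row) == arr).all(axis=1).any())

print(contains_row(arr, [1, 2, 3, 4, 5]))       # True
print(contains_row(arr, [16, 17, 20, 19, 18]))  # False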
Sum of dependencies in a graph
08 Jul, 2022

Given a directed and connected graph with n nodes. If there is an edge from u to v then u depends on v. Our task is to find the sum of dependencies for every node.

Example:

For the graph in the diagram,
A depends on C and D, i.e. 2
B depends on C, i.e. 1
D depends on C, i.e. 1
And C depends on none.
Hence answer -> 0 + 1 + 1 + 2 = 4

Asked in: Flipkart Interview

The idea is to walk the adjacency list, count how many edges leave each vertex, and return the total number of edges.

Implementation:

C++

// C++ program to find the sum of dependencies
#include <bits/stdc++.h>
using namespace std;

// To add an edge
void addEdge(vector<int> adj[], int u, int v)
{
    adj[u].push_back(v);
}

// find the sum of all dependencies
int findSum(vector<int> adj[], int V)
{
    int sum = 0;

    // just find the size at each vector's index
    for (int u = 0; u < V; u++)
        sum += adj[u].size();

    return sum;
}

// Driver code
int main()
{
    int V = 4;
    vector<int> adj[V];
    addEdge(adj, 0, 2);
    addEdge(adj, 0, 3);
    addEdge(adj, 1, 3);
    addEdge(adj, 2, 3);

    cout << "Sum of dependencies is "
         << findSum(adj, V);
    return 0;
}

Java

// Java program to find the sum of dependencies
import java.util.Vector;

class Test
{
    // To add an edge
    static void addEdge(Vector<Integer> adj[], int u, int v)
    {
        adj[u].addElement((v));
    }

    // find the sum of all dependencies
    static int findSum(Vector<Integer> adj[], int V)
    {
        int sum = 0;

        // just find the size at each vector's index
        for (int u = 0; u < V; u++)
            sum += adj[u].size();

        return sum;
    }

    // Driver method
    public static void main(String[] args)
    {
        int V = 4;
        @SuppressWarnings("unchecked")
        Vector<Integer> adj[] = new Vector[V];

        for (int i = 0; i < adj.length; i++) {
            adj[i] = new Vector<>();
        }

        addEdge(adj, 0, 2);
        addEdge(adj, 0, 3);
        addEdge(adj, 1, 3);
        addEdge(adj, 2, 3);

        System.out.println("Sum of dependencies is " +
                           findSum(adj, V));
    }
}
// This code is contributed by Gaurav Miglani

Python3

# Python3 program to find the sum
# of dependencies

# To add an edge
def addEdge(adj, u, v):
    adj[u].append(v)

# Find the sum of all dependencies
def findSum(adj, V):
    sum = 0

    # Just find the size at each
    # vector's index
    for u in range(V):
        sum += len(adj[u])

    return sum

# Driver code
if __name__ == '__main__':
    V = 4
    adj = [[] for i in range(V)]

    addEdge(adj, 0, 2)
    addEdge(adj, 0, 3)
    addEdge(adj, 1, 3)
    addEdge(adj, 2, 3)

    print("Sum of dependencies is",
          findSum(adj, V))

# This code is contributed by rutvik_56

C#

// C# program to find the sum of dependencies
using System;
using System.Collections;

class GFG{

// To add an edge
static void addEdge(ArrayList []adj, int u,
                    int v)
{
    adj[u].Add(v);
}

// Find the sum of all dependencies
static int findSum(ArrayList []adj, int V)
{
    int sum = 0;

    // Just find the size at each
    // vector's index
    for(int u = 0; u < V; u++)
        sum += adj[u].Count;

    return sum;
}

// Driver code
public static void Main(string[] args)
{
    int V = 4;

    ArrayList []adj = new ArrayList[V];

    for(int i = 0; i < V; i++)
    {
        adj[i] = new ArrayList();
    }

    addEdge(adj, 0, 2);
    addEdge(adj, 0, 3);
    addEdge(adj, 1, 3);
    addEdge(adj, 2, 3);

    Console.Write("Sum of dependencies is " +
                  findSum(adj, V));
}
}

// This code is contributed by pratham76

Output:

Sum of dependencies is 4

Time complexity: O(V), where V is the number of vertices in the graph.

This article is contributed by Sahil Chhabra (akku).
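As a closing aside (a sketch, not part of the original article): since every dependency is exactly one directed edge, the whole computation collapses to counting edges, i.e. summing the adjacency-list sizes:

# The sum of dependencies of a directed graph is just its edge count
adj = [[2, 3], [3], [3], []]  # same edges as in the example above
print("Sum of dependencies is", sum(map(len, adj)))  # Sum of dependencies is 4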
rutvik_56 pratham76 hardikkoriintern Flipkart Graph Flipkart Graph Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 58, "s": 27, "text": " \n08 Jul, 2022\n" }, { "code": null, "e": 228, "s": 58, "text": "Given a directed and connected graph with n nodes. If there is an edge from u to v then u depends on v. Our task was to find out the sum of dependencies for every node. " }, { "code": null, "e": 238, "s": 228, "text": "Example: " }, { "code": null, "e": 398, "s": 238, "text": "For the graph in diagram, \nA depends on C and D i.e. 2 \nB depends on C i.e. 1 \nD depends on C i.e. 1 \nAnd C depends on none. \nHence answer -> 0 + 1 + 1 + 2 = 4" }, { "code": null, "e": 428, "s": 398, "text": "Asked in : Flipkart Interview" }, { "code": null, "e": 550, "s": 428, "text": "Idea is to check adjacency list and find how many edges are there from each vertex and return the total number of edges. " }, { "code": null, "e": 566, "s": 550, "text": "Implementation:" }, { "code": null, "e": 570, "s": 566, "text": "C++" }, { "code": null, "e": 575, "s": 570, "text": "Java" }, { "code": null, "e": 583, "s": 575, "text": "Python3" }, { "code": null, "e": 586, "s": 583, "text": "C#" }, { "code": "\n\n\n\n\n\n\n// C++ program to find the sum of dependencies\n#include <bits/stdc++.h>\nusing namespace std;\n \n// To add an edge\nvoid addEdge(vector <int> adj[], int u,int v)\n{\n adj[u].push_back(v);\n}\n \n// find the sum of all dependencies\nint findSum(vector<int> adj[], int V)\n{\n int sum = 0;\n \n // just find the size at each vector's index\n for (int u = 0; u < V; u++)\n sum += adj[u].size();\n \n return sum;\n}\n \n// Driver code\nint main()\n{\n int V = 4;\n vector<int >adj[V];\n addEdge(adj, 0, 2);\n addEdge(adj, 0, 3);\n addEdge(adj, 1, 3);\n addEdge(adj, 2, 3);\n \n cout << \"Sum of dependencies is \"\n << findSum(adj, V);\n return 0;\n}\n\n\n\n\n\n", "e": 1275, "s": 596, "text": null }, { "code": "\n\n\n\n\n\n\n// Java program to find the sum of dependencies\n \nimport java.util.Vector;\n \nclass Test\n{\n // To add an edge\n static void addEdge(Vector <Integer> adj[], int u,int v)\n {\n adj[u].addElement((v));\n }\n \n // find the sum of all dependencies\n static int findSum(Vector<Integer> adj[], int V)\n {\n int sum = 0;\n \n // just find the size at each vector's index\n for (int u = 0; u < V; u++)\n sum += adj[u].size();\n \n return sum;\n }\n \n // Driver method\n public static void main(String[] args) \n {\n int V = 4;\n @SuppressWarnings(\"unchecked\")\n Vector<Integer> adj[] = new Vector[V];\n \n for (int i = 0; i < adj.length; i++) {\n adj[i] = new Vector<>();\n }\n \n addEdge(adj, 0, 2);\n addEdge(adj, 0, 3);\n addEdge(adj, 1, 3);\n addEdge(adj, 2, 3);\n \n System.out.println(\"Sum of dependencies is \" +\n findSum(adj, V));\n }\n}\n// This code is contributed by Gaurav Miglani\n\n\n\n\n\n", "e": 2376, "s": 1285, "text": null }, { "code": "\n\n\n\n\n\n\n# Python3 program to find the sum \n# of dependencies\n \n# To add an edge\ndef addEdge(adj, u, v):\n \n adj[u].append(v)\n \n# Find the sum of all dependencies\ndef findSum(adj, V):\n \n sum = 0\n \n # Just find the size at each \n # vector's index\n for u in range(V):\n sum += len(adj[u])\n \n return sum\n \n# Driver code\nif __name__=='__main__':\n \n V = 4\n adj = [[] for i in range(V)]\n \n addEdge(adj, 0, 2)\n addEdge(adj, 0, 3)\n addEdge(adj, 1, 3)\n addEdge(adj, 2, 3)\n \n print(\"Sum of dependencies is\",\n findSum(adj, V))\n \n# This code is contributed by rutvik_56\n\n\n\n\n\n", "e": 3029, "s": 2386, "text": null }, { "code": "\n\n\n\n\n\n\n// C# program to find the sum of dependencies\nusing System;\nusing 
System.Collections;\n \nclass GFG{\n \n// To add an edge\nstatic void addEdge(ArrayList []adj, int u,\n int v)\n{\n adj[u].Add(v);\n}\n \n// Find the sum of all dependencies\nstatic int findSum(ArrayList []adj, int V)\n{\n int sum = 0;\n \n // Just find the size at each \n // vector's index\n for(int u = 0; u < V; u++)\n sum += adj[u].Count;\n \n return sum;\n}\n \n// Driver code\npublic static void Main(string[] args) \n{\n int V = 4;\n \n ArrayList []adj = new ArrayList[V];\n \n for(int i = 0; i < V; i++)\n {\n adj[i] = new ArrayList();\n }\n \n addEdge(adj, 0, 2);\n addEdge(adj, 0, 3);\n addEdge(adj, 1, 3);\n addEdge(adj, 2, 3);\n \n Console.Write(\"Sum of dependencies is \" +\n findSum(adj, V));\n}\n}\n \n// This code is contributed by pratham76\n\n\n\n\n\n", "e": 3969, "s": 3039, "text": null }, { "code": null, "e": 3994, "s": 3969, "text": "Sum of dependencies is 4" }, { "code": null, "e": 4056, "s": 3994, "text": "Time complexity: O(V) where V is number of vertices in graph." }, { "code": null, "e": 4360, "s": 4056, "text": "This article is contributed by Sahil Chhabra (akku). If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 4370, "s": 4360, "text": "rutvik_56" }, { "code": null, "e": 4380, "s": 4370, "text": "pratham76" }, { "code": null, "e": 4397, "s": 4380, "text": "hardikkoriintern" }, { "code": null, "e": 4408, "s": 4397, "text": "\nFlipkart\n" }, { "code": null, "e": 4416, "s": 4408, "text": "\nGraph\n" }, { "code": null, "e": 4425, "s": 4416, "text": "Flipkart" }, { "code": null, "e": 4431, "s": 4425, "text": "Graph" } ]
Python | Remove last character in list of strings
15 Mar, 2019

Sometimes we come across a situation in which we need to delete the last character from each string, perhaps one that was added by mistake, and we need to extend this to a whole list. This type of utility is common in web development, and having shorthands to perform this particular job is always a plus. Let's discuss certain ways in which this can be achieved.

Method #1: Using list comprehension + list slicing
This task can be performed by using the ability of list slicing to remove the last character, while the list comprehension extends that logic to the whole list.

# Python3 code to demonstrate
# remove last character from list of strings
# using list comprehension + list slicing

# initializing list
test_list = ['Manjeets', 'Akashs', 'Akshats', 'Nikhils']

# printing original list
print("The original list : " + str(test_list))

# using list comprehension + list slicing
# remove last character from list of strings
res = [sub[:-1] for sub in test_list]

# printing result
print("The list after removing last characters : " + str(res))

Output:

The original list : ['Manjeets', 'Akashs', 'Akshats', 'Nikhils']
The list after removing last characters : ['Manjeet', 'Akash', 'Akshat', 'Nikhil']

Method #2: Using map() + lambda
The map() function gets the functionality executed for all the members of the list, and the lambda function performs the removal of the last character using slicing.

# Python3 code to demonstrate
# remove last character from list of strings
# using map() + lambda

# initializing list
test_list = ['Manjeets', 'Akashs', 'Akshats', 'Nikhils']

# printing original list
print("The original list : " + str(test_list))

# using map() + lambda
# remove last character from list of strings
res = list(map(lambda i: i[:-1], test_list))

# printing result
print("The list after removing last characters : " + str(res))

Output:

The original list : ['Manjeets', 'Akashs', 'Akshats', 'Nikhils']
The list after removing last characters : ['Manjeet', 'Akash', 'Akshat', 'Nikhil']
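One edge case worth noting (an addition, not covered in the original article): slicing with [:-1] never raises an error, even on empty strings, so both methods above are safe if the list happens to contain empty strings. A quick sketch:

# [:-1] on an empty string simply yields another empty string
test_list = ['Manjeets', '', 'Akashs']
print([sub[:-1] for sub in test_list])  # ['Manjeet', '', 'Akash']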
Fix ERROR 1093 (HY000): You can't specify target table for update in FROM clause while deleting the lowest value from a MySQL column?
Let us first create a table −

mysql> create table DemoTable1597
   -> (
   -> Marks int
   -> );
Query OK, 0 rows affected (0.69 sec)

Insert some records in the table using the insert command −

mysql> insert into DemoTable1597 values(45);
Query OK, 1 row affected (0.21 sec)
mysql> insert into DemoTable1597 values(59);
Query OK, 1 row affected (0.24 sec)
mysql> insert into DemoTable1597 values(43);
Query OK, 1 row affected (0.11 sec)
mysql> insert into DemoTable1597 values(85);
Query OK, 1 row affected (0.17 sec)
mysql> insert into DemoTable1597 values(89);
Query OK, 1 row affected (0.12 sec)

Display all records from the table using the select statement −

mysql> select * from DemoTable1597;

This will produce the following output −

+-------+
| Marks |
+-------+
|    45 |
|    59 |
|    43 |
|    85 |
|    89 |
+-------+
5 rows in set (0.00 sec)

Here is the query that avoids ERROR 1093 (HY000) while deleting the lowest value. MySQL raises this error when a DELETE (or UPDATE) references its own target table in a subquery; wrapping the subquery in a derived table (aliased as deleteRecord below) makes MySQL materialize the result first, which sidesteps the restriction −

mysql> delete from DemoTable1597
   -> where Marks=(
   -> select lowestMarks from ( select min(Marks) as lowestMarks from DemoTable1597 ) as deleteRecord
   -> ) limit 1;
Query OK, 1 row affected (0.11 sec)

Let us check the table records once again −

mysql> select * from DemoTable1597;

This will produce the following output −

+-------+
| Marks |
+-------+
|    45 |
|    59 |
|    85 |
|    89 |
+-------+
4 rows in set (0.00 sec)
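As a side note (a sketch, not part of the original answer): MySQL also accepts ORDER BY and LIMIT directly on a single-table DELETE, which removes the lowest value without any self-referencing subquery at all. Driven from Python it could look like this; the connection details below are hypothetical placeholders:

import mysql.connector

# Hypothetical connection details - adjust for your own server
conn = mysql.connector.connect(host='localhost', user='root',
                               password='password', database='web')
cur = conn.cursor()

# ORDER BY + LIMIT on a single-table DELETE avoids ERROR 1093 entirely
cur.execute("DELETE FROM DemoTable1597 ORDER BY Marks ASC LIMIT 1")
conn.commit()
conn.close()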
Minimum Cost Path | Practice | GeeksforGeeks
Given a square grid of size N, each cell of which contains an integer cost that represents the cost to traverse through that cell, we need to find a path from the top-left cell to the bottom-right cell for which the total cost incurred is minimum. From the cell (i,j) we can go to (i,j-1), (i,j+1), (i-1,j), (i+1,j).

Note: It is assumed that negative cost cycles do not exist in the input matrix.

Example 1:

Input: grid = {{9,4,9,9},{6,7,6,4},
{8,3,3,7},{7,4,9,10}}
Output: 43
Explanation: The grid is-
9 4 9 9
6 7 6 4
8 3 3 7
7 4 9 10
The minimum cost is-
9 + 4 + 7 + 3 + 3 + 7 + 10 = 43.

Example 2:

Input: grid = {{4,4},{3,7}}
Output: 14
Explanation: The grid is-
4 4
3 7
The minimum cost is- 4 + 3 + 7 = 14.

Your Task:
You don't need to read or print anything. Your task is to complete the function minimumCostPath() which takes grid as input parameter and returns the minimum cost to reach the bottom-right cell from the top-left cell.

Expected Time Complexity: O(n^2 * log(n))
Expected Auxiliary Space: O(n^2)

Constraints:
1 ≤ n ≤ 500
1 ≤ cost of cells ≤ 1000

abhishekpanwar697 (2 days ago, +0):

int minimumCostPath(vector<vector<int>>& a) {
    int n = a.size(), m = a[0].size();
    vector<vector<int>> dist(n + 1, vector<int>(m + 1, INT_MAX));
    priority_queue<pair<int, pair<int, int>>,
                   vector<pair<int, pair<int, int>>>,
                   greater<pair<int, pair<int, int>>>> pq;
    dist[0][0] = a[0][0];
    pq.push({dist[0][0], {0, 0}});
    int x[4] = {1, -1, 0, 0};
    int y[4] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto t = pq.top();
        pq.pop();
        int w = t.first;
        int i = t.second.first;
        int j = t.second.second;
        if (i == n - 1 && j == m - 1)
            break;
        for (int k = 0; k < 4; k++) {
            int u = i + x[k];
            int v = j + y[k];
            if (u >= 0 && u < n && v >= 0 && v < m && dist[u][v] > dist[i][j] + a[u][v]) {
                dist[u][v] = dist[i][j] + a[u][v];
                pq.push({dist[u][v], {u, v}});
            }
        }
    }
    return dist[n - 1][m - 1];
}

dusankovacevic (2 weeks ago, +0):

For some reason it gave me this code when looking up greedy tasks.

bhaskarmaheshwari8 (3 weeks ago, +0):

// https://www.youtube.com/watch?v=jbhuqIASjoM

bool isSafe(int row, int col, int n) {
    if (row < 0 || col < 0 || row > n - 1 || col > n - 1)
        return 0;
    return 1;
}

int minimumCostPath(vector<vector<int>>& grid) {
    // Code here
    int n = grid.size();
    priority_queue<pair<int, pair<int, int>>,
                   vector<pair<int, pair<int, int>>>,
                   greater<pair<int, pair<int, int>>>> q;
    vector<vector<int>> dist(n, vector<int>(n, INT_MAX));
    q.push({grid[0][0], {0, 0}});
    dist[0][0] = grid[0][0];
    int dx[] = {-1, 1, 0, 0};
    int dy[] = {0, 0, -1, 1};
    while (!q.empty()) {
        int distance = q.top().first;
        int row = q.top().second.first;
        int col = q.top().second.second;
        q.pop();
        for (int i = 0; i < 4; i++) {
            if (isSafe(row + dx[i], col + dy[i], n)) {
                if (dist[row + dx[i]][col + dy[i]] > distance + grid[row + dx[i]][col + dy[i]]) {
                    dist[row + dx[i]][col + dy[i]] = distance + grid[row + dx[i]][col + dy[i]];
                    q.push({dist[row + dx[i]][col + dy[i]], {row + dx[i], col + dy[i]}});
                }
            }
        }
    }
    return dist[n - 1][n - 1];
}

akashmr1096 (1 month ago, +0):

// Function to return the minimum cost to reach the bottom-right
// cell from the top-left cell.
public int minimumCostPath(int[][] grid)
{
    // Code here
    int n = grid.length;
    int dist[][] = new int[n][n];
    for (int i = 0; i < n; i++)
        Arrays.fill(dist[i], Integer.MAX_VALUE);

    boolean visited[][] = new boolean[n][n];
    PriorityQueue<Node> q = new PriorityQueue<>();
    dist[0][0] = grid[0][0];
    q.offer(new Node(0, 0, dist[0][0]));
    while (!q.isEmpty()) {
        Node next = q.poll();
        //System.out.println(String.format("i: %d j: %d dist: %d", next.i, next.j, next.dist));
        if (next.i == n - 1 && next.j == n - 1) {
            return next.dist;
        } else {
            bfs(next, n, visited, dist, grid, q);
        }
    }
    return dist[n - 1][n - 1];
}

private void bfs(Node next, int n, boolean visited[][], int dist[][], int grid[][], PriorityQueue<Node> q) {
    int i = next.i;
    int j = next.j;
    if (visited[i][j]) {
        return;
    }

    visited[i][j] = true;

    if (j > 0) {
        dist[i][j - 1] = Math.min(dist[i][j - 1], next.dist + grid[i][j - 1]);
        q.offer(new Node(i, j - 1, dist[i][j - 1]));
    }

    if (j < n - 1) {
        dist[i][j + 1] = Math.min(dist[i][j + 1], next.dist + grid[i][j + 1]);
        q.offer(new Node(i, j + 1, dist[i][j + 1]));
    }

    if (i > 0) {
        dist[i - 1][j] = Math.min(dist[i - 1][j], next.dist + grid[i - 1][j]);
        q.offer(new Node(i - 1, j, dist[i - 1][j]));
    }

    if (i < n - 1) {
        dist[i + 1][j] = Math.min(dist[i + 1][j], next.dist + grid[i + 1][j]);
        q.offer(new Node(i + 1, j, dist[i + 1][j]));
    }
}

static class Node implements Comparable<Node> {
    int i;
    int j;
    int dist;

    public Node(int i, int j, int dist) {
        this.i = i;
        this.j = j;
        this.dist = dist;
    }

    @Override
    public int compareTo(Node n) {
        return this.dist - n.dist;
    }
}

vinamrajha (2 months ago, +3):

bool isvalid(int i, int j, int row, int col) {
    if (i < 0 || j < 0 || i >= row || j >= col)
        return false;
    return true;
}

int minimumCostPath(vector<vector<int>>& grid)
{
    // Code here
    int row = grid.size(), col = grid[0].size(), fcost = 0;
    vector<vector<int>> dist(row, vector<int>(col, INT_MAX));
    dist[0][0] = grid[0][0];
    priority_queue<pair<int, pair<int, int>>,
                   vector<pair<int, pair<int, int>>>,
                   greater<pair<int, pair<int, int>>>> pq;
    int dx[4] = {1, -1, 0, 0};
    int dy[4] = {0, 0, 1, -1};
    pq.push({dist[0][0], {0, 0}});
    while (!pq.empty()) {
        auto p = pq.top();
        pq.pop();
        int cost = p.first;
        auto q = p.second;
        int i = q.first;
        int j = q.second;
        fcost += cost;
        if (i == row - 1 && j == col - 1)
            break;
        for (int k = 0; k < 4; ++k) {
            int xdx = i + dx[k];
            int ydy = j + dy[k];
            if (isvalid(xdx, ydy, row, col)) {
                if (dist[xdx][ydy] >= dist[i][j] + grid[xdx][ydy]) {
                    dist[xdx][ydy] = dist[i][j] + grid[xdx][ydy];
                    pq.push({dist[xdx][ydy], {xdx, ydy}});
                }
            }
        }
    }
    return dist[row - 1][col - 1];
}

madhukartemba (2 months ago, +1):

JAVA SOLUTION USING PRIORITY QUEUE:

class Pair implements Comparable<Pair>
{
    int x, y, cost;

    Pair(int x, int y, int cost)
    {
        this.x = x;
        this.y = y;
        this.cost = cost;
    }

    public int compareTo(Pair p)
    {
        return cost - p.cost;
    }
}

class Solution
{
    private boolean inLimits(int x, int y, int n)
    {
        return (x >= 0 && y >= 0 && x < n && y < n);
    }

    // Function to return the minimum cost to reach the bottom-right
    // cell from the top-left cell.
    public int minimumCostPath(int[][] grid)
    {
        int n = grid.length;

        PriorityQueue<Pair> pq = new PriorityQueue<>();
        pq.add(new Pair(0, 0, grid[0][0]));

        boolean visited[][] = new boolean[n][n];

        int dx[] = {1, 0, -1, 0};
        int dy[] = {0, 1, 0, -1};

        while (!pq.isEmpty())
        {
            Pair p = pq.poll();
            visited[p.x][p.y] = true;

            for (int i = 0; i < 4; i++)
            {
                int x = p.x + dx[i];
                int y = p.y + dy[i];

                if (inLimits(x, y, n) && visited[x][y] == false)
                {
                    if (x == n - 1 && y == n - 1)
                        return p.cost + grid[x][y];
                    visited[x][y] = true;
                    pq.add(new Pair(x, y, p.cost + grid[x][y]));
                }
            }
        }

        return -1;
    }
}

avenvy (2 months ago, +0):

Python implementation using Dijkstra's algorithm with a heap for the 2D array (submitted solution):

import heapq

class Solution:
    def minimumCostPath(self, grid):
        # Code here
        m = len(grid)
        n = len(grid[0])
        i = 0
        j = 0
        dist = [[float("inf")] * n for i in range(m)]
        dist[i][j] = grid[i][j]
        pq = []
        dx = [0, 0, 1, -1]
        dy = [1, -1, 0, 0]
        heapq.heappush(pq, (grid[0][0], (0, 0)))
        while pq:
            wt, nei = heapq.heappop(pq)
            u = nei[0]
            v = nei[1]
            for k in range(4):
                nu = u + dx[k]
                nv = v + dy[k]
                if 0 <= nu < n and 0 <= nv < m and wt + grid[nu][nv] < dist[nu][nv]:
                    nwt = wt + grid[nu][nv]
                    dist[nu][nv] = nwt
                    heapq.heappush(pq, (nwt, (nu, nv)))
        return dist[n - 1][m - 1]

shouryarastogi (2 months ago, +1):

Can someone tell why it's giving TLE after 11 test cases?

class Solution {
    // Function to return the minimum cost to reach the bottom-right
    // cell from the top-left cell.
    static class Node implements Comparable<Node> {
        String key;
        int distance;

        public Node(String key, int distance) {
            this.key = key;
            this.distance = distance;
        }

        @Override
        public int compareTo(Node n) {
            return this.distance - n.distance;
        }

        @Override
        public boolean equals(Object o) {
            Node n = (Node) o;
            return n.key.equals(this.key);
        }

        @Override
        public int hashCode() {
            return Objects.hash(key);
        }
    }

    public static int minimumCostPath(int[][] grid) {
        int rowMax = grid.length;
        int colMax = grid[0].length;

        Map<String, List<Node>> connectionMap = new HashMap<>();
        Map<String, Integer> distanceMap = new HashMap<>();
        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (int i = 0; i < rowMax; i++) {
            for (int j = 0; j < colMax; j++) {
                String key = i + "_" + j;
                if (i == 0 && j == 0) {
                    distanceMap.put(key, 0);
                    pq.add(new Node(key, 0));
                } else {
                    distanceMap.put(key, 100000);
                    pq.add(new Node(key, 100000));
                }
                List<Node> connections = new ArrayList<>();
                if (i - 1 >= 0) {
                    connections.add(new Node((i - 1) + "_" + j, grid[i - 1][j]));
                }
                if (i + 1 < rowMax) {
                    connections.add(new Node((i + 1) + "_" + j, grid[i + 1][j]));
                }
                if (j - 1 >= 0) {
                    connections.add(new Node(i + "_" + (j - 1), grid[i][j - 1]));
                }
                if (j + 1 < colMax) {
                    connections.add(new Node(i + "_" + (j + 1), grid[i][j + 1]));
                }
                connectionMap.put(key, connections);
            }
        }
        Map<String, String> visited = new HashMap<>();
        while (!pq.isEmpty()) {
            Node top = pq.poll();
            if (visited.containsKey(top.key)) {
                continue;
            }
            visited.put(top.key, "");
            List<Node> connections = connectionMap.get(top.key);
            for (Node each : connections) {
                Integer existingDistance = distanceMap.get(each.key);
                Integer newDistance = top.distance + each.distance;
                Integer minDistance = Math.min(existingDistance, newDistance);
                distanceMap.put(each.key, minDistance);
                pq.remove(new Node(each.key, 1));
                pq.add(new Node(each.key, minDistance));
            }
        }
        return distanceMap.get((rowMax - 1) + "_" + (colMax - 1)) + grid[0][0];
    }
}

bhaskarmaheshwari8 (3 months ago, +0):

Why can't this question be done using backtracking?? Can anyone tell what's wrong in this, as the public test cases are passing but on submitting it says wrong answer??
int visited[1005][1005];
int dp[1005][1005];

Solution() {
    memset(visited, 0, sizeof(visited));
    memset(dp, -1, sizeof(dp));
}

int helper(vector<vector<int>>& grid, int n, int row, int col) {
    if (row == n - 1 && col == n - 1)
        return grid[row][col];
    if (dp[row][col] != -1)
        return dp[row][col];
    int dx[] = {0, 0, -1, 1};
    int dy[] = {-1, 1, 0, 0};
    int ans = INT_MAX / 4;
    for (int i = 0; i < 4; i++) {
        if (row + dx[i] >= 0 && row + dx[i] < n && col + dy[i] >= 0 && col + dy[i] < n && !visited[row + dx[i]][col + dy[i]]) {
            visited[row][col] = 1;
            cout << grid[row][col] << endl;
            ans = min(ans, helper(grid, n, row + dx[i], col + dy[i]));
            visited[row][col] = 0;
        }
    }
    //cout << endl;
    ans += grid[row][col];
    cout << endl;
    return dp[row][col] = ans;
}

int minimumCostPath(vector<vector<int>>& grid) {
    // Code here
    int n = grid.size();
    int row = 0, col = 0;
    return helper(grid, n, row, col);
}

apurvpandey17 (3 months ago, +0):

int minimumCostPath(vector<vector<int>>& grid) {
    int n = grid.size();
    vector<vector<int>> cost(n, vector<int>(n, INT_MAX - 1));
    cost[0][0] = grid[0][0];
    vector<pair<int, int>> dir = {{0, 1}, {0, -1}, {1, 0}, {0, -1}};
    priority_queue<p, vector<p>, greater<p>> pq;
    pq.push({0, {0, 0}});
    int dist, x, y, px, py, c;
    while (!pq.empty()) {
        x = pq.top().second.first;
        y = pq.top().second.second;
        pq.pop();
        for (auto it : dir) {
            px = x + it.first;
            py = y + it.second;
            if (px < n && py < n && px >= 0 && py >= 0)
                if (cost[px][py] > cost[x][y] + grid[px][py]) {
                    cost[px][py] = min(cost[x][y] + grid[px][py], cost[px][py]);
                    pq.push({cost[px][py], {px, py}});
                }
        }
    }
    return cost[n - 1][n - 1];
}

Can anyone tell me what is wrong with this code? Thanks
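For readers who want a compact reference, here is a minimal Dijkstra-style sketch in Python that fits the expected O(n^2 * log(n)) bound. Treat it as an illustrative sketch (the function name minimum_cost_path is mine), not the official editorial:

import heapq

def minimum_cost_path(grid):
    # Dijkstra over the grid: each cell is a node, and moving into a
    # cell costs that cell's value; the start cost is grid[0][0].
    n = len(grid)
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    dist[0][0] = grid[0][0]
    pq = [(grid[0][0], 0, 0)]
    while pq:
        d, i, j = heapq.heappop(pq)
        if (i, j) == (n - 1, n - 1):
            return d
        if d > dist[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = i + di, j + dj
            if 0 <= u < n and 0 <= v < n and d + grid[u][v] < dist[u][v]:
                dist[u][v] = d + grid[u][v]
                heapq.heappush(pq, (dist[u][v], u, v))
    return dist[n - 1][n - 1]

print(minimum_cost_path([[9, 4, 9, 9], [6, 7, 6, 4],
                         [8, 3, 3, 7], [7, 4, 9, 10]]))  # 43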
Automate Microsoft Excel and Word Using Python | by M Khorasani | Towards Data Science
Microsoft Excel and Word are without a shred of doubt the two most widely used pieces of software in the corporate and non-corporate world. They are practically synonymous with the term 'work' itself. Oftentimes, not a week goes by without us firing up the combination of the two and one way or another putting their goodness to use. While automation is hardly called for in average daily use, there are times when it can be a necessity. Namely, when you have a multitude of charts, figures, tables, and reports to generate, the task can become exceedingly tedious if you choose the manual route. It doesn't have to be that way. There is in fact a way to create a pipeline in Python where you can seamlessly integrate the two, produce spreadsheets in Excel, and then transfer the results to Word to generate a report virtually instantaneously.

Meet Openpyxl, arguably one of the most versatile bindings in Python, and one that makes interfacing with Excel quite literally a stroll in the park. Armed with it you can read and write all current and legacy Excel formats, i.e. xlsx and xls. Openpyxl allows you to populate rows and columns, execute formulae, create 2D and 3D charts, label axes and titles, and offers a plethora of other abilities that can come in handy. Most importantly, this package enables you to iterate over an endless number of rows and columns in Excel, thereby saving you from all that pesky number crunching and plotting you had to do previously.

And then comes along Python-docx: this package is to Word what Openpyxl is to Excel. If you haven't already studied their documentation, then you should probably take a look. Python-docx is without exaggeration one of the simplest and most self-explanatory toolkits I have worked with ever since I started working with Python itself. It allows you to automate document generation by inserting text, filling in tables, and rendering images into your report automatically without any overhead whatsoever.

Without further ado let's create our very own automated pipeline. Go ahead and fire up Anaconda (or any other IDE of your choice) and install the following packages:

pip install openpyxl
pip install python-docx

Initially, we'll load an Excel workbook that has already been created (shown below):

# imports used throughout the Excel part of the pipeline
import openpyxl as xl
from openpyxl.chart import LineChart, Reference

workbook = xl.load_workbook('Book1.xlsx')
sheet_1 = workbook['Sheet1']

Subsequently, we'll iterate over all of the rows in our spreadsheet to compute and insert the values for power by multiplying current by voltage:

for row in range(2, sheet_1.max_row + 1):
    current = sheet_1.cell(row, 2)
    voltage = sheet_1.cell(row, 3)
    power = float(current.value) * float(voltage.value)
    power_cell = sheet_1.cell(row, 1)
    power_cell.value = power

Once that is done, we will use the calculated values for power to generate a line chart that will be inserted into the specified cell as shown below:

values = Reference(sheet_1, min_row = 2, max_row = sheet_1.max_row, min_col = 1, max_col = 1)
chart = LineChart()
chart.y_axis.title = 'Power'
chart.x_axis.title = 'Index'
chart.add_data(values)
sheet_1.add_chart(chart, 'e2')

workbook.save('Book1.xlsx')

Now that we have generated our chart, we need to extract it as an image so that we can use it in our Word report.
First, we'll declare the exact location of our Excel file and also where the output chart image should be saved:

input_file = "C:/Users/.../Book1.xlsx"
output_image = "C:/Users/.../chart.png"

Then access the spreadsheet using the following method:

import win32com.client  # from the pywin32 package
from PIL import ImageGrab  # used to grab the copied chart below

operation = win32com.client.Dispatch("Excel.Application")
operation.Visible = 0
operation.DisplayAlerts = 0
workbook_2 = operation.Workbooks.Open(input_file)
sheet_2 = operation.Sheets(1)

Subsequently, you can iterate over all of the chart objects in the spreadsheet (if there is more than one) and save them in the specified location as such:

for x, chart in enumerate(sheet_2.Shapes):
    chart.Copy()
    image = ImageGrab.grabclipboard()
    image.save(output_image, 'png')
    pass
workbook_2.Close(True)
operation.Quit()

Now that we have our chart image generated, we must create a template document, which is basically a normal Microsoft Word document (.docx) formulated exactly in the way we want our report to look, including typefaces, font sizes, formatting, and page structure. Then all we need to do is create placeholders for our automated content, i.e. table values and images, and declare them with variable names as shown below.

Any automated content can be declared inside a pair of double curly brackets {{variable_name}}, including text and images. For tables, you need to create a table with a template row with all the columns included, and then you need to append one row above and one row below with the following notation:

First row:

{%tr for item in variable_name %}

Last row:

{%tr endfor %}

In the figure above the variable names are

table_contents for the list of dictionaries that will store our tabular data

Index for the dictionary keys (first column)

Power, Current, and Voltage for the dictionary values (second, third and fourth columns)

Then we import our template document into Python and create the list of dictionaries that will store our table's values:

from docxtpl import DocxTemplate, InlineImage  # python-docx-template, built on top of python-docx
from docx.shared import Cm
import datetime

template = DocxTemplate('template.docx')
table_contents = []
for i in range(2, sheet_1.max_row + 1):
    table_contents.append({
        'Index': i-1,
        'Power': sheet_1.cell(i, 1).value,
        'Current': sheet_1.cell(i, 2).value,
        'Voltage': sheet_1.cell(i, 3).value
        })

Next we'll import the chart image that was previously produced by Excel and will create a context dictionary to instantiate all of the placeholder variables declared in the template document:

image = InlineImage(template, 'chart.png', Cm(10))
context = {
    'title': 'Automated Report',
    'day': datetime.datetime.now().strftime('%d'),
    'month': datetime.datetime.now().strftime('%b'),
    'year': datetime.datetime.now().strftime('%Y'),
    'table_contents': table_contents,
    'image': image
    }

And finally, we'll render the report with our table of values and chart image:

template.render(context)
template.save('Automated_report.docx')

And there you go, an automatically generated Microsoft Word report with numbers and a chart created in Microsoft Excel. And with that, you have a fully automated pipeline that can be used to create as many tables, charts, and documents as you could possibly ever need.
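For convenience, the pieces above can also be stitched into a single script. The following is a minimal sketch rather than the author's original code: it assumes the same file names used throughout (Book1.xlsx, template.docx, chart.png) and that the chart image has already been exported by the win32com step described earlier.

import datetime
import openpyxl as xl
from docx.shared import Cm
from docxtpl import DocxTemplate, InlineImage

# Step 1: compute the Power column in the workbook
workbook = xl.load_workbook('Book1.xlsx')
sheet_1 = workbook['Sheet1']
for row in range(2, sheet_1.max_row + 1):
    # column 2 holds current, column 3 holds voltage, column 1 receives power
    current = float(sheet_1.cell(row, 2).value)
    voltage = float(sheet_1.cell(row, 3).value)
    sheet_1.cell(row, 1).value = current * voltage
workbook.save('Book1.xlsx')

# Step 2: collect the table rows for the Word template
table_contents = [{'Index': i - 1,
                   'Power': sheet_1.cell(i, 1).value,
                   'Current': sheet_1.cell(i, 2).value,
                   'Voltage': sheet_1.cell(i, 3).value}
                  for i in range(2, sheet_1.max_row + 1)]

# Step 3: render the report, assuming chart.png was exported beforehand
template = DocxTemplate('template.docx')
now = datetime.datetime.now()
context = {'title': 'Automated Report',
           'day': now.strftime('%d'),
           'month': now.strftime('%b'),
           'year': now.strftime('%Y'),
           'table_contents': table_contents,
           'image': InlineImage(template, 'chart.png', Cm(10))}
template.render(context)
template.save('Automated_report.docx')

Running this end to end regenerates Automated_report.docx whenever Book1.xlsx changes.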
[ { "code": null, "e": 1045, "s": 172, "text": "Microsoft Excel and Word are without a shred of doubt the two most abundantly used software in the corporate and non-corporate world. They are practically synonymous with the term ‘work’ itself. Oftentimes, not a week goes by without us firing up the combination of the two and one way or another putting their goodness to use. While for the average daily purpose automation would not be solicited, there are times when automation can be a necessity. Namely, when you have a multitude of charts, figures, tables, and reports to generate, it can become an exceedingly tedious undertaking if you choose the manual route. Well, it doesn’t have to be that way. There is in fact a way to create a pipeline in Python where you can seamlessly integrate the two to produce spreadsheets in Excel and then transfer the results to Word to generate a report virtually instantaneously." }, { "code": null, "e": 1669, "s": 1045, "text": "Meet Openpyxl, arguably one of the most versatile bindings in Python that makes interfacing with Excel quite literally a stroll in the park. Armed with it you can read and write all current and legacy excel formats i.e. xlsx and xls. Openpyxl allows you to populate rows and columns, execute formulae, create 2D and 3D charts, label axes and titles, and a plethora of other abilities that can come in handy. Most importantly however, this package enables you to iterate over an endless numbers of rows and columns in Excel, thereby saving you from all that pesky number crunching and plotting that you had to do previously." }, { "code": null, "e": 2170, "s": 1669, "text": "And then comes along Python-docx—this package is to Word what Openpyxl is to Excel. If you haven’t already studied their documentation, then you should probably take a look. Python-docx is without exaggeration one of the simplest and most self-explanatory toolkits I have worked with ever since I started working with Python itself. It allows you to automate document generation by inserting text, filling in tables and rendering images into your report automatically without any overhead whatsoever." }, { "code": null, "e": 2336, "s": 2170, "text": "Without further ado let’s create our very own automated pipeline. 
Go ahead and fire up Anaconda (or any other IDE of your choice) and install the following packages:" }, { "code": null, "e": 2380, "s": 2336, "text": "pip install openpyxlpip install python-docx" }, { "code": null, "e": 2465, "s": 2380, "text": "Initially, we’ll load an Excel workbook that has already been created (shown below):" }, { "code": null, "e": 2535, "s": 2465, "text": "workbook = xl.load_workbook('Book1.xlsx')sheet_1 = workbook['Sheet1']" }, { "code": null, "e": 2681, "s": 2535, "text": "Subsequently, we’ll iterate over all of the rows in our spreadsheet to compute and insert the values for power by multiplying current by voltage:" }, { "code": null, "e": 2911, "s": 2681, "text": "for row in range(2, sheet_1.max_row + 1): current = sheet_1.cell(row, 2) voltage = sheet_1.cell(row, 3) power = float(current.value) * float(voltage.value) power_cell = sheet_1.cell(row, 1) power_cell.value = power" }, { "code": null, "e": 3061, "s": 2911, "text": "Once that is done, we will use the calculated values for power to generate a line chart that will be inserted into the specified cell as shown below:" }, { "code": null, "e": 3310, "s": 3061, "text": "values = Reference(sheet_1, min_row = 2, max_row = sheet_1.max_row, min_col = 1, max_col = 1)chart = LineChart()chart.y_axis.title = 'Power'chart.x_axis.title = 'Index'chart.add_data(values)sheet_1.add_chart(chart, 'e2') workbook.save('Book1.xlsx')" }, { "code": null, "e": 3537, "s": 3310, "text": "Now that we have generated our chart, we need to extract it as an image so that we can use it in our Word report. First, we’ll declare the exact location of our Excel file and also where the output chart image should be saved:" }, { "code": null, "e": 3615, "s": 3537, "text": "input_file = \"C:/Users/.../Book1.xlsx\"output_image = \"C:/Users/.../chart.png\"" }, { "code": null, "e": 3671, "s": 3615, "text": "Then access the spreadsheet using the following method:" }, { "code": null, "e": 3855, "s": 3671, "text": "operation = win32com.client.Dispatch(\"Excel.Application\")operation.Visible = 0operation.DisplayAlerts = 0workbook_2 = operation.Workbooks.Open(input_file)sheet_2 = operation.Sheets(1)" }, { "code": null, "e": 4012, "s": 3855, "text": "Subsequently, you can iterate over all of the chart objects in the spreadsheet (if there are more than one) and save them in the specified location as such:" }, { "code": null, "e": 4189, "s": 4012, "text": "for x, chart in enumerate(sheet_2.Shapes): chart.Copy() image = ImageGrab.grabclipboard() image.save(output_image, 'png') passworkbook_2.Close(True)operation.Quit()" }, { "code": null, "e": 4607, "s": 4189, "text": "Now that we have our chart image generated, we must create a template document that is basically a normal Microsoft Word Document (.docx) formulated exactly in the way we want our report to look, including typefaces, font sizes, formatting, and page structure. Then all we need to do is to create placeholders for our automated content i.e. table values and images and declare them with variable names as shown below." }, { "code": null, "e": 4909, "s": 4607, "text": "Any automated content can be declared inside a pair of double curly brackets {{variable_name}}, including text and images. 
For tables, you need to create a table with a template row with all the columns included, and then you need to append one row above and one row below with the following notation:" }, { "code": null, "e": 4920, "s": 4909, "text": "First row:" }, { "code": null, "e": 4954, "s": 4920, "text": "{%tr for item in variable_name %}" }, { "code": null, "e": 4964, "s": 4954, "text": "Last row:" }, { "code": null, "e": 4979, "s": 4964, "text": "{%tr endfor %}" }, { "code": null, "e": 5022, "s": 4979, "text": "In the figure above the variable names are" }, { "code": null, "e": 5096, "s": 5022, "text": "table_contents for the Python dictionary that will store our tabular data" }, { "code": null, "e": 5141, "s": 5096, "text": "Index for the dictionary keys (first column)" }, { "code": null, "e": 5230, "s": 5141, "text": "Power, Current, and Voltage for the dictionary values (second, third and fourth columns)" }, { "code": null, "e": 5339, "s": 5230, "text": "Then we import our template document into Python and create a dictionary that will store our table’s values:" }, { "code": null, "e": 5625, "s": 5339, "text": "template = DocxTemplate('template.docx')table_contents = []for i in range(2, sheet_1.max_row + 1): table_contents.append({ 'Index': i-1, 'Power': sheet_1.cell(i, 1).value, 'Current': sheet_1.cell(i, 2).value, 'Voltage': sheet_1.cell(i, 3).value })" }, { "code": null, "e": 5815, "s": 5625, "text": "Next we‘ll import the chart image that was previously produced by Excel and will create another dictionary to instantiate all of the placeholder variables declared in the template document:" }, { "code": null, "e": 6120, "s": 5815, "text": "image = InlineImage(template,'chart.png',Cm(10))context = { 'title': 'Automated Report', 'day': datetime.datetime.now().strftime('%d'), 'month': datetime.datetime.now().strftime('%b'), 'year': datetime.datetime.now().strftime('%Y'), 'table_contents': table_contents, 'image': image }" }, { "code": null, "e": 6199, "s": 6120, "text": "And finally, we’ll render the report with our table of values and chart image:" }, { "code": null, "e": 6262, "s": 6199, "text": "template.render(context)template.save('Automated_report.docx')" }, { "code": null, "e": 6531, "s": 6262, "text": "And there you go, an automatically generated Microsoft Word report with numbers and a chart created in Microsoft Excel. And with that, you have a fully automated pipeline that can be used to create as many tables, charts, and documents as you could possibly ever need." } ]
Modelling the coronavirus epidemic in a city with Python | by Gevorg Yeghikyan | Towards Data Science
You can learn the entire modelling, simulation and spatial visualization of the Covid-19 epidemic spreading in a city using just Python in this online course or in this one.

The recent 2019-nCoV Wuhan coronavirus outbreak in China has sent shocks through financial markets and entire economies, and has duly triggered panic among the general population around the world. On 30 January 2020, 2019-nCoV was even designated a global health emergency by the World Health Organization (WHO). At the time of this writing, no specific treatment verified by medical research standards has yet been discovered. Moreover, some key epidemiological metrics, such as the basic reproduction number (the average number of people infected by an ill individual), are still unknown. In our times of unprecedented global connectedness and mobility, such epidemics are a major threat on a global scale due to small-world network effects. One could conjecture that, conditional on a global catastrophic event (loosely defined as > 100 mln casualties) happening in 2020, the most likely cause would be precisely some pandemic, not a nuclear disaster or a climate catastrophe. This is further aggravated by worldwide rapid urbanisation, with our densely populated dynamic cities turning into propagation nodes in the disease diffusion network, thus becoming extremely vulnerable and fragile.

In this post, we will discuss what can happen when an epidemic strikes a city, what measures should immediately be taken, and what implications this has for urban planning, policy making, and management. We will take the city of Yerevan as our case study and will mathematically model and simulate in Python the spread of the coronavirus in the city, looking at how urban mobility patterns affect the spread of the disease.

Effective, efficient, and sustainable urban mobility is of crucial importance for the functioning of modern cities. It has been shown to directly affect the livability and economic output (GDP) of cities. However, in the event of an epidemic, it will add fuel to the fire, amplifying and propagating the disease spread.

So let's begin by looking at the network of aggregated origin-destination (OD) flows on a uniform Cartesian grid in Yerevan to get an idea about the spatial structure of mobility patterns in the city:

Further, if we look at the total inflow to the grid cells, we see a more or less monocentric spatial organisation with some cells with high daily inflow located off the center:

Now, imagine that an epidemic breaks out at a random location in the city. How will it spread? What can be done to contain it?

To answer these questions, we will build a simple compartmental model to simulate the spread of infectious disease in the city. As an epidemic breaks out, its transmission dynamics vary significantly, depending on the geographical location of the initial infection and its connectivity with the rest of the city. This is one of the most important insights gained from recent, data-driven studies on epidemics in urban populations. However, as we will see further below, the various outcomes call for similar measures to contain the epidemic and to account for such a possibility in planning and managing cities.
Since running individual-based epidemic models is challenging, and since our goal is to show general principles of epidemic spread in cities, and not to build a minutely calibrated and accurate epidemic model, we will follow the approach described in this Nature article, modifying the described classical SIR model for our needs.

The model divides the population into three compartments. For each location i at time t, the three compartments are as follows:

S_{i,t}: the number of individuals not yet infected or susceptible to the disease.

I_{i,t}: the number of individuals infected with the disease and capable of spreading the disease to those in the susceptible group.

R_{i,t}: the number of individuals who have been infected and then removed from the infected group, either due to recovery or due to death. Individuals in this group are not capable of contracting the disease again or transmitting the infection to others.

In our simulations, time will be a discrete variable, as the state of the system is modelled on a daily basis. In a fully susceptible population at location j at time t, an outbreak happens with probability:

where β_t is the transmission rate on day t; m_{j,k} reflects mobility from location k to location j; x_{k,t} and y_{j,t} denote the fraction of the infected and susceptible populations on day t at location k and location j, respectively, given by x_{k,t} = I_{k,t} / N_k and y_{j,t} = S_{j,t} / N_j, where N_k and N_j are the population sizes at the locations k and j. Then we go ahead and simulate a stochastic process introducing the disease into locations with entirely susceptible populations, with I_{j,t+1} being a Bernoulli random variable with probability h(t, j).

Once the infections are introduced at random locations, the disease spreads both within those locations and is carried and transmitted to other locations by travelling individuals. This is where the urban mobility patterns characterised by the OD flow matrix play a crucial role.

Further, to formalise how the disease is transmitted by an infected person, we need the basic reproduction number, R_0. It is defined as R_0 = β_t / γ, where γ is the recovery rate, and can be thought of as the expected number of secondary infections after an infected individual comes into contact with a susceptible population. At the time of this writing, the basic reproduction number for the Wuhan coronavirus has been estimated to be between 1.4 and 4. Let's take the worst case and assume it's 4. However, we should note that it's actually a random variable and the reported number is but the expected number. To make things a bit more interesting, we will run our simulations with a different R_0 at each location, drawn from a good candidate distribution, Gamma, with mean 4:

We can now proceed to the model dynamics:

where β_{k,t} is the (random) transmission rate at location k on day t, and α is a coefficient denoting the modal share or the intensity of public transport vs. private car travel modes in the city.

The model dynamics described in the above equations are very simple: on day t+1 at location j, we need to subtract from the susceptible population S_{j,t} the fraction of people infected within location j (the second term in the first equation) as well as the fraction of infected people that have arrived from other locations in the city, weighted by their respective transmission rates β_{k,t} (the third term in the first equation).
Since the total population N_j = S_j + I_j + R_j, we need to move the subtracted portion to the infected group, while also moving those who have recovered to R_{j,t+1} (second and third equations).

For this analysis, we will use the aggregated OD flow matrix of a typical day, obtained from GPS data provided by the local ride-sharing company gg, as a proxy for the mobility patterns in Yerevan city. Next, we need the population counts in each 250×250m grid cell, which we approximate by proportionally scaling the extracted flow counts so that the total inflows in different locations sum up to approximately half of Yerevan's population of 1.1 million. This is actually a bold assumption, but since varying this portion yielded very similar results, we will stick to it.

For our first simulation, we will imagine a sustainable, public transport-dominated future urban mobility with α = 0.9:

We see how fast the infected fraction of the population climbs immediately, reaching the epidemic's peak around days 8–10 with almost 70% of the population infected, while only a small portion (~10%) of the population has recovered from the disease. Towards day 100, when the epidemic has receded, we see the fraction of recovered individuals reach a staggering 90%! Now let's see if reducing the intensity of public transport travel to something like α = 0.2 has any effect on mitigating the epidemic spread. This can either be interpreted as taking drastic measures to reduce urban mobility (e.g., by issuing a curfew) or as increasing the share of private car travel to reduce the chances of infection during travel.

We see how the peak of the epidemic comes somewhere between days 16 and 20, with a significantly smaller infected group (~45%) and twice as many recovered (~20%). Towards the end of the epidemic, the fraction of susceptible individuals is also twice as big (~24% vs. ~12%), meaning that more people have escaped the disease. As expected, we see that the introduction of dramatic measures to temporarily bring urban mobility down has a big impact on the disease spreading dynamics.

Now, let's see whether another intuitive idea, completely cutting off a few key popular locations, has the desired effect. To do this, let's pick the locations associated with the upper 1 percentile of mobility flows,

and completely block all flow to and from those locations, effectively establishing a quarantine regime there. As we can see from the plot, in Yerevan these locations are mostly in the city center, with two other locations being the two largest shopping malls. Choosing a moderate α = 0.5, we obtain:

We see an even smaller fraction of infected individuals at the epidemic's peak (~35%), and, most importantly, we see that towards the end of the epidemic, around half of the population remains susceptible, effectively escaping the infection!

Here is a small animation visualising the dynamics of the high public transport share scenario:

By no means claiming accurate epidemic modelling (or even any substantial knowledge in epidemiology beyond the basics), our aim in this post was to get a first insight into how network effects come into play in an urban setting during an infectious disease outbreak. With ever-increasing population densities, mobility, and dynamics, our cities become more exposed to "black swans" and become more fragile. And since you can't fetch the coffee if you're dead, smart and sustainable cities will be meaningless without effective and efficient crisis handling capability and mechanisms.
For instance, we saw that introducing quarantine regimes in key locations, or taking draconian measures to curb mobility, can be instrumental during such a health crisis. However, a further important question is how to implement such measures while minimizing damage and loss to the functioning of the city and its economy. Further yet, the exact epidemic spreading mechanisms of infectious diseases are still an active area of research, and the advances in this field will have to be communicated to and integrated in urban planning, policymaking, and management to make our cities safe and antifragile.

P.S. Read the original post here.

The code for the above simulations (note that the OD flow matrix OD and the population threshold thresh are assumed to have been defined earlier in the notebook):

import numpy as np

# initialize the population vector from the origin-destination flow matrix
N_k = np.abs(np.diagonal(OD) + OD.sum(axis=0) - OD.sum(axis=1))
locs_len = len(N_k)                 # number of locations
SIR = np.zeros(shape=(locs_len, 3)) # make a numpy array with 3 columns for keeping track of the S, I, R groups
SIR[:, 0] = N_k                     # initialize the S group with the respective populations

first_infections = np.where(SIR[:, 0] <= thresh, SIR[:, 0] // 20, 0) # for demo purposes, randomly introduce infections
SIR[:, 0] = SIR[:, 0] - first_infections
SIR[:, 1] = SIR[:, 1] + first_infections # move infections to the I group

# row normalize the SIR matrix for keeping track of group proportions
row_sums = SIR.sum(axis=1)
SIR_n = SIR / row_sums[:, np.newaxis]

# initialize parameters
beta = 1.6
gamma = 0.04
public_trans = 0.5 # alpha
R0 = beta / gamma
beta_vec = np.random.gamma(1.6, 2, locs_len)
gamma_vec = np.full(locs_len, gamma)
public_trans_vec = np.full(locs_len, public_trans)

# make copy of the SIR matrices
SIR_sim = SIR.copy()
SIR_nsim = SIR_n.copy()

# run model
print(SIR_sim.sum(axis=0).sum() == N_k.sum())

from tqdm import tqdm_notebook
infected_pop_norm = []
susceptible_pop_norm = []
recovered_pop_norm = []
for time_step in tqdm_notebook(range(100)):
    infected_mat = np.array([SIR_nsim[:, 1], ] * locs_len).transpose()
    OD_infected = np.round(OD * infected_mat)
    inflow_infected = OD_infected.sum(axis=0)
    inflow_infected = np.round(inflow_infected * public_trans_vec)
    print('total infected inflow: ', inflow_infected.sum())
    new_infect = beta_vec * SIR_sim[:, 0] * inflow_infected / (N_k + OD.sum(axis=0))
    new_recovered = gamma_vec * SIR_sim[:, 1]
    new_infect = np.where(new_infect > SIR_sim[:, 0], SIR_sim[:, 0], new_infect)
    SIR_sim[:, 0] = SIR_sim[:, 0] - new_infect
    SIR_sim[:, 1] = SIR_sim[:, 1] + new_infect - new_recovered
    SIR_sim[:, 2] = SIR_sim[:, 2] + new_recovered
    SIR_sim = np.where(SIR_sim < 0, 0, SIR_sim)
    # recompute the normalized SIR matrix
    row_sums = SIR_sim.sum(axis=1)
    SIR_nsim = SIR_sim / row_sums[:, np.newaxis]
    S = SIR_sim[:, 0].sum() / N_k.sum()
    I = SIR_sim[:, 1].sum() / N_k.sum()
    R = SIR_sim[:, 2].sum() / N_k.sum()
    print(S, I, R, (S + I + R) * N_k.sum(), N_k.sum())
    print('\n')
    infected_pop_norm.append(I)
    susceptible_pop_norm.append(S)
    recovered_pop_norm.append(R)
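To inspect the results, the three lists collected above can be plotted with matplotlib. This short sketch is not part of the original post, but it reproduces the kind of epidemic curves discussed earlier:

import matplotlib.pyplot as plt

# plot the evolution of the normalized S, I, R fractions over the simulated days
plt.figure(figsize=(10, 6))
plt.plot(susceptible_pop_norm, label='Susceptible')
plt.plot(infected_pop_norm, label='Infected')
plt.plot(recovered_pop_norm, label='Recovered')
plt.xlabel('Day')
plt.ylabel('Fraction of population')
plt.legend()
plt.show()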
[ { "code": null, "e": 345, "s": 171, "text": "You can learn the entire modelling, simulation and spatial visualization of the Covid-19 epidemic spreading in a city using just Python in this online course or in this one." }, { "code": null, "e": 1542, "s": 345, "text": "The recent 2019-nCoV Wuhan coronavirus outbreak in China has sent shocks through financial markets and entire economies, and has duly triggered panic among the general population around the world. On 30 January 2020, 2019-nCoV was even designated a global health emergency by the World Health Organization (WHO). At the time of this writing, no specific treatment verified by medical research standards has yet been discovered. Moreover, some key epidemiological metrics such as the basic reproduction number (the average number of people infected by an ill individual) are still unknown. In our times of unprecedented global connectedness and mobility, such epidemics are a major threat on a global scale due to small world network effects. One could conjecture that conditional on a global catastrophic event (loosely defined as > 100mln casualties) happening in 2020, the most likely cause would be precisely some pandemic — not a nuclear disaster, not climate catastrophe, etc. This is further aggravated by worldwide rapid urbanisation, with our densely populated dynamic cities turning into propagation nodes in the disease diffusion network, thus becoming extremely vulnerable and fragile." }, { "code": null, "e": 1966, "s": 1542, "text": "In this post, we will discuss what can happen when an epidemic strikes a city, what measures should immediately be taken, and what implications this has for urban planning, policy making, and management. We will take the city of Yerevan as our case study and will mathematically model and simulate in Python the spread of the coronavirus in the city, looking at how urban mobility patterns affect the spread of the disease." }, { "code": null, "e": 2281, "s": 1966, "text": "Effective, efficient, and sustainable urban mobility is of crucial importance for the functioning of modern cities. It has been shown to directly affect livability and economic output (GDP) of cities. However, in the event of an epidemic, it will add fuel to the fire, amplifyig and propagating the disease spread." }, { "code": null, "e": 2482, "s": 2281, "text": "So let’s begin by looking at the network of aggregated origin-destination (OD) flows on a uniform Cartesian grid in Yerevan to get an idea about the spatial structure of mobility patterns in the city:" }, { "code": null, "e": 2659, "s": 2482, "text": "Further, if we look at the total inflow to the grid cells, we see a more or less monocentric spatial organisation with some cells with high daily inflow located off the center:" }, { "code": null, "e": 2786, "s": 2659, "text": "Now, imagine that an epidemic breaks out at a random location in the city. How will it spread? What can be done to contain it?" }, { "code": null, "e": 3401, "s": 2786, "text": "To answer these questions, we will build a simple compartmental model to simulate the spread of infectious disease in the city. As an epidemic breaks out, its transmission dynamics varies significantly, depending on the geographical locations of the initial infection and its connectivity with the rest of the city. This is one of the most important insights gained from recent, data-driven studies on epidemics in urban populations. 
However, as we will see further below, the various outcomes call for similar measures to contain the epidemic and to account for such a possibility in planning and managing cities." }, { "code": null, "e": 3733, "s": 3401, "text": "Since runnning individual-based epidemic models is challenging, and since our goal is to show general principles of epidemic spread in cities, and not to build a minutely calibrated and accurate epidemic model, we will follow the approach described in this Nature article, modifying the described classical SIR model for our needs." }, { "code": null, "e": 3861, "s": 3733, "text": "The model divides the population into three compartments. For each location i at time t, the three compartments are as follows:" }, { "code": null, "e": 3941, "s": 3861, "text": "Si,t: the number of individuals not yet infected or susceptible to the disease." }, { "code": null, "e": 4071, "s": 3941, "text": "Ii,t: the number of individuals infected with the disease and capable of spreading the disease to those in the susceptible group." }, { "code": null, "e": 4324, "s": 4071, "text": "Ri,t: the number of individuals who have been infected and then removed from the infected group, either due to recovery or due to death. Individuals in this group are not capable of contracting the disease again or transmitting the infection to others." }, { "code": null, "e": 4531, "s": 4324, "text": "In our simulations, time will be a discrete variable as the state of the system is modelled at a daily basis. In a fully susceptible population at location j at time t, an outbreak happens with probability:" }, { "code": null, "e": 5075, "s": 4531, "text": "where βt is the transmission rate on day t; mj,k reflects mobility from location k to location j, xk,t and yk,t denote the fraction of the infected and susceptible populations on day t at location k and location j, respectively, given by xk,t = Ik,t / Nk and yj,t = Sj,t / Nj, where Nk and Nj are the population sizes at the locations k and j. Then we go ahead and simulate a stochastic process introducing the disease into locations with entirely susceptible populations, with Ij,t+1 being a Bernoulli random variable with probability h(t,j)." }, { "code": null, "e": 5355, "s": 5075, "text": "Once the infections are introduced at random locations, the disease spreads both within those locations and is carried and transmitted in other locations by travelling individuals. This is where the urban mobility patterns characterised by the OD flow matrix play a crucial role." }, { "code": null, "e": 6133, "s": 5355, "text": "Further, to formalise how the disease is transmitted by an infected person, we need the basic reproduction number, R0. It is defined as R0 = βt / γ where γ is the recovery rate, and can be thought of as the expected number of secondary infections after an infected individual comes into contact with a susceptible population. At the time of this writing, the basic reproduction number for the Wuhan coronavirus has been estimated to be between 1.4 and 4. Let’s take the worst case and assume it’s 4. However, we should note that it’s actually a random variable and the reported number is but the expected number. 
To make things a bit more interesting, we will run our simulations with different R0 at each location, drawn from a good candidate distribution, Gamma, with mean 4:" }, { "code": null, "e": 6175, "s": 6133, "text": "We can now proceed to the model dynamics:" }, { "code": null, "e": 6371, "s": 6175, "text": "where βk,t is the (random) transmission rate at location k on day t, and α is a coefficient denoting the modal share or the intensity of public transport vs. private car travel modes in the city." }, { "code": null, "e": 6983, "s": 6371, "text": "The model dynamics described in the above equations are very simple: on day t+1 at location j, we need to subtract from the susceptible population Sj,t the fraction of people infected within location j (the second term in the first equation) as well as the fraction of infected people that have arrived from other locations in the city, weighted by their respective transmission rates βk,t (the third term in the first equation). Since the total population Nj = Sj + Ij + Rj, we need to move the subtracted portion to the infected group, while also moving those recovered to Rj,t+1 (second and third equations)." }, { "code": null, "e": 7553, "s": 6983, "text": "For this analysis, we will use the aggregated OD flow matrix of a typical day obtained from GPS data provided by local ride sharing company gg as a proxy for the mobility patterns in Yerevan city. Next, we need the population counts in each 250×250m grid cell, which we approximate by proportionally scaling the extracted flow counts so that the total inflows in different locations sum up to approximately half of Yerevan’s population of 1.1 million. This is actually a bold assumption, but since varying this portion yielded very similar results, we will stick to it." }, { "code": null, "e": 7670, "s": 7553, "text": "For our first simulation, we will imagine a sustainable public transport-dominated future urban mobility with α=0.9:" }, { "code": null, "e": 8404, "s": 7670, "text": "We see how fast the infected fraction of the population is climbing up immediately, reaching the epidemic’s peak on around day 8–10, with almost 70% of the population infected, while only a small portion (~10%) of the population having recovered from the disease. Towards day 100, when the epidemic has receded, we see the fraction of recovered individuals reach a staggering 90%! Now let’s see if reducing the intensity of public transport travel to something like α = 0.2 has any effect on mitigating the epidemic spread. This can either be interpreted as taking drastic measures to reduce urban mobility (e.g., by issuing a curfew) or as increasing the share of private car travel to reduce chances of infection during the travel." }, { "code": null, "e": 8884, "s": 8404, "text": "We see how the peak of the epidemic comes somewhere between day 16 and 20, with a significantly smaller infected group (~45%) and twice as many recovered (~20%). Towards the end of the epidemic, the fraction of susceptible individuals is also twice as big (~24% vs. ~12%), meaning that more people have escaped the disease. As expected, we see that the introduction of dramatic measures to temporarily bring urban mobility down has a big impact on the disease spreading dynamics." }, { "code": null, "e": 9103, "s": 8884, "text": "Now, let’s see whether another intuitive idea of completely cutting off a few key popular locations has the desired effect. 
To do this, let’s pick the locations associated with the upper 1 percentile of mobility flows," }, { "code": null, "e": 9404, "s": 9103, "text": "and completely block all flow to and from those locations, effectively establishing there a quarantine regime. As we can see from the plot, in Yerevan these locations are mostly in the city center, with two other locations being the two largest shopping malls. Choosing a moderate α = 0.5, we obtain:" }, { "code": null, "e": 9663, "s": 9404, "text": "We see an even smaller fraction of infected individuals at the epidemic’s peak (~35%), and, most importantly, we see that towards the end of the epidemic, around half of the population remains susceptible, effectively escaping from contracting the infection!" }, { "code": null, "e": 9759, "s": 9663, "text": "Here is a small animation visualising the dynamics of the high public transport share scenario:" }, { "code": null, "e": 10679, "s": 9759, "text": "By no means claiming accurate epidemic modelling (or even any substantial knowledge in epidemiology beyond the basics), our aim in this post was to get a first insight on how network effects come into play in an urban setting during an infectious disease outbreak. With ever-increasing population densities, mobility, and dynamics, our cities become more exposed to “black swans” and become more fragile. And since you can’t fetch the coffee if you’re dead, smart and sustainable cities will be meaningless without effective and efficient crisis handling capability and mechanisms. For instance, we saw that the introduction of quarantine regimes in key locations, or taking draconian measures to curb mobility, can be instrumental during such a health crisis. However, a further important question would be how to implement such measures while minimizing damage and loss to the functioning of the city and its economy?" }, { "code": null, "e": 10960, "s": 10679, "text": "Further yet, the exact epidemic spreading mechanisms of infectious diseases are still an active area of research and the advances in this fields will have to be communicated to and integrated in urban planning, policymaking, and management to make our cities safe and antifragile." }, { "code": null, "e": 10994, "s": 10960, "text": "P.S. Read the original post here." }, { "code": null, "e": 11030, "s": 10994, "text": "The code for the above simulations:" } ]
__rmul__ in Python - GeeksforGeeks
04 May, 2020

For every operator sign, there is an underlying mechanism. This underlying mechanism is a special method that will be called during the operator action. This special method is called a magic method. For every arithmetic calculation like +, -, *, /, we require 2 operands to carry out the operator functionality.

Examples:

'+' -> '__add__' method
'-' -> '__sub__' method
'*' -> '__mul__' method

As this article is limited to the multiplication functionality, we will look at the multiplication procedure here. To perform the multiplication, the operator sign has to be tied to either the left or the right operand. Before going to the __rmul__ method, we will look at the __mul__ method, which helps us understand the multiplication functionality vividly.

Let's take an expression x*y where x is an instance of a class A. To perform the __mul__ method, the operator looks into the class of the left operand (x) for the presence of __mul__, i.e., the operator (*) will check the class A for the presence of a '__mul__' method in it. If it has a __mul__ method, it calls x.__mul__(y). Otherwise, it throws a 'TypeError: unsupported operand type(s)' error message.

Example 1:

class Foo(object):

    def __init__(self, val):
        self.val = val

    def __str__(self):
        return "Foo [%s]" % self.val

class Bar(object):

    def __init__(self, val):
        self.val = val

    def __str__(self):
        return "Bar [%s]" % self.val

# Driver Code
f = Foo(5)
b = Bar(6)
print(f * b)

Output:

TypeError: unsupported operand type(s) for *: 'Foo' and 'Bar'

In the above example, the first operand is f and its class is Foo(). As Foo() has no __mul__ method, it doesn't understand how to multiply, so a TypeError message shows up. If we check the other class, Bar(), it has no __mul__ method either. So, even if we reverse the multiplication to (b*f), it will throw the same error.

Example 2: Let's add the __mul__ method to the Foo class.

class Foo(object):

    def __init__(self, val):
        self.val = val

    def __mul__(self, other):
        return Foo(self.val * other.val)

    def __str__(self):
        return "Foo [%s]" % self.val

class Bar(object):

    def __init__(self, val):
        self.val = val

    def __str__(self):
        return "Bar [%s]" % self.val

# Driver Code
f = Foo(5)
b = Bar(6)
print(f * b)

Output:

Foo [30]

As already mentioned, the operator by default looks into the left operand's class, and here it finds the __mul__ method. Now it knows what to do: f.__mul__(b) multiplies 5 by 6 and returns Foo(30). If we reverse the multiplication to (b*f), the issue is thrown up again: b.__mul__(f) fails because b's class Bar() doesn't have a __mul__ method.

A slight difference between __mul__ and __rmul__ is that the operator looks for __mul__ in the left operand and for __rmul__ in the right operand. For example, in x*y, the operator looks for the __rmul__ method in y's class definition. If it finds the __rmul__ method, it will show up with the result; otherwise it throws the TypeError error message.

Example 1: Let's take the above example with a small modification.

class Foo(object):

    def __init__(self, val):
        self.val = val

    def __str__(self):
        return "Foo [%s]" % self.val

class Bar(object):

    def __init__(self, val):
        self.val = val

    def __rmul__(self, other):
        return Bar(self.val * other.val)

    def __str__(self):
        return "Bar [%s]" % self.val

# Driver code
f = Foo(5)
b = Bar(6)

print(f * b)

Output:

Bar [30]

In the above example, f*b is evaluated as b.__rmul__(f), since the __rmul__ method is present in Bar(), the class of the instance b, while Foo has no __mul__. What happens if we reverse the multiplication to (b*f)?
The lookup then becomes f.__rmul__(b): since Foo doesn't have an __rmul__ method, Python cannot evaluate the expression and throws up a TypeError message.

For operators like these, which require two operands, a class should ideally define both the __mul__ and the __rmul__ method. To support both normal and reverse multiplication, see the example below.

Example 2:

class Foo(object):

    def __init__(self, val):
        self.val = val

    def __str__(self):
        return "Foo [%s]" % self.val

class Bar(object):

    def __init__(self, val):
        self.val = val

    def __rmul__(self, other):
        return Bar(self.val * other.val)

    def __mul__(self, other):
        return self.__rmul__(other)

    def __str__(self):
        return "Bar [%s]" % self.val

# Driver Code
f = Foo(5)
b = Bar(6)

print(b * f)
print(f * b)

Output:

Bar [30]
Bar [30]

python-oop-concepts

Python

Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
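As a closing note on the lookup order described in this article: Python first tries the left operand's __mul__ and only falls back to the right operand's __rmul__ when the first attempt is unavailable or returns NotImplemented. The sketch below is not part of the original article; the class names Left and Right are made up purely for illustration:

class Left(object):
    def __mul__(self, other):
        print("Left.__mul__ tried first")
        # returning NotImplemented tells Python to try the other operand's __rmul__
        return NotImplemented

class Right(object):
    def __rmul__(self, other):
        print("Right.__rmul__ used as fallback")
        return 42

# Python first calls Left.__mul__; it returns NotImplemented,
# so Python then falls back to Right.__rmul__
print(Left() * Right())

Output:

Left.__mul__ tried first
Right.__rmul__ used as fallback
42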
[ { "code": null, "e": 23901, "s": 23873, "text": "\n04 May, 2020" }, { "code": null, "e": 24209, "s": 23901, "text": "For every operator sign, there is an underlying mechanism. This underlying mechanism is a special method that will be called during the operator action. This special method is called magical method. For every arithmetic calculation like +, -, *, /, we require 2 operands to carry out operator functionality." }, { "code": null, "e": 24219, "s": 24209, "text": "Examples:" }, { "code": null, "e": 24288, "s": 24219, "text": "‘+’ ? ‘__add__’ method\n‘_’ ? ‘__sub__’ method\n‘*’ ? ‘__mul__’ method" }, { "code": null, "e": 24642, "s": 24288, "text": "As the article is limited to multiplication functionality, we will see about multiplication procedure here. To perform the multiplication functionality, we have to tie up the operator sign to either left/right operand. Before, going to __rmul__ method, we will see about __mul__ method, which helps us to understand multiplication functionality vividly." }, { "code": null, "e": 25027, "s": 24642, "text": "Let’s take an expression x*y where x is an instance of a class A. To perform the __mul__ method, the operator looks into the class of left operand(x) for the present of __mul__ i.e., operator(*) will check the class A for the presence of ‘__mul__’ method in it. If it has __mul__ method, it calls x.__mul__(y). Otherwise, it throws the ‘TypeError: unsupported operands’ error message." }, { "code": null, "e": 25038, "s": 25027, "text": "Example 1:" }, { "code": "class Foo(object): def __init__(self, val): self.val = val def __str__(self): return \"Foo [% s]\" % self.val class Bar(object): def __init__(self, val): self.val = val def __str__(self): return \"Bar [% s]\" % self.val # Driver Codef = Foo(5)b = Bar(6)print(f * b)", "e": 25366, "s": 25038, "text": null }, { "code": null, "e": 25374, "s": 25366, "text": "Output:" }, { "code": null, "e": 25436, "s": 25374, "text": "TypeError, unsupported operand type(s) for *: 'Foo' and 'Bar'" }, { "code": null, "e": 25759, "s": 25436, "text": "In the above example, the first operand is f and its class Foo(). As Foo() has no __mul__ method, it doesn’t understand how to multiply. So, it will show up TypeError message. If we check the other class Bar(), even it has no __mul__ method. So, even if we reverse the multiplication to (b*f), it will throw the same error" }, { "code": null, "e": 25812, "s": 25759, "text": "Example 2: Lets add the __mul__ method in Foo class." }, { "code": "class Foo(object): def __init__(self, val): self.val = val def __mul__(self, other): return Foo(self.val * other.val) def __str__(self): return \"Foo [% s]\" % self.val class Bar(object): def __init__(self, val): self.val = val def __str__(self): return \"Bar [% s]\" % self.val # Driver Codef = Foo(5)b = Bar(6)print(f * b)", "e": 26211, "s": 25812, "text": null }, { "code": null, "e": 26219, "s": 26211, "text": "Output:" }, { "code": null, "e": 26226, "s": 26219, "text": "Foo 30" }, { "code": null, "e": 26662, "s": 26226, "text": "As it is already mentioned, the operator by default looks into the left operand’s class, and here it finds the __mul__ method. Now it knows what to do and resulted 30 f.__mul__(b) = 5.__mul__(6). If we reverse the multiplication to (b*f), it throws up the issue again, as it looks into left operand’s class(Bar()) which doesn’t have any __mul__ method. b.__mul__(f) will throws the issue as b’s class Bar() doesn’t have __mul__ method." 
}, { "code": null, "e": 26992, "s": 26662, "text": "A slight difference between __mul__ and __rmul__ is, Operator looks for __mul__ in left operand and looks for __rmul__ in right operand. For example, x*y. Operator looks for __rmul__ method in the y’s class definition. If it finds the __rmul__ method, it will show up with the result, otherwise throws the TypeError error message" }, { "code": null, "e": 27059, "s": 26992, "text": "Example 1: Let’s take the above example with a small modification." }, { "code": "class Foo(object): def __init__(self, val): self.val = val def __str__(self): return \"Foo [% s]\" % self.val class Bar(object): def __init__(self, val): self.val = val def __rmul__(self, other): return Bar(self.val * other.val) def __str__(self): return \"Bar [% s]\" % self.val # Driver codef = Foo(5)b = Bar(6) print(f * b)", "e": 27455, "s": 27059, "text": null }, { "code": null, "e": 27463, "s": 27455, "text": "Output:" }, { "code": null, "e": 27470, "s": 27463, "text": "Bar 30" }, { "code": null, "e": 27772, "s": 27470, "text": "In the above example, it assumes f*b as b.__rmul__(f) as __rmul__ method is present in Bar() class of the instance b. If we reverse the multiplication to (b*f). The notation will be f.__rmul__(b). If it doesn’t have __rmul__ method, it can’t understand what to notate and throws up TypeError message.’" }, { "code": null, "e": 27975, "s": 27772, "text": "These type of operators, that require 2 operands, it will by default carry both __mul__ and __rmul__ method. To perform multiplication with both normal and reverse multiplication, see the below example." }, { "code": null, "e": 27986, "s": 27975, "text": "Example 2:" }, { "code": "class Foo(object): def __init__(self, val): self.val = val def __str__(self): return \"Foo [% s]\" % self.val class Bar(object): def __init__(self, val): self.val = val def __rmul__(self, other): return Bar(self.val * other.val) def __mul__(self, other): return self.__rmul__(other) def __str__(self): return \"Bar [% s]\" % self.val # Driver Codef = Foo(5)b = Bar(6) print(b * f)print(f * b)", "e": 28462, "s": 27986, "text": null }, { "code": null, "e": 28470, "s": 28462, "text": "Output:" }, { "code": null, "e": 28488, "s": 28470, "text": "Bar [30]\nBar [30]" }, { "code": null, "e": 28508, "s": 28488, "text": "python-oop-concepts" }, { "code": null, "e": 28515, "s": 28508, "text": "Python" }, { "code": null, "e": 28613, "s": 28515, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28622, "s": 28613, "text": "Comments" }, { "code": null, "e": 28635, "s": 28622, "text": "Old Comments" }, { "code": null, "e": 28667, "s": 28635, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 28723, "s": 28667, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28765, "s": 28723, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28807, "s": 28765, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28843, "s": 28807, "text": "Python | Pandas dataframe.groupby()" }, { "code": null, "e": 28865, "s": 28843, "text": "Defaultdict in Python" }, { "code": null, "e": 28904, "s": 28865, "text": "Python | Get unique values from a list" }, { "code": null, "e": 28931, "s": 28904, "text": "Python Classes and Objects" }, { "code": null, "e": 28962, "s": 28931, "text": "Python | os.path.join() method" } ]
Matrix Interchange - Java | Practice | GeeksforGeeks
Working with 2D arrays is quite important. Here we will do swapping of columns in a 2D array. You are given a matrix M of r rows and c columns. You need to swap the first column with the last column.

Example:

Input:
3 4
1 2 3 4
4 3 2 1
6 7 8 9

Output:
4 2 3 1
1 3 2 4
9 7 8 6

Your Task:
Since this is a function problem, you don't need to take any input. Just complete the provided function interchange(int, int, int) that takes the matrix, rows and columns as parameters.

Constraints:
1 <= r,c <= 100

0

newlife29042004 · 3 weeks ago

static void interchange(int a[][], int r, int c){
    // Your code here
    int [][]new_a = new int[r][c];
    int temp = 0;
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            if(j < 1){
                int v = c-1;
                temp = a[i][j];
                a[i][j] = a[i][v];
                a[i][v] = temp;
                v--;
            }
        }
    }
    // print the array
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

0

rahul878 · 1 month ago

static void interchange(int a[][], int r, int c){
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            if(j == 0){
                int temp = a[i][j];
                a[i][j] = a[i][c-1];
                a[i][c-1] = temp;
            }
        }
    }
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

0

patelshivanshu2017 · 2 months ago

static void interchange(int a[][], int r, int c){
    // Your code here
    int i = 0;
    while(i < r){
        int temp = a[i][0];
        a[i][0] = a[i][c-1];
        a[i][c-1] = temp;
        i++;
    }
    for(i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

0

swapniltayal422 · 2 months ago

class Geeks{
    static void interchange(int a[][], int r, int c){
        // Your code here
        for (int i = 0; i < r; i++){
            int temp = a[i][0];
            a[i][0] = a[i][c-1];
            a[i][c-1] = temp;
        }
        for (int i = 0; i < r; i++){
            for (int j = 0; j < c; j++){
                System.out.print(a[i][j] + " ");
            }
            System.out.println();
        }
    }
}

0

sowndhar525 · 2 months ago

class Geeks{
    static void interchange(int a[][], int r, int c){
        for(int i = 0; i < r; i++){
            int temp = a[i][0];
            a[i][0] = a[i][c-1];
            a[i][c-1] = temp;
        }
        for(int i = 0; i < r; i++){
            for(int j = 0; j < c; j++){
                System.out.print(a[i][j] + " ");
            }
            System.out.println();
        }
    }
}

0

abhiofficial · 4 months ago

static void interchange(int a[][], int r, int c){
    // Your code here
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            if(j == 0){
                System.out.print(a[i][c-1] + " ");
            }
            else if(j == c-1){
                System.out.print(a[i][0] + " ");
            }
            else
                System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

0

rajadash2345 · 4 months ago

static void interchange(int a[][], int r, int c){
    // Your code here
    int temp = 0;
    for(int i = 0; i < r; i++){
        temp = a[i][0];
        a[i][0] = a[i][c-1];
        a[i][c-1] = temp;
        for(int j = 0; j < c; j++){
            System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

0

mrpascal · 4 months ago

Simple Solution

static void interchange(int a[][], int r, int c){
    // Your code here
    for(int j = 0; j < r; j++){
        int temp = a[j][0];
        a[j][0] = a[j][c-1];
        a[j][c-1] = temp;
    }
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

0

junadmd54 · 5 months ago

// JAVA SOLUTION

class Geeks{
    static void interchange(int a[][], int r, int c){
        int j1, j2;
        for(int i = 0; i < r; i++){
            if(r == 1 && c == 1)
                break;
            j1 = 0;
            j2 = c-1;
            a[i][j1] = a[i][j1] + a[i][j2];
            a[i][j2] = a[i][j1] - a[i][j2];
            a[i][j1] = a[i][j1] - a[i][j2];
        }
        // Your code here
        for(int i = 0; i < r; i++){
            for(int j = 0; j < c; j++){
                System.out.print(a[i][j] + " ");
            }
            System.out.println();
        }
    }
}

0

mashhadihossain · 6 months ago

SIMPLE JAVA SOLUTION

static void interchange(int a[][], int r, int c){
    for(int i = 0; i < r; i++)
    {
        int temp = a[i][0];
        a[i][0] = a[i][c-1];
        a[i][c-1] = temp;
    }
    for(int i = 0; i < r; i++){
        for(int j = 0; j < c; j++){
            System.out.print(a[i][j] + " ");
        }
        System.out.println();
    }
}

We strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial?

Login to access your submissions.

Problem

Contest

Reset the IDE using the second button on the top right corner.

Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values.

Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints.

You can access the hints to get an idea about what is expected of you as well as the final solution code.

You can view the solutions submitted by other users from the submission tab.
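For intuition, the same first/last column swap can be sketched in a few lines of Python (the exercise itself expects Java; this hypothetical interchange helper is only for illustration):

def interchange(matrix, r, c):
    # swap the first and last entry of every row in place
    for i in range(r):
        matrix[i][0], matrix[i][c - 1] = matrix[i][c - 1], matrix[i][0]
    for row in matrix:
        print(*row)

interchange([[1, 2, 3, 4], [4, 3, 2, 1], [6, 7, 8, 9]], 3, 4)
# prints the example's expected output:
# 4 2 3 1
# 1 3 2 4
# 9 7 8 6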
[ { "code": null, "e": 489, "s": 290, "text": "Working with 2D arrays is quite important. Here we will do swapping of column in a 2D array. You are given a matrix M or r rows and c columns. You need to swap the first column with the last column." }, { "code": null, "e": 498, "s": 489, "text": "Example:" }, { "code": null, "e": 566, "s": 498, "text": "Input:\n3 4\n1 2 3 4\n4 3 2 1\n6 7 8 9\n\nOutput:\n4 2 3 1\n1 3 2 4\n9 7 8 6" }, { "code": null, "e": 760, "s": 566, "text": "Your Task:\nSince this is a function problem, you don't need to take any input. Just complete the provided function interchange(int, int , int ) that take matrix, rows and columns as parameters." }, { "code": null, "e": 789, "s": 760, "text": "Constraints:\n1 <= r,c <= 100" }, { "code": null, "e": 791, "s": 789, "text": "0" }, { "code": null, "e": 818, "s": 791, "text": "newlife290420043 weeks ago" }, { "code": null, "e": 1422, "s": 818, "text": "static void interchange(int a[][],int r, int c){ // Your code here int [][]new_a = new int[r][c]; int temp = 0; for(int i = 0;i<r;i++){ for(int j=0;j<c;j++){ if(j<1){ int v = c-1; temp = a[i][j]; a[i][j] = a[i][v]; a[i][v] = temp; v--; } } } // print the array for(int i = 0;i<r;i++){ for(int j=0;j<c;j++){ System.out.print(a[i][j]+\" \"); } System.out.println(); } }}" }, { "code": null, "e": 1424, "s": 1422, "text": "0" }, { "code": null, "e": 1444, "s": 1424, "text": "rahul8781 month ago" }, { "code": null, "e": 1976, "s": 1444, "text": " static void interchange(int a[][],int r, int c){\n \n for(int i = 0;i<r;i++){\n for(int j = 0;j<c;j++){\n if(j==0){\n int temp=a[i][j];\n a[i][j]=a[i][c-1];\n a[i][c-1]=temp;\n }\n }\n //System.out.println();\n } \n \n for(int i = 0;i<r;i++){\n for(int j = 0;j<c;j++){\n System.out.print(a[i][j] + \" \");\n }\n System.out.println();\n } \n }" }, { "code": null, "e": 1978, "s": 1976, "text": "0" }, { "code": null, "e": 2009, "s": 1978, "text": "patelshivanshu20172 months ago" }, { "code": null, "e": 2392, "s": 2009, "text": " static void interchange(int a[][],int r, int c){ // Your code here int i=0; while(i<r){ int temp=a[i][0]; a[i][0]=a[i][c-1]; a[i][c-1]=temp; i++; } for( i = 0;i<r;i++){ for(int j = 0;j<c;j++){ System.out.print(a[i][j] + \" \"); } System.out.println(); } }}" }, { "code": null, "e": 2394, "s": 2392, "text": "0" }, { "code": null, "e": 2422, "s": 2394, "text": "swapniltayal4222 months ago" }, { "code": null, "e": 2836, "s": 2422, "text": "class Geeks{ static void interchange(int a[][],int r, int c){ // Your code here for (int i = 0; i < r; i++ ){ int temp = a[i][0] ; a[i][0] = a[i][c-1] ; a[i][c-1] = temp ; } for (int i = 0; i < r; i++ ){ for (int j = 0; j < c; j++ ){ System.out.print(a[i][j] + \" \"); }System.out.println(); } }}" }, { "code": null, "e": 2838, "s": 2836, "text": "0" }, { "code": null, "e": 2862, "s": 2838, "text": "sowndhar5252 months ago" }, { "code": null, "e": 3277, "s": 2862, "text": "class Geeks{\n \n static void interchange(int a[][],int r, int c){\n \n for(int i = 0; i < r; i++){\n int temp = a[i][0];\n a[i][0] = a[i][c-1];\n a[i][c-1] = temp;\n }\n \n for(int i = 0; i < r; i++){\n for(int j = 0; j < c; j++){\n System.out.print(a[i][j]+\" \");\n }\n System.out.println();\n }\n }\n}" }, { "code": null, "e": 3279, "s": 3277, "text": "0" }, { "code": null, "e": 3304, "s": 3279, "text": "abhiofficial4 months ago" }, { "code": null, "e": 3769, "s": 3304, "text": "static void interchange(int a[][],int r, int c){ // Your code here for(int i = 0;i<r;i++){ for(int j = 0;j<c;j++){ if(j==0){ 
System.out.print(a[i][c-1] + \" \"); } else if(j==c-1){ System.out.print(a[i][0] + \" \"); } else System.out.print(a[i][j] + \" \"); } System.out.println(); } }" }, { "code": null, "e": 3771, "s": 3769, "text": "0" }, { "code": null, "e": 3796, "s": 3771, "text": "rajadash23454 months ago" }, { "code": null, "e": 4168, "s": 3796, "text": "static void interchange(int a[][],int r, int c){ // Your code here int temp=0; for(int i = 0;i<r;i++){ temp =a[i][0]; a[i][0]=a[i][c-1]; a[i][c-1]=temp; for(int j = 0;j<c;j++){ System.out.print(a[i][j] + \" \"); } System.out.println(); } }" }, { "code": null, "e": 4170, "s": 4168, "text": "}" }, { "code": null, "e": 4172, "s": 4170, "text": "0" }, { "code": null, "e": 4193, "s": 4172, "text": "mrpascal4 months ago" }, { "code": null, "e": 4209, "s": 4193, "text": "Simple Solution" }, { "code": null, "e": 4659, "s": 4209, "text": "static void interchange(int a[][],int r, int c){\n \n // Your code here\n \n for(int j = 0;j<r;j++){\n int temp = a[j][0];\n a[j][0] = a[j][c-1];\n a[j][c-1] = temp;\n }\n \n \n for(int i = 0;i<r;i++){\n for(int j = 0;j<c;j++){\n System.out.print(a[i][j] + \" \");\n }\n System.out.println();\n } \n }" }, { "code": null, "e": 4661, "s": 4659, "text": "0" }, { "code": null, "e": 4683, "s": 4661, "text": "junadmd545 months ago" }, { "code": null, "e": 4700, "s": 4683, "text": "// JAVA SOLUTION" }, { "code": null, "e": 5220, "s": 4700, "text": "class Geeks{ static void interchange(int a[][],int r, int c){ int j1,j2; for(int i =0;i<r;i++){ if(r==1&&c==1) break; j1=0; j2=c-1; a[i][j1]=a[i][j1]+a[i][j2]; a[i][j2]=a[i][j1]-a[i][j2]; a[i][j1]=a[i][j1]-a[i][j2]; } // Your code here for(int i = 0;i<r;i++){ for(int j = 0;j<c;j++){ System.out.print(a[i][j] + \" \"); } System.out.println(); } }}" }, { "code": null, "e": 5222, "s": 5220, "text": "0" }, { "code": null, "e": 5250, "s": 5222, "text": "mashhadihossain6 months ago" }, { "code": null, "e": 5271, "s": 5250, "text": "SIMPLE JAVA SOLUTION" }, { "code": null, "e": 5649, "s": 5271, "text": "static void interchange(int a[][],int r, int c){ for(int i=0;i<r;i++) { int temp=a[i][0]; a[i][0]=a[i][c-1]; a[i][c-1]=temp; } for(int i = 0;i<r;i++){ for(int j = 0;j<c;j++){ System.out.print(a[i][j] + \" \"); } System.out.println(); } }" }, { "code": null, "e": 5795, "s": 5649, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 5831, "s": 5795, "text": " Login to access your submissions. " }, { "code": null, "e": 5841, "s": 5831, "text": "\nProblem\n" }, { "code": null, "e": 5851, "s": 5841, "text": "\nContest\n" }, { "code": null, "e": 5914, "s": 5851, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 6062, "s": 5914, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 6270, "s": 6062, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 6376, "s": 6270, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
Next greater number set digits | Practice | GeeksforGeeks
Given a number n, find the smallest number that has the same set of digits as n and is greater than n. If n is the greatest possible number with its set of digits, report that it is not possible.

Example 1:
Input: N = 143
Output: 314
Explanation: Numbers possible with digits 1, 3 and 4 are: 134, 143, 314, 341, 413, 431. The first greater number after 143 is 314.

Example 2:
Input: N = 431
Output: not possible
Explanation: Numbers possible with digits 1, 3 and 4 are: 134, 143, 314, 341, 413, 431. Clearly, there's no number greater than 431.

Your Task:
You don't need to read input or print anything. Your task is to complete the function findNext() which takes an integer N as input and returns the smallest number greater than N with the same set of digits as N. If such a number is not possible, return -1.

Expected Time Complexity: O(LogN).
Expected Auxiliary Space: O(LogN).

Constraints:
1 ≤ N ≤ 100000

gaurabhkumarjha27102001 · 3 months ago
string s = to_string(N); // integer N to string s
int temp;
while (next_permutation(s.begin(), s.end())) {
    temp = stoi(s); // convert string s back to an integer with stoi
    if (temp > N)
        return temp;
}
return -1;

apoorvmishra · 4 months ago
void swap(char *a, char *b) { char temp = *a; *a = *b; *b = temp; }
int findNext(int N) {
    string arr = to_string(N);
    int i;
    for (i = arr.size() - 1; i >= 0; i--) {
        if (arr[i] > arr[i - 1])
            break;
    }
    if (i == 0)
        return -1;
    int ind = i;
    for (int j = arr.size() - 1; j > i; j--) {
        if (arr[j] > arr[i - 1]) {
            ind = j;
            break;
        }
    }
    swap(&arr[i - 1], &arr[ind]);
    sort(arr.begin() + i, arr.end());
    return stoi(arr);
}

chessnoobdj · 4 months ago
Easy C++:
int findNext(int N)
{
    string str = to_string(N);
    if (!next_permutation(str.begin(), str.end()))
        return -1;
    return stoi(str);
}

sureshrao1000001 · 5 months ago
class Solution {
public:
    int findNext(int n) {
        if (n == 1) return -1;
        string num = to_string(n);
        int i, j;
        for (i = num.length() - 1; i > 0; i--) {
            if (num[i] > num[i - 1]) break;
        }
        if (i == 0) return -1;
        for (j = num.length() - 1; j >= i; j--) {
            if (num[i - 1] < num[j]) {
                swap(num[i - 1], num[j]);
                break;
            }
        }
        reverse(num.begin() + i, num.end());
        return stoi(num);
    }
};

yoursnataraj · 6 months ago
string s = to_string(N);
for (int i = 1; i < s.length(); i++) {
    if (s[i - 1] >= s[i]) {
        if (i + 1 == s.length()) return -1;
    } else {
        break;
    }
}
stack<int> st;
for (int i = s.length() - 1; i >= 0; i--) {
    if (s[i - 1] >= s[i]) {
        st.push(i);
    } else {
        st.push(i);
        int t;
        while (!st.empty() && s[i - 1] < s[st.top()]) {
            t = st.top();
            st.pop();
        }
        swap(s[i - 1], s[t]);
        reverse(s.begin() + i, s.end());
        break;
    }
}
return stoi(s);
https://youtu.be/Nbi8lK80Rqk

chamoliabhishek007 · 7 months ago
from itertools import permutations
class Solution:
    def findNext(self, n):
        # your code here
        k = str(n)
        p = list(sorted(set(''.join(q) for q in permutations(k))))
        if p[len(p) - 1] > k:
            return p[p.index(k) + 1]
        return -1
# simple

Sumit Rathore · 10 months ago
o(log(n)) solution https://uploads.disquscdn.c...

Drigger · 1 year ago
for reference: https://www.geeksforgeeks.o...

Ankush Singh · 1 year ago
No doubt it is not an easy level question.
int findNext(int N) {
    // code here.
    string s = to_string(N);
    int t;
    while (next_permutation(s.begin(), s.end())) {
        t = stoi(s);
        if (t > N)
            return t;
    }
    return -1;
}
[ { "code": null, "e": 409, "s": 238, "text": "Given a number n, find the smallest number that has same set of digits as n and is greater than n. If n is the greatest possible number with its set of digits, report it." }, { "code": null, "e": 420, "s": 409, "text": "Example 1:" }, { "code": null, "e": 581, "s": 420, "text": "Input:\nN = 143\nOutput: 314\nExplanation: Numbers possible with digits \n1, 3 and 4 are: 134, 143, 314, 341, 413, 431.\nThe first greater number after 143 is 314.\n\n" }, { "code": null, "e": 596, "s": 581, "text": "​Example 2:" }, { "code": null, "e": 767, "s": 596, "text": "Input: \nN = 431\nOutput: not possible\nExplanation: Numbers possible with digits\n1, 3 and 4 are: 134, 143, 314, 341, 413, 431.\nClearly, there's no number greater than 431.\n" }, { "code": null, "e": 1037, "s": 767, "text": "\nYour Task:\nYou don't need to read input or print anything. Your task is to complete the function findNext () which takes an integer N as input and returns the smallest number greater than N with the same set of digits as N. If such a number is not possible, return -1." }, { "code": null, "e": 1108, "s": 1037, "text": "\nExpected Time Complexity: O(LogN).\nExpected Auxiliary Space: O(LogN)." }, { "code": null, "e": 1137, "s": 1108, "text": "\nConstraints:\n1 ≤ N ≤ 100000" }, { "code": null, "e": 1139, "s": 1137, "text": "0" }, { "code": null, "e": 1175, "s": 1139, "text": "gaurabhkumarjha271020013 months ago" }, { "code": null, "e": 1471, "s": 1175, "text": " string s= to_string (N);// integer N to string s\n int temp;\n \n while (next_permutation (s.begin(), s.end())){\n \n temp= stoi(s);// convert string s to integer temp use stoi\n \n if (temp > N)\n return temp;\n }\n \n return -1;" }, { "code": null, "e": 1473, "s": 1471, "text": "0" }, { "code": null, "e": 1498, "s": 1473, "text": "apoorvmishra4 months ago" }, { "code": null, "e": 2070, "s": 1498, "text": "void swap(char *a, char *b){ char temp = *a; *a = *b; *b = temp;} int findNext (int N) { string arr=to_string(N); int i; for(i=arr.size()-1;i>=0;i--){ //cout<<arr[i]; if(arr[i]>arr[i-1]) break; } //cout<<i<<endl; if(i==0) return -1; int ind=i; for(int j=arr.size()-1;j>i;j--){ if(arr[j]>arr[i-1]){ ind=j; break; } } //cout<<ind<<endl; swap(&arr[i-1],&arr[ind]); sort(arr.begin()+i,arr.end()); return stoi(arr); }" }, { "code": null, "e": 2072, "s": 2070, "text": "0" }, { "code": null, "e": 2096, "s": 2072, "text": "chessnoobdj4 months ago" }, { "code": null, "e": 2105, "s": 2096, "text": "Easy c++" }, { "code": null, "e": 2279, "s": 2105, "text": "int findNext (int N) \n {\n string str = to_string(N);\n if(!next_permutation(str.begin(), str.end()))\n return -1;\n return stoi(str);\n } " }, { "code": null, "e": 2281, "s": 2279, "text": "0" }, { "code": null, "e": 2310, "s": 2281, "text": "sureshrao10000015 months ago" }, { "code": null, "e": 2720, "s": 2310, "text": "class Solution{public:int findNext(int n) { if(n == 1) return -1; string num = to_string(n); int i, j; for(i = num.length()-1; i > 0; i--){ if(num[i] > num[i-1]) break; } if(i == 0) return -1; for(j = num.length()-1; j >= i; j--){ if(num[i-1] < num[j]) { swap(num[i-1], num[j]); break; } } reverse(num.begin()+i, num.end()); return stoi(num);}};" }, { "code": null, "e": 2722, "s": 2720, "text": "0" }, { "code": null, "e": 2746, "s": 2722, "text": "adnansheikh5 months ago" }, { "code": null, "e": 2748, "s": 2746, "text": "." 
}, { "code": null, "e": 2750, "s": 2748, "text": "0" }, { "code": null, "e": 2775, "s": 2750, "text": "yoursnataraj6 months ago" }, { "code": null, "e": 3448, "s": 2775, "text": " string s = to_string(N);\n for(int i=1;i<s.length();i++)\n {\n if(s[i-1]>=s[i])\n {\n if(i+1==s.length()) return -1;\n }\n else\n {\n break;\n }\n }\n stack<int> st;\n for(int i=s.length()-1;i>=0;i--)\n {\n if(s[i-1]>=s[i])\n {\n st.push(i);\n }\n else\n {\n st.push(i);\n int t;\n while(!st.empty()&&s[i-1]<s[st.top()])\n {\n t=st.top();\n st.pop();\n }\n swap(s[i-1],s[t]);\n reverse(s.begin()+i,s.end());\n break;\n }\n }\n return stoi(s);" }, { "code": null, "e": 3477, "s": 3448, "text": "https://youtu.be/Nbi8lK80Rqk" }, { "code": null, "e": 3479, "s": 3477, "text": "0" }, { "code": null, "e": 3510, "s": 3479, "text": "chamoliabhishek0077 months ago" }, { "code": null, "e": 3830, "s": 3510, "text": "from itertools import permutationsclass Solution: def findNext (self,n): # your code here k=str(n) permutation = [''.join(p) for p in permutations(k)] p=permutation p=list(sorted(set(p))) if (p[len(p)-1])>k: return p[p.index(k)+1] return -1#simple " }, { "code": null, "e": 3832, "s": 3830, "text": "0" }, { "code": null, "e": 3859, "s": 3832, "text": "Sumit Rathore10 months ago" }, { "code": null, "e": 3873, "s": 3859, "text": "Sumit Rathore" }, { "code": null, "e": 3922, "s": 3873, "text": "o(log(n)) solutionhttps://uploads.disquscdn.c..." }, { "code": null, "e": 3924, "s": 3922, "text": "0" }, { "code": null, "e": 3942, "s": 3924, "text": "Drigger1 year ago" }, { "code": null, "e": 3950, "s": 3942, "text": "Drigger" }, { "code": null, "e": 3996, "s": 3950, "text": "for reference: https://www.geeksforgeeks.o..." }, { "code": null, "e": 3998, "s": 3996, "text": "0" }, { "code": null, "e": 4021, "s": 3998, "text": "Ankush Singh1 year ago" }, { "code": null, "e": 4034, "s": 4021, "text": "Ankush Singh" }, { "code": null, "e": 4076, "s": 4034, "text": "No doubt it is not an easy level question" }, { "code": null, "e": 4317, "s": 4076, "text": "int findNext (int N) { //code here. string s=to_string(N);int t; while(next_permutation(s.begin(),s.end())) { t=stoi(s); if(t>N) return t; } return -1; }" }, { "code": null, "e": 4463, "s": 4317, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 4499, "s": 4463, "text": " Login to access your submissions. " }, { "code": null, "e": 4509, "s": 4499, "text": "\nProblem\n" }, { "code": null, "e": 4519, "s": 4509, "text": "\nContest\n" }, { "code": null, "e": 4582, "s": 4519, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 4730, "s": 4582, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 4938, "s": 4730, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 5044, "s": 4938, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
Print a matrix in alternate manner (left to right then right to left) - GeeksforGeeks
22 Apr, 2021

Given a 2D array, the task is to print the 2D array in an alternate manner (first row from left to right, second row from right to left, and so on).

Examples:

Input : arr[][2] = {{1, 2}, {2, 3}};
Output : 1 2 3 2

Input : arr[][3] = { { 7 , 2 , 3 }, { 2 , 3 , 4 }, { 5 , 6 , 1 }};
Output : 7 2 3 4 3 2 5 6 1

The solution is to run two nested loops and print the rows alternately from left to right and from right to left. We maintain a flag to see if the current row should be printed from left to right or right to left, and we toggle the flag after every iteration.

C++ Java Python 3 C# PHP Javascript

// C++ program to print matrix in alternate manner
#include<bits/stdc++.h>
using namespace std;
#define R 3
#define C 3

// Function for print matrix in alternate manner
void convert(int arr[R][C])
{
    bool leftToRight = true;
    for (int i=0; i<R; i++)
    {
        if (leftToRight)
        {
            for (int j=0; j<C; j++)
                printf("%d ", arr[i][j]);
        }
        else
        {
            for (int j=C-1; j>=0; j--)
                printf("%d ", arr[i][j]);
        }
        leftToRight = !leftToRight;
    }
}

// Driver code
int main()
{
    int arr[][C] = { { 1 , 2 , 3 },
                     { 3 , 2 , 1 },
                     { 4 , 5 , 6 }, };
    convert(arr);
    return 0;
}

// Java program to print matrix in alternate manner
class GFG {
    static final int R = 3;
    static final int C = 3;

    // Function for print matrix in alternate manner
    static void convert(int arr[][]) {
        boolean leftToRight = true;
        for (int i = 0; i < R; i++) {
            if (leftToRight) {
                for (int j = 0; j < C; j++) {
                    System.out.printf("%d ", arr[i][j]);
                }
            } else {
                for (int j = C - 1; j >= 0; j--) {
                    System.out.printf("%d ", arr[i][j]);
                }
            }
            leftToRight = !leftToRight;
        }
    }

    // Driver code
    static public void main(String[] args) {
        int arr[][] = { {1, 2, 3}, {3, 2, 1}, {4, 5, 6}, };
        convert(arr);
    }
}
// This code is contributed by Rajput-Ji

# Python 3 program to print matrix
# in alternate manner
R = 3
C = 3

# Function for print matrix
# in alternate manner
def convert(arr):
    leftToRight = True
    for i in range(R):
        if (leftToRight):
            for j in range(C):
                print(arr[i][j], end = " ")
        else:
            for j in range(C - 1, -1, -1):
                print(arr[i][j], end = " ")
        leftToRight = not leftToRight

# Driver code
if __name__ == "__main__":
    arr = [[ 1 , 2 , 3 ],
           [ 3 , 2 , 1 ],
           [ 4 , 5 , 6 ]]
    convert(arr)

# This code is contributed
# by ChitraNayal

// C# program to print matrix in alternate manner
using System;
public class GFG {
    static readonly int R = 3;
    static readonly int C = 3;

    // Function for print matrix in alternate manner
    static void convert(int [,]arr) {
        bool leftToRight = true;
        for (int i = 0; i < R; i++) {
            if (leftToRight) {
                for (int j = 0; j < C; j++) {
                    Console.Write(arr[i,j] + " ");
                }
            } else {
                for (int j = C - 1; j >= 0; j--) {
                    Console.Write(arr[i,j] + " ");
                }
            }
            leftToRight = !leftToRight;
        }
    }

    // Driver code
    static public void Main() {
        int [,]arr = { {1, 2, 3}, {3, 2, 1}, {4, 5, 6}, };
        convert(arr);
    }
}
// This code is contributed by Rajput-Ji

<?php
// PHP program to print matrix
// in alternate manner
$R = 3;
$C = 3;

// Function for print matrix
// in alternate manner
function convert($arr)
{
    global $R;
    global $C;
    $leftToRight = true;
    for ($i = 0; $i < $R; $i++)
    {
        if ($leftToRight)
        {
            for ($j = 0; $j < $C; $j++)
                echo $arr[$i][$j], " ";
        }
        else
        {
            for ($j = $C - 1; $j >= 0; $j--)
                echo $arr[$i][$j], " ";
        }
        $leftToRight = !$leftToRight;
    }
}

// Driver code
$arr = array(array(1 , 2 , 3 ),
             array(3 , 2 , 1 ),
             array(4 , 5 , 6 ));
convert($arr);

// This code is contributed by ajit
?>
i < R; i++) { if (leftToRight) { for (let j = 0; j < C; j++) { document.write(arr[i][j]+" "); } } else { for (let j = C - 1; j >= 0; j--) { document.write( arr[i][j]+" "); } } leftToRight = !leftToRight; } } // Driver code let arr =[[ 1 , 2 , 3 ], [ 3 , 2 , 1 ], [ 4 , 5 , 6 ]] convert(arr) // This code is contributed by avanitrachhadiya2155</script> Output: 1 2 3 1 2 3 4 5 6 Time Complexity : O(R*C) Space Complexity : O(1)This article is contributed by DANISH_RAZA. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. jit_t ukasp Rajput-Ji avanitrachhadiya2155 Matrix Matrix Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Divide and Conquer | Set 5 (Strassen's Matrix Multiplication) Efficiently compute sums of diagonals of a matrix Program to multiply two matrices Count all possible paths from top left to bottom right of a mXn matrix Printing all solutions in N-Queen Problem Min Cost Path | DP-6 Python program to multiply two matrices The Celebrity Problem Search in a row wise and column wise sorted matrix Real-time application of Data Structures
[ { "code": null, "e": 26352, "s": 26324, "text": "\n22 Apr, 2021" }, { "code": null, "e": 26498, "s": 26352, "text": "Given a 2D array, the task is to print the 2D in alternate manner (First row from left to right, then from right to left, and so on). Examples: " }, { "code": null, "e": 26722, "s": 26498, "text": "Input : arr[][2] = {{1, 2}\n {2, 3}}; \nOutput : 1 2 3 2 \n \nInput :arr[][3] = { { 7 , 2 , 3 },\n { 2 , 3 , 4 },\n { 5 , 6 , 1 }}; \nOutput : 7 2 3 4 3 2 5 6 1" }, { "code": null, "e": 26976, "s": 26724, "text": "The solution of this problem is that run two loops and and print row in left to right and right to left manners. We maintain a flag to see if current row should be printed from left to right or right to left. We toggle the flag after every iteration. " }, { "code": null, "e": 26980, "s": 26976, "text": "C++" }, { "code": null, "e": 26985, "s": 26980, "text": "Java" }, { "code": null, "e": 26994, "s": 26985, "text": "Python 3" }, { "code": null, "e": 26997, "s": 26994, "text": "C#" }, { "code": null, "e": 27001, "s": 26997, "text": "PHP" }, { "code": null, "e": 27012, "s": 27001, "text": "Javascript" }, { "code": "// C++ program to print matrix in alternate manner#include<bits/stdc++.h>using namespace std;#define R 3#define C 3 // Function for print matrix in alternate mannervoid convert(int arr[R][C]){ bool leftToRight = true; for (int i=0; i<R; i++) { if (leftToRight) { for (int j=0; j<C; j++) printf(\"%d \", arr[i][j]); } else { for (int j=C-1; j>=0; j--) printf(\"%d \",arr[i][j]); } leftToRight = !leftToRight; }} // Driver codeint main(){ int arr[][C] = { { 1 , 2 , 3 }, { 3 , 2 , 1 }, { 4 , 5 , 6 }, }; convert(arr); return 0;}", "e": 27686, "s": 27012, "text": null }, { "code": "//Java program to print matrix in alternate mannerclass GFG { static final int R = 3; static final int C = 3; // Function for print matrix in alternate manner static void convert(int arr[][]) { boolean leftToRight = true; for (int i = 0; i < R; i++) { if (leftToRight) { for (int j = 0; j < C; j++) { System.out.printf(\"%d \", arr[i][j]); } } else { for (int j = C - 1; j >= 0; j--) { System.out.printf(\"%d \", arr[i][j]); } } leftToRight = !leftToRight; } } // Driver code static public void main(String[] args) { int arr[][] = { {1, 2, 3}, {3, 2, 1}, {4, 5, 6},}; convert(arr); }} // This code is contributed by Rajput-Ji", "e": 28578, "s": 27686, "text": null }, { "code": "# Python 3 program to print matrix# in alternate mannerR = 3C = 3 # Function for print matrix# in alternate mannerdef convert(arr): leftToRight = True for i in range(R): if (leftToRight): for j in range(C): print(arr[i][j], end = \" \") else: for j in range(C - 1, -1, -1): print(arr[i][j], end = \" \") leftToRight = not leftToRight # Driver codeif __name__ == \"__main__\": arr =[[ 1 , 2 , 3 ], [ 3 , 2 , 1 ], [ 4 , 5 , 6 ]] convert(arr) # This code is contributed# by ChitraNayal", "e": 29166, "s": 28578, "text": null }, { "code": " //C# program to print matrix in alternate mannerusing System;public class GFG { static readonly int R = 3; static readonly int C = 3; // Function for print matrix in alternate manner static void convert(int [,]arr) { bool leftToRight = true; for (int i = 0; i < R; i++) { if (leftToRight) { for (int j = 0; j < C; j++) { Console.Write(arr[i,j]+\" \"); } } else { for (int j = C - 1; j >= 0; j--) { Console.Write(arr[i,j]+\" \"); } } leftToRight = !leftToRight; } } // Driver code static public void Main() { int [,]arr = { {1, 2, 3}, {3, 2, 1}, {4, 5, 6},}; convert(arr); }} // This code is contributed by 
Rajput-Ji", "e": 30059, "s": 29166, "text": null }, { "code": "<?php// PHP program to print matrix// in alternate manner$R = 3;$C = 3; // Function for print matrix// in alternate mannerfunction convert($arr){ global $R; global $C; $leftToRight = true; for ($i = 0; $i < $R; $i++) { if ($leftToRight) { for ($j = 0; $j < $C; $j++) echo $arr[$i][$j], \" \"; } else { for ($j = $C - 1; $j >= 0; $j--) echo $arr[$i][$j], \" \"; } $leftToRight = !$leftToRight; }} // Driver code$arr = array(array(1 , 2 , 3 ), array(3 , 2 , 1 ), array(4 , 5 , 6 )); convert($arr); // This code is contributed by ajit?>", "e": 30733, "s": 30059, "text": null }, { "code": "<script>// Javascript program to print matrix in alternate manner let R = 3; let C = 3; // Function for print matrix in alternate manner function convert(arr) { let leftToRight = true; for (let i = 0; i < R; i++) { if (leftToRight) { for (let j = 0; j < C; j++) { document.write(arr[i][j]+\" \"); } } else { for (let j = C - 1; j >= 0; j--) { document.write( arr[i][j]+\" \"); } } leftToRight = !leftToRight; } } // Driver code let arr =[[ 1 , 2 , 3 ], [ 3 , 2 , 1 ], [ 4 , 5 , 6 ]] convert(arr) // This code is contributed by avanitrachhadiya2155</script>", "e": 31515, "s": 30733, "text": null }, { "code": null, "e": 31525, "s": 31515, "text": "Output: " }, { "code": null, "e": 31543, "s": 31525, "text": "1 2 3 1 2 3 4 5 6" }, { "code": null, "e": 32015, "s": 31543, "text": "Time Complexity : O(R*C) Space Complexity : O(1)This article is contributed by DANISH_RAZA. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 32021, "s": 32015, "text": "jit_t" }, { "code": null, "e": 32027, "s": 32021, "text": "ukasp" }, { "code": null, "e": 32037, "s": 32027, "text": "Rajput-Ji" }, { "code": null, "e": 32058, "s": 32037, "text": "avanitrachhadiya2155" }, { "code": null, "e": 32065, "s": 32058, "text": "Matrix" }, { "code": null, "e": 32072, "s": 32065, "text": "Matrix" }, { "code": null, "e": 32170, "s": 32072, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 32232, "s": 32170, "text": "Divide and Conquer | Set 5 (Strassen's Matrix Multiplication)" }, { "code": null, "e": 32282, "s": 32232, "text": "Efficiently compute sums of diagonals of a matrix" }, { "code": null, "e": 32315, "s": 32282, "text": "Program to multiply two matrices" }, { "code": null, "e": 32386, "s": 32315, "text": "Count all possible paths from top left to bottom right of a mXn matrix" }, { "code": null, "e": 32428, "s": 32386, "text": "Printing all solutions in N-Queen Problem" }, { "code": null, "e": 32449, "s": 32428, "text": "Min Cost Path | DP-6" }, { "code": null, "e": 32489, "s": 32449, "text": "Python program to multiply two matrices" }, { "code": null, "e": 32511, "s": 32489, "text": "The Celebrity Problem" }, { "code": null, "e": 32562, "s": 32511, "text": "Search in a row wise and column wise sorted matrix" } ]
fsync() - Unix, Linux System Call
fsync, fdatasync - synchronize a file's in-core state with storage device

#include <unistd.h>
int fsync(int fd);
int fdatasync(int fd);

fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent storage device) where that file resides. The call blocks until the device reports that the transfer has completed. It also flushes metadata information associated with the file (see stat(2)).

Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed.

fdatasync() is similar to fsync(), but does not flush modified metadata unless that metadata is needed in order to allow a subsequent data retrieval to be correctly handled. For example, changes to st_atime or st_mtime (respectively, time of last access and time of last modification; see stat(2)) do not require flushing because they are not necessary for a subsequent data read to be handled correctly. On the other hand, a change to the file size (st_size, as made by say ftruncate(2)) would require a metadata flush.

The aim of fdatasync(2) is to reduce disk activity for applications that do not require all metadata to be synchronised with the disk.

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

If the underlying hard disk has write caching enabled, then the data may not really be on permanent storage when fsync() / fdatasync() return.

When an ext2 file system is mounted with the sync option, directory entries are also implicitly synced by fsync().

On kernels before 2.4, fsync() on big files can be inefficient. An alternative might be to use the O_SYNC flag to open(2).

POSIX.1-2001

bdflush (2), open (2), sync (2), sync_file_range (2), hdparm (8), mount (8), sync (8), update (8)
[ { "code": null, "e": 1466, "s": 1454, "text": "Unix - Home" }, { "code": null, "e": 1489, "s": 1466, "text": "Unix - Getting Started" }, { "code": null, "e": 1512, "s": 1489, "text": "Unix - File Management" }, { "code": null, "e": 1531, "s": 1512, "text": "Unix - Directories" }, { "code": null, "e": 1554, "s": 1531, "text": "Unix - File Permission" }, { "code": null, "e": 1573, "s": 1554, "text": "Unix - Environment" }, { "code": null, "e": 1596, "s": 1573, "text": "Unix - Basic Utilities" }, { "code": null, "e": 1619, "s": 1596, "text": "Unix - Pipes & Filters" }, { "code": null, "e": 1636, "s": 1619, "text": "Unix - Processes" }, { "code": null, "e": 1657, "s": 1636, "text": "Unix - Communication" }, { "code": null, "e": 1678, "s": 1657, "text": "Unix - The vi Editor" }, { "code": null, "e": 1700, "s": 1678, "text": "Unix - What is Shell?" }, { "code": null, "e": 1723, "s": 1700, "text": "Unix - Using Variables" }, { "code": null, "e": 1748, "s": 1723, "text": "Unix - Special Variables" }, { "code": null, "e": 1768, "s": 1748, "text": "Unix - Using Arrays" }, { "code": null, "e": 1791, "s": 1768, "text": "Unix - Basic Operators" }, { "code": null, "e": 1814, "s": 1791, "text": "Unix - Decision Making" }, { "code": null, "e": 1833, "s": 1814, "text": "Unix - Shell Loops" }, { "code": null, "e": 1853, "s": 1833, "text": "Unix - Loop Control" }, { "code": null, "e": 1880, "s": 1853, "text": "Unix - Shell Substitutions" }, { "code": null, "e": 1906, "s": 1880, "text": "Unix - Quoting Mechanisms" }, { "code": null, "e": 1929, "s": 1906, "text": "Unix - IO Redirections" }, { "code": null, "e": 1952, "s": 1929, "text": "Unix - Shell Functions" }, { "code": null, "e": 1972, "s": 1952, "text": "Unix - Manpage Help" }, { "code": null, "e": 1999, "s": 1972, "text": "Unix - Regular Expressions" }, { "code": null, "e": 2025, "s": 1999, "text": "Unix - File System Basics" }, { "code": null, "e": 2052, "s": 2025, "text": "Unix - User Administration" }, { "code": null, "e": 2078, "s": 2052, "text": "Unix - System Performance" }, { "code": null, "e": 2100, "s": 2078, "text": "Unix - System Logging" }, { "code": null, "e": 2125, "s": 2100, "text": "Unix - Signals and Traps" }, { "code": null, "e": 2148, "s": 2125, "text": "Unix - Useful Commands" }, { "code": null, "e": 2167, "s": 2148, "text": "Unix - Quick Guide" }, { "code": null, "e": 2192, "s": 2167, "text": "Unix - Builtin Functions" }, { "code": null, "e": 2212, "s": 2192, "text": "Unix - System Calls" }, { "code": null, "e": 2233, "s": 2212, "text": "Unix - Commands List" }, { "code": null, "e": 2255, "s": 2233, "text": "Unix Useful Resources" }, { "code": null, "e": 2273, "s": 2255, "text": "Computer Glossary" }, { "code": null, "e": 2284, "s": 2273, "text": "Who is Who" }, { "code": null, "e": 2319, "s": 2284, "text": "Copyright © 2014 by tutorialspoint" }, { "code": null, "e": 2393, "s": 2319, "text": "fsync, fdatasync - synchronize a file’s in-core state with storage device" }, { "code": null, "e": 2459, "s": 2393, "text": "#include <unistd.h> \nint fsync(int fd); \nint fdatasync(int fd); \n" }, { "code": null, "e": 2480, "s": 2459, "text": "\nint fsync(int fd); " }, { "code": null, "e": 2506, "s": 2480, "text": "\nint fdatasync(int fd); \n" }, { "code": null, "e": 2884, "s": 2506, "text": "fsync() transfers (\"flushes\") all modified in-core data of (i.e., modified buffer cache pages for) the\nfile referred to by the file descriptor fd to the disk device (or other permanent storage device)\nwhere that file resides.\nThe call blocks until 
the device reports that the transfer has completed.\nIt also flushes metadata information associated with the file (see\nstat(2))." }, { "code": null, "e": 3087, "s": 2884, "text": "Calling fsync() does not necessarily ensure\nthat the entry in the directory containing the file has also reached disk.\nFor that an explicit\nfsync() on a file descriptor for the directory is also needed." }, { "code": null, "e": 3613, "s": 3087, "text": "fdatasync() is similar to\nfsync(), but does not flush modified metadata unless that metadata\nis needed in order to allow a subsequent data retrieval to be\ncorrectly handled. For example, changes to st_atime or\nst_mtime (respectively, time of last access and\ntime of last modification; see stat(2)) do not not require flushing because they are not necessary for a subsequent data read to be handled correctly.\nOn the other hand, a change to the file size (st_size, as made by say ftruncate(2)), would require a metadata flush." }, { "code": null, "e": 3749, "s": 3613, "text": " The aim of fdatasync(2) is to reduce disk activity for applications that do not require all metadata to be synchronised with the disk." }, { "code": null, "e": 3838, "s": 3749, "text": "On success, zero is returned. On error, -1 is returned, and\nerrno is set appropriately." }, { "code": null, "e": 3981, "s": 3838, "text": "If the underlying hard disk has write caching enabled, then\nthe data may not really be on permanent storage when\nfsync() /\nfdatasync() return." }, { "code": null, "e": 4096, "s": 3981, "text": "When an ext2 file system is mounted with the\nsync option, directory entries are also implicitly synced by\nfsync()." }, { "code": null, "e": 4219, "s": 4096, "text": "On kernels before 2.4,\nfsync() on big files can be inefficient.\nAn alternative might be to use the\nO_SYNC flag to\nopen(2)." 
}, { "code": null, "e": 4232, "s": 4219, "text": "POSIX.1-2001" }, { "code": null, "e": 4244, "s": 4232, "text": "bdflush (2)" }, { "code": null, "e": 4256, "s": 4244, "text": "bdflush (2)" }, { "code": null, "e": 4265, "s": 4256, "text": "open (2)" }, { "code": null, "e": 4274, "s": 4265, "text": "open (2)" }, { "code": null, "e": 4283, "s": 4274, "text": "sync (2)" }, { "code": null, "e": 4292, "s": 4283, "text": "sync (2)" }, { "code": null, "e": 4312, "s": 4292, "text": "sync_file_range (2)" }, { "code": null, "e": 4332, "s": 4312, "text": "sync_file_range (2)" }, { "code": null, "e": 4343, "s": 4332, "text": "hdparm (8)" }, { "code": null, "e": 4354, "s": 4343, "text": "hdparm (8)" }, { "code": null, "e": 4364, "s": 4354, "text": "mount (8)" }, { "code": null, "e": 4374, "s": 4364, "text": "mount (8)" }, { "code": null, "e": 4383, "s": 4374, "text": "sync (8)" }, { "code": null, "e": 4392, "s": 4383, "text": "sync (8)" }, { "code": null, "e": 4403, "s": 4392, "text": "update (8)" }, { "code": null, "e": 4414, "s": 4403, "text": "update (8)" }, { "code": null, "e": 4431, "s": 4414, "text": "\nAdvertisements\n" }, { "code": null, "e": 4466, "s": 4431, "text": "\n 129 Lectures \n 23 hours \n" }, { "code": null, "e": 4494, "s": 4466, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 4528, "s": 4494, "text": "\n 5 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4545, "s": 4528, "text": " Frahaan Hussain" }, { "code": null, "e": 4578, "s": 4545, "text": "\n 35 Lectures \n 2 hours \n" }, { "code": null, "e": 4589, "s": 4578, "text": " Pradeep D" }, { "code": null, "e": 4624, "s": 4589, "text": "\n 41 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4640, "s": 4624, "text": " Musab Zayadneh" }, { "code": null, "e": 4673, "s": 4640, "text": "\n 46 Lectures \n 4 hours \n" }, { "code": null, "e": 4685, "s": 4673, "text": " GUHARAJANM" }, { "code": null, "e": 4717, "s": 4685, "text": "\n 6 Lectures \n 4 hours \n" }, { "code": null, "e": 4725, "s": 4717, "text": " Uplatz" }, { "code": null, "e": 4732, "s": 4725, "text": " Print" }, { "code": null, "e": 4743, "s": 4732, "text": " Add Notes" } ]
Groovy - Relational Operators
Relational operators allow the comparison of objects. Following are the relational operators available in Groovy −

The following code snippet shows how the various operators can be used.

class Example {
   static void main(String[] args) {
      def x = 5;
      def y = 10;
      def z = 8;

      if(x == y) {
         println("x is equal to y");
      } else
         println("x is not equal to y");

      if(z != y) {
         println("z is not equal to y");
      } else
         println("z is equal to y");

      if(z != y) {
         println("z is not equal to y");
      } else
         println("z is equal to y");

      if(z<y) {
         println("z is less than y");
      } else
         println("z is greater than y");

      if(x<=y) {
         println("x is less than y");
      } else
         println("x is greater than y");

      if(x>y) {
         println("x is greater than y");
      } else
         println("x is less than y");

      if(x>=y) {
         println("x is greater or equal to y");
      } else
         println("x is less than y");
   }
}

When we run the above program, we will get the following result. It can be seen that the results are as expected from the description of the operators as shown above.

x is not equal to y
z is not equal to y
z is not equal to y
z is less than y
x is less than y
x is less than y
x is less than y
[ { "code": null, "e": 2356, "s": 2238, "text": "Relational operators allow of the comparison of objects. Following are the relational operators available in Groovy −" }, { "code": null, "e": 2428, "s": 2356, "text": "The following code snippet shows how the various operators can be used." }, { "code": null, "e": 3376, "s": 2428, "text": "class Example { \n static void main(String[] args) { \n def x = 5;\n def y = 10;\n def z = 8;\n\t\t\n if(x == y) { \n println(\"x is equal to y\"); \n } else \n println(\"x is not equal to y\"); \n\t\t\t\n if(z != y) { \n println(\"z is not equal to y\"); \n } else \n println(\"z is equal to y\"); \n\t\t\t\t\n if(z != y) { \n println(\"z is not equal to y\"); \n } else \n println(\"z is equal to y\"); \n\t\t\t\t\t\n if(z<y) { \n println(\"z is less than y\"); \n } else \n println(\"z is greater than y\"); \n\t\t\t\t\t\t\n if(x<=y) { \n println(\"x is less than y\"); \n } else \n println(\"x is greater than y\"); \n\t\t\t\n if(x>y) { \n println(\"x is greater than y\"); \n } else \n println(\"x is less than y\"); \n\t\t\t\n if(x>=y) { \n println(\"x is greater or equal to y\"); \n } else \n println(\"x is less than y\"); \n } \n} " }, { "code": null, "e": 3543, "s": 3376, "text": "When we run the above program, we will get the following result. It can be seen that the results are as expected from the description of the operators as shown above." }, { "code": null, "e": 3678, "s": 3543, "text": "x is not equal to y \nz is not equal to y \nz is not equal to y \nz is less than y\nx is less than y \nx is less than y \nx is less than y \n" }, { "code": null, "e": 3711, "s": 3678, "text": "\n 52 Lectures \n 8 hours \n" }, { "code": null, "e": 3729, "s": 3711, "text": " Krishna Sakinala" }, { "code": null, "e": 3764, "s": 3729, "text": "\n 49 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3782, "s": 3764, "text": " Packt Publishing" }, { "code": null, "e": 3789, "s": 3782, "text": " Print" }, { "code": null, "e": 3800, "s": 3789, "text": " Add Notes" } ]
How to declare global Variables in JavaScript?
A global variable has global scope. The scope of a variable is the region of your program in which it is defined. JavaScript variables have only two scopes. Global Variables − A global variable has global scope which means it can be defined anywhere in your JavaScript code. Local Variables − A local variable will be visible only within a function where it is defined. Function parameters are always local to that function. Within the body of a function, a local variable takes precedence over a global variable with the same name. If you declare a local variable or function parameter with the same name as a global variable, you effectively hide the global variable.Here’s how you can declare a global variable − <html> <body onload = checkscope();> <script> <!-- var myVar = "global"; // Declare a global variable function checkscope( ) { var myVar = "local"; // Declare a local variable document.write(myVar); } //--> </script> </body> </html>
[ { "code": null, "e": 1219, "s": 1062, "text": "A global variable has global scope. The scope of a variable is the region of your program in which it is defined. JavaScript variables have only two scopes." }, { "code": null, "e": 1337, "s": 1219, "text": "Global Variables − A global variable has global scope which means it can be defined anywhere in your JavaScript code." }, { "code": null, "e": 1487, "s": 1337, "text": "Local Variables − A local variable will be visible only within a function where it is defined. Function parameters are always local to that function." }, { "code": null, "e": 1778, "s": 1487, "text": "Within the body of a function, a local variable takes precedence over a global variable with the same name. If you declare a local variable or function parameter with the same name as a global variable, you effectively hide the global variable.Here’s how you can declare a global variable −" }, { "code": null, "e": 2115, "s": 1778, "text": "<html>\n <body onload = checkscope();>\n <script>\n <!--\n var myVar = \"global\"; // Declare a global variable\n function checkscope( ) {\n var myVar = \"local\"; // Declare a local variable\n document.write(myVar);\n }\n //-->\n </script>\n </body>\n</html>" } ]
Sort id and reverse the items with MongoDB
The $natural returns the documents in natural order. To reverse the items, use $natural:-1. Let us create a collection with documents − > db.demo710.insertOne({id:101,Name:"Robert"}); { "acknowledged" : true, "insertedId" : ObjectId("5ea83a855d33e20ed1097b7a") } > db.demo710.insertOne({id:102,Name:"Carol"}); { "acknowledged" : true, "insertedId" : ObjectId("5ea83a8d5d33e20ed1097b7b") } > db.demo710.insertOne({id:103,Name:"Mike"}); { "acknowledged" : true, "insertedId" : ObjectId("5ea83a935d33e20ed1097b7c") } > db.demo710.insertOne({id:104,Name:"Sam"}); { "acknowledged" : true, "insertedId" : ObjectId("5ea83a9b5d33e20ed1097b7d") } Display all documents from a collection with the help of find() method − > db.demo710.find(); This will produce the following output − { "_id" : ObjectId("5ea83a855d33e20ed1097b7a"), "id" : 101, "Name" : "Robert" } { "_id" : ObjectId("5ea83a8d5d33e20ed1097b7b"), "id" : 102, "Name" : "Carol" } { "_id" : ObjectId("5ea83a935d33e20ed1097b7c"), "id" : 103, "Name" : "Mike" } { "_id" : ObjectId("5ea83a9b5d33e20ed1097b7d"), "id" : 104, "Name" : "Sam" } Following is the query to sort and reverse the items − > db.demo710.find().sort({$natural:-1}); This will produce the following output − { "_id" : ObjectId("5ea83a9b5d33e20ed1097b7d"), "id" : 104, "Name" : "Sam" } { "_id" : ObjectId("5ea83a935d33e20ed1097b7c"), "id" : 103, "Name" : "Mike" } { "_id" : ObjectId("5ea83a8d5d33e20ed1097b7b"), "id" : 102, "Name" : "Carol" } { "_id" : ObjectId("5ea83a855d33e20ed1097b7a"), "id" : 101, "Name" : "Robert" }
[ { "code": null, "e": 1198, "s": 1062, "text": "The $natural returns the documents in natural order. To reverse the items, use $natural:-1. Let us create a collection with documents −" }, { "code": null, "e": 1724, "s": 1198, "text": "> db.demo710.insertOne({id:101,Name:\"Robert\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5ea83a855d33e20ed1097b7a\")\n}\n> db.demo710.insertOne({id:102,Name:\"Carol\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5ea83a8d5d33e20ed1097b7b\")\n}\n> db.demo710.insertOne({id:103,Name:\"Mike\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5ea83a935d33e20ed1097b7c\")\n}\n> db.demo710.insertOne({id:104,Name:\"Sam\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5ea83a9b5d33e20ed1097b7d\")\n}" }, { "code": null, "e": 1797, "s": 1724, "text": "Display all documents from a collection with the help of find() method −" }, { "code": null, "e": 1818, "s": 1797, "text": "> db.demo710.find();" }, { "code": null, "e": 1859, "s": 1818, "text": "This will produce the following output −" }, { "code": null, "e": 2173, "s": 1859, "text": "{ \"_id\" : ObjectId(\"5ea83a855d33e20ed1097b7a\"), \"id\" : 101, \"Name\" : \"Robert\" }\n{ \"_id\" : ObjectId(\"5ea83a8d5d33e20ed1097b7b\"), \"id\" : 102, \"Name\" : \"Carol\" }\n{ \"_id\" : ObjectId(\"5ea83a935d33e20ed1097b7c\"), \"id\" : 103, \"Name\" : \"Mike\" }\n{ \"_id\" : ObjectId(\"5ea83a9b5d33e20ed1097b7d\"), \"id\" : 104, \"Name\" : \"Sam\" }" }, { "code": null, "e": 2228, "s": 2173, "text": "Following is the query to sort and reverse the items −" }, { "code": null, "e": 2269, "s": 2228, "text": "> db.demo710.find().sort({$natural:-1});" }, { "code": null, "e": 2310, "s": 2269, "text": "This will produce the following output −" }, { "code": null, "e": 2624, "s": 2310, "text": "{ \"_id\" : ObjectId(\"5ea83a9b5d33e20ed1097b7d\"), \"id\" : 104, \"Name\" : \"Sam\" }\n{ \"_id\" : ObjectId(\"5ea83a935d33e20ed1097b7c\"), \"id\" : 103, \"Name\" : \"Mike\" }\n{ \"_id\" : ObjectId(\"5ea83a8d5d33e20ed1097b7b\"), \"id\" : 102, \"Name\" : \"Carol\" }\n{ \"_id\" : ObjectId(\"5ea83a855d33e20ed1097b7a\"), \"id\" : 101, \"Name\" : \"Robert\" }" } ]
Composite Transformation in 2-D graphics - GeeksforGeeks
06 Aug, 2021

Prerequisite – Basic types of 2-D Transformation:

Translation
Scaling
Rotation
Reflection
Shearing of a 2-D object

Composite Transformation : As the name suggests, here we combine two or more transformations into one single transformation that is equivalent to the individual transformations performed one after another on a 2-D object.

Example : Consider a 2-D object on which we first apply transformation T1 (a 2-D matrix condition) and then apply transformation T2, so that the object gets transformed. The very same effect on the 2-D object can be obtained by multiplying T1 and T2 with each other and then applying T12 (the resultant of T1 × T2) to the coordinates of the 2-D image to get the transformed final image.

Problem : Consider a square O(0, 0), B(4, 0), C(4, 4), D(0, 4) on which we first apply T1 (scaling transformation) with scaling factors Sx = Sy = 0.5, then apply T2 (rotation transformation in the clockwise direction) by 90°, and finally perform T3 (reflection transformation about the origin).

Ans : The square O, B, C, D looks like:

Square_given(Fig.1)

First, we perform the scaling transformation on the 2-D object:

Representation of the scaling condition:

For coordinate O(0, 0) :
For coordinate B(4, 0) :
For coordinate C(4, 4) :
For coordinate D(0, 4) :

2-D object after scaling:

Fig.2

*Now, we'll perform the rotation transformation in the clockwise direction on Fig.2 by 90°:

The condition for the rotation transformation of a 2-D object about the origin is:

For coordinate O(0, 0) :
For coordinate B(2, 0) :
For coordinate C(2, 2) :
For coordinate D(0, 2) :

2-D object after rotating about the origin by a 90° angle:

Fig.3

*Now, we'll perform the third and last operation on Fig.3, reflecting it about the origin:

The condition for reflecting an object about the origin is:

For coordinate O(0, 0) :
For coordinate B'(0, 0) :
For coordinate C'(0, 0) :
For coordinate D'(0, 0) :

The final 2-D object after reflecting about the origin:

Fig.4

Note : The final result in Fig.4 is what we get after applying all the transformations one after another in a serial manner. We could also get the same result by combining all the transformation 2-D matrix conditions, multiplying them with each other to get a resultant matrix (R), and then applying that resultant 2-D matrix (R) to each coordinate of the given square (above). You would get the same result as in Fig.4.

Solution using Composite transformation :

*First we multiply the 2-D matrix conditions of the scaling transformation with the rotation transformation:

*Now, we multiply the resultant 2-D matrix (R1) with the third and last given reflection condition of transformation (R2) to get the resultant (R):

Now, we apply the resultant (R) 2-D matrix to each coordinate of the given object (square) to get the final transformed or modified object.

First transformed coordinate O(0, 0) is :
Second transformed coordinate B'(4, 0) is :
Third transformed coordinate C'(4, 4) is :
Fourth transformed coordinate D'(0, 4) is :

The final result of the transformed object is the same as above:
[ { "code": null, "e": 24443, "s": 24415, "text": "\n06 Aug, 2021" }, { "code": null, "e": 24496, "s": 24443, "text": "Prerequisite – Basic types of 2-D Transformation : " }, { "code": null, "e": 24557, "s": 24496, "text": "TranslationScalingRotationReflectionShearing of a 2-D object" }, { "code": null, "e": 24569, "s": 24557, "text": "Translation" }, { "code": null, "e": 24577, "s": 24569, "text": "Scaling" }, { "code": null, "e": 24586, "s": 24577, "text": "Rotation" }, { "code": null, "e": 24597, "s": 24586, "text": "Reflection" }, { "code": null, "e": 24622, "s": 24597, "text": "Shearing of a 2-D object" }, { "code": null, "e": 24860, "s": 24622, "text": "Composite Transformation : As the name suggests itself Composition, here we combine two or more transformations into one single transformation that is equivalent to the transformations that are performed one after one over a 2-D object. " }, { "code": null, "e": 25323, "s": 24860, "text": "Example : Consider we have a 2-D object on which we first apply transformation T1 (2-D matrix condition) and then we apply transformation T2(2-D matrix condition) over the 2-D object and the object get transformed, the very equivalent effect over the 2-D object we can obtain by multiplying T1 & T2 (2-D matrix conditions) with each other and then applying the T12 (resultant of T1 X T2) with the coordinates of the 2-D image to get the transformed final image. " }, { "code": null, "e": 25630, "s": 25323, "text": "Problem : Consider we have a square O(0, 0), B(4, 0), C(4, 4), D(0, 4) on which we first apply T1(scaling transformation) given scaling factor is Sx=Sy=0.5 and then we apply T2(rotation transformation in clockwise direction) it by 90*(angle), in last we perform T3(reflection transformation about origin). 
" }, { "code": null, "e": 25673, "s": 25630, "text": "Ans : The square O, A, C, D looks like : " }, { "code": null, "e": 25693, "s": 25673, "text": "Square_given(Fig.1)" }, { "code": null, "e": 25755, "s": 25693, "text": "First, we perform scaling transformation over a 2-D object : " }, { "code": null, "e": 25795, "s": 25755, "text": "Representation of scaling condition : " }, { "code": null, "e": 25821, "s": 25795, "text": "For coordinate O(0, 0) : " }, { "code": null, "e": 25847, "s": 25821, "text": "For coordinate B(4, 0) : " }, { "code": null, "e": 25873, "s": 25847, "text": "For coordinate C(4, 4) : " }, { "code": null, "e": 25899, "s": 25873, "text": "For coordinate D(0, 4) : " }, { "code": null, "e": 25928, "s": 25899, "text": "2-D object after scaling : " }, { "code": null, "e": 25934, "s": 25928, "text": "Fig.2" }, { "code": null, "e": 26019, "s": 25934, "text": "*Now, we’ll perform rotation transformation in clockwise-direction on Fig.2 by 90θ: " }, { "code": null, "e": 26092, "s": 26019, "text": "The condition of rotation transformation of 2-D object about origin is :" }, { "code": null, "e": 26118, "s": 26092, "text": "For coordinate O(0, 0) : " }, { "code": null, "e": 26144, "s": 26118, "text": "For coordinate B(2, 0) : " }, { "code": null, "e": 26170, "s": 26144, "text": "For coordinate C(2, 2) : " }, { "code": null, "e": 26196, "s": 26170, "text": "For coordinate D(0, 2) : " }, { "code": null, "e": 26252, "s": 26196, "text": "2-D object after rotating about origin by 90* angle : " }, { "code": null, "e": 26258, "s": 26252, "text": "Fig.3" }, { "code": null, "e": 26342, "s": 26258, "text": "*Now, we’ll perform third last operation on Fig.3, by reflecting it about origin : " }, { "code": null, "e": 26398, "s": 26342, "text": "The condition of reflecting an object about origin is " }, { "code": null, "e": 26424, "s": 26398, "text": "For coordinate O(0, 0) : " }, { "code": null, "e": 26451, "s": 26424, "text": "For coordinate B'(0, 0) : " }, { "code": null, "e": 26478, "s": 26451, "text": "For coordinate C'(0, 0) : " }, { "code": null, "e": 26505, "s": 26478, "text": "For coordinate D'(0, 0) : " }, { "code": null, "e": 26567, "s": 26505, "text": "The final 2-D object after reflecting about origin, we get : " }, { "code": null, "e": 26575, "s": 26569, "text": "Fig.4" }, { "code": null, "e": 27012, "s": 26575, "text": "Note : The above finale result of Fig.4, that we get after applying all transformation one after one in a serial manner. We could also get the same result by combining all the transformation 2-D matrix conditions together and multiplying each other and get a resultant of multiplication(R). Then, applying that 2D-resultant matrix(R) at each coordinate of the given square(above). So, you will get the same result as you have in Fig.4. " }, { "code": null, "e": 27155, "s": 27012, "text": "Solution using Composite transformation : *First we multiplied 2-D matrix conditions of Scaling transformation with Rotation transformation : " }, { "code": null, "e": 27292, "s": 27155, "text": "*Now, we multiplied Resultant 2-D matrix(R1) with the third last given Reflecting condition of transformation(R2) to get Resultant(R) : " }, { "code": null, "e": 27439, "s": 27292, "text": "Now, we’ll applied the Resultant(R) of 2d-matrix at each coordinate of the given object (square) to get the final transformed or modified object. 
" }, { "code": null, "e": 27482, "s": 27439, "text": "First transformed coordinate O(0, 0) is : " }, { "code": null, "e": 27528, "s": 27482, "text": "Second, transformed coordinate B'(4, 0) is : " }, { "code": null, "e": 27572, "s": 27528, "text": "Third transformed coordinate C'(4, 4) is : " }, { "code": null, "e": 27617, "s": 27572, "text": "Fourth transformed coordinate D'(0, 4) is : " }, { "code": null, "e": 27699, "s": 27617, "text": "The final result of the transformed object that you get would be same as above : " }, { "code": null, "e": 27716, "s": 27703, "text": "singghakshay" }, { "code": null, "e": 27730, "s": 27716, "text": "sumitgumber28" }, { "code": null, "e": 27748, "s": 27730, "text": "computer-graphics" }, { "code": null, "e": 27753, "s": 27748, "text": "Misc" }, { "code": null, "e": 27758, "s": 27753, "text": "Misc" }, { "code": null, "e": 27763, "s": 27758, "text": "Misc" }, { "code": null, "e": 27861, "s": 27763, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27870, "s": 27861, "text": "Comments" }, { "code": null, "e": 27883, "s": 27870, "text": "Old Comments" }, { "code": null, "e": 27928, "s": 27883, "text": "Find all factors of a natural number | Set 1" }, { "code": null, "e": 27962, "s": 27928, "text": "How to write Regular Expressions?" }, { "code": null, "e": 27995, "s": 27962, "text": "fgets() and gets() in C language" }, { "code": null, "e": 28076, "s": 27995, "text": "Minimax Algorithm in Game Theory | Set 3 (Tic-Tac-Toe AI - Finding optimal move)" }, { "code": null, "e": 28093, "s": 28076, "text": "Association Rule" }, { "code": null, "e": 28113, "s": 28093, "text": "Recursive Functions" }, { "code": null, "e": 28154, "s": 28113, "text": "Software Engineering | Prototyping Model" }, { "code": null, "e": 28191, "s": 28154, "text": "Java Math min() method with Examples" }, { "code": null, "e": 28230, "s": 28191, "text": "Set add() method in Java with Examples" } ]
Automate Creating a New GitHub Repository with “Gitstart” | by Shinichi Okada | Towards Data Science
[Update: 2021–06–09] [Update: 2021–05–23] [Updated: 2021–4–17]

(Homebrew version and the newer version are available.)

Gitstart will remove all the hassle when creating a new GitHub repository. After creating a repository at GitHub, you have to type the following as a standard procedure:

Line 1: Adding “My Repo” to the README markdown file.
Line 2: Creating a new Git repository.
Line 3: Adding the README.md in the working directory to the staging area.
Line 4: Saving your changes to the local repository.
Line 5: Creating a branch “main”.
Line 6: Adding the remote where your repository is stored.
Line 7: Uploading the local repository content to a remote repository.

I created a bash script called Gitstart which automates the above workflow and adds .gitignore, README, and license.txt to your repo. Gitstart will create .gitignore and the template README, gitignore, and license file. Then it will add, commit, and push them to your GitHub account.

GitHub CLI

You need GitHub CLI. It is available for macOS, Linux, and Windows.

gh is GitHub on the command line. It brings pull requests, issues, and other GitHub concepts to the terminal ... — GitHub CLI

GitHub CLI has auth, repo, and other methods. It allows us to log in and create a GitHub repo on the command line. Once you have installed GitHub CLI, you can log in to your GitHub account.

$ gh auth login
? What account do you want to log into? GitHub.com
- Logging into github.com
? How would you like to authenticate? Login with a web browser
! First copy your one-time code: 1111-2222
- Press Enter to open github.com in your browser...
✓ Authentication complete. Press Enter to continue...
? Choose default git protocol SSH
- gh config set -h github.com git_protocol ssh
✓ Configured git protocol
✓ Logged in as shinokada

Please select SSH as the default git protocol.

yq

GitHub CLI creates a YAML file at ~/.config/gh/hosts.yml. We use yq to read the username from hosts.yml. The yq is a lightweight and portable command-line YAML processor.

$ brew install yq

If you use Homebrew:

brew tap shinokada/gitstart && brew install gitstart

After installing Awesome package manager:

awesome install shinokada/gitstart

Download the Gitstart file or clone the repo. I recommend creating the ~/bin directory and moving the script to this directory.

$ mkdir ~/bin
$ mv ./gitstart ~/bin

Add ~/bin to your PATH in ~/.bashrc or ~/.zshrc, so that you can run the script from anywhere.

export PATH="~/bin:$PATH"

Make the file executable.

$ cd ~/bin
$ chmod 755 gitstart

Create a new directory for your new project and then run gitstart. It will ask "Visibility" and "This will create 'your_repo' in your current directory. Continue?". When you add a language like gitstart -l python, it will search for a gitignore file on the GitHub official site. On your terminal:

$ gitstart -d ./my-repo

This will create a directory and a GitHub repo called my-repo. You can cd to a directory and run gitstart -d . -l python.

$ cd my_new_repo
$ gitstart -d . -l python
>>> Your github username is shinokada.
Select a license:
1) MIT: I want it simple and permissive.
2) Apache License 2.0: I need to work in a community.
3) GNU GPLv3: I care about sharing improvements.
4) Quit
Your lisence: 2
Apache
>>> Creating .gitignore for Python...

You need to select one of the licenses.

Outputs: The script will:

Read your license number and create a license.txt.
Read your GitHub user from ~/.config/gh/hosts.yml.
If a language is provided, it will check at the GitHub gitignore site whether the language exists or not.
If the status is 200, create a .gitignore file.
Use the directory name as the GitHub repo name.
Run git init and create a README.md file with the repo name.
Add README.md and commit with a message.
If you are logged in, create a new repo at GitHub.com.
Add the remote and push all the files in the directory.

The following line stores your GitHub username in user.

user=$(cat <"$HOME"/.config/gh/hosts.yml | yq r - '"github.com".user')

The following will store the HTTP status code in http_status.

url=https://www.google.com/
http_status=$(curl --write-out '%{http_code}' --silent --output /dev/null "$url")

ShellCheck is a shell script static analysis tool. It points out and clarifies typical syntax issues that cause a shell to give cryptic error messages. You can find installation instructions here. I used it to improve my bash syntax.

$ cd ~/bin
$ shellcheck gitstart

In gitstart line 7:
user=$(cat ~/.config/gh/hosts.yml | yq r - '"github.com".user')
    ^--------------------^ SC2002: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead.

In gitstart line 9:
repo=$(basename $dir) # bin
               ^--^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
repo=$(basename "$dir") # bin

In gitstart line 24:
gh repo create $repo
              ^---^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
gh repo create "$repo"

In gitstart line 25:
git remote add origin git@github.com:$user/$repo.git
                                    ^---^ SC2086: Double quote to prevent globbing and word splitting.
                                          ^---^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
git remote add origin git@github.com:"$user"/"$repo".git

For more information:
https://www.shellcheck.net/wiki/SC2086 -- Double quote to prevent globbing ...
https://www.shellcheck.net/wiki/SC2002 -- Useless cat. Consider 'cmd < file...

It prints out suggestions on how to fix each problem.

I hope this is useful for your next project and speeds up your workflow. With the help of GitHub CLI and yq, we are able to automate tedious repetition. Bash scripts are a great tool to automate your workflow.
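As a rough illustration of those two building blocks, here is a minimal Python sketch (my own, not part of gitstart) that reads the logged-in username from gh's hosts.yml and probes GitHub for a gitignore template by HTTP status. It assumes the third-party PyYAML package, and the raw.githubusercontent.com URL layout for the github/gitignore repository is also an assumption.

import os
import urllib.error
import urllib.request

import yaml  # pip install pyyaml

def gh_username():
    # gh stores the logged-in user under the "github.com" key of hosts.yml
    path = os.path.expanduser("~/.config/gh/hosts.yml")
    with open(path) as f:
        hosts = yaml.safe_load(f)
    return hosts["github.com"]["user"]

def gitignore_exists(language):
    # Assumed URL layout of the github/gitignore repository
    url = f"https://raw.githubusercontent.com/github/gitignore/main/{language}.gitignore"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200  # 200 means a template exists
    except urllib.error.HTTPError:
        return False

print(gh_username())
print(gitignore_exists("Python"))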
Building a Translation System In Minutes | by Ceshine Lee | Towards Data Science
Sequence-to-sequence (seq2seq)[1] is a versatile structure and capable of many things (language translation, text summarization[2], video captioning[3], etc.). For a short introduction to seq2seq, here are some good posts: [4][5].

Sean Robertson's tutorial notebook[6] and Jeremy Howard's lectures [7][8] are great starting points to get a firm grasp on the technical details of seq2seq. However, I'd try to avoid implementing all these details myself when dealing with real-world problems. It's usually not a good idea to reinvent the wheel, especially when you're very new to this field. I've found that the OpenNMT project is very active, has good documentation, and can be used out-of-the-box:

opennmt.net

There also are some more general frameworks (for example, [9]), but they may need some customization to work on your specific problem.

There are two official versions of OpenNMT:

OpenNMT-Lua (a.k.a. OpenNMT): the main project developed with LuaTorch. Optimized and stable code for production and large scale experiments.

OpenNMT-py: light version of OpenNMT using PyTorch. Initially created by the Facebook AI research team as a sample project for PyTorch, this version is easier to extend and is suited for research purposes, but does not include all features.

We're going to use the PyTorch version in the following sections. We will walk you through the steps needed to create a very basic translation system with a medium-sized dataset.

Clone the OpenNMT-py git repository on Github into a local folder:

github.com

You might want to fork the repository on Github if you're planning to customize or extend it later. Also it is suggested in the README: Codebase is nearing a stable 0.1 version. We currently recommend forking if you want stable code.

Here we're going to use the dataset from the AI Challenger — English-Chinese Machine Translation competition. It is a dataset with 10 million English-Chinese sentence pairs. The English corpora are conversational English extracted from English learning websites and movie subtitles. From my understanding, most of the translations are submitted by enthusiasts, not necessarily professionals. The translated Chinese sentences are checked by human annotators.

challenger.ai

Downloading the dataset requires account sign-up and possibly ID verification (can't remember whether the latter is mandatory). If that's a problem for you, you can try datasets from WMT17.

There are some problems with the AI Challenger dataset: 1. The quality of the translation is not consistent. 2. Because many of the sentences are from movie subtitles, the translations are often context-dependent (related to the previous or the next sentence). However, there is no context information available in the dataset.

Let's see how the out-of-the-box model performs on this dataset. Because of memory restrictions, I down-sampled the dataset to 1 million sentences.

(We'll assume that you put the dataset into folder challenger under the OpenNMT root directory.)

The validation and test datasets come in XML format. We need to convert them to plain text files where a line consists of a single sentence. A simple way to do that is using BeautifulSoup. Here's a sample chunk of code:

with open(input_file, "r") as f:
    soup = BeautifulSoup(f.read(), "lxml")
    lines = [
        (int(x["id"]), x.text.strip())
        for x in soup.findAll("seg")]
    # Ensure the same order
    lines = sorted(lines, key=lambda x: x[0])

The input sentence must be tokenized with tokens space-separated. For English, there are a few tokenizers to choose from.
One example is nltk.tokenize.word_tokenize:

with open(output_file, "w") as f:
    f.write(
        "\n".join([
            " ".join(word_tokenize(l[1]))
            for l in lines
        ])
    )

It turns “It’s a neat one — two. Walker to Burton.” into “It ‘s a neat one — two . Walker to Burton .”.

For Chinese, we use the simplest character-level tokenization, that is, treat each character as a token:

with open(output_file, "w") as f:
    f.write(
        "\n".join([
            " ".join([c if c != " " else "<s>" for c in l[1]])
            for l in lines
        ])
    )

It turns “我就一天24小时都得在她眼皮子底下。” into “我 就 一 天 2 4 小 时 都 得 在 她 眼 皮 子 底 下 。”. (Note: because the tokens are space-separated, we need a special token “<s>” to represent the space characters.)

(I didn't provide full code for steps 3 and 4 because it's really beginner-level Python programming. You should be able to complete these tasks by yourself.)

Simply run the following command in the root directory:

python preprocess.py -train_src challenger/train.en.sample \
    -train_tgt challenger/train.zh.sample \
    -valid_src challenger/valid.en \
    -valid_tgt challenger/valid.zh \
    -save_data challenger/opennmt -report_every 10000

The preprocessing script will go through the dataset, keep track of token frequencies, and construct a vocabulary list. I ran into a memory problem here and had to down-sample the training dataset to 1 million rows, but I think the raw dataset should fit into 16GB memory with some optimization.

python train.py -data challenger/opennmt \
    -save_model models/baseline -gpuid 0 \
    -learning_rate 0.001 -opt adam -epochs 20

It'll use your first GPU to train a model. The default model structure is:

NMTModel (
  (encoder): RNNEncoder (
    (embeddings): Embeddings (
      (make_embedding): Sequential (
        (emb_luts): Elementwise (
          (0): Embedding(50002, 500, padding_idx=1)
        )
      )
    )
    (rnn): LSTM(500, 500, num_layers=2, dropout=0.3)
  )
  (decoder): InputFeedRNNDecoder (
    (embeddings): Embeddings (
      (make_embedding): Sequential (
        (emb_luts): Elementwise (
          (0): Embedding(6370, 500, padding_idx=1)
        )
      )
    )
    (dropout): Dropout (p = 0.3)
    (rnn): StackedLSTM (
      (dropout): Dropout (p = 0.3)
      (layers): ModuleList (
        (0): LSTMCell(1000, 500)
        (1): LSTMCell(500, 500)
      )
    )
    (attn): GlobalAttention (
      (linear_in): Linear (500 -> 500)
      (linear_out): Linear (1000 -> 500)
      (sm): Softmax ()
      (tanh): Tanh ()
    )
  )
  (generator): Sequential (
    (0): Linear (500 -> 6370)
    (1): LogSoftmax ()
  )
)

The vocabulary size of the source and target corpora is 50,002 and 6,370, respectively. The source vocabulary is obviously truncated to 50,000. The target vocabulary is relatively small because there are not that many common Chinese characters.

python translate.py \
    -model models/baseline_acc_58.79_ppl_7.51_e14 \
    -src challenger/valid.en -tgt challenger/valid.zh \
    -output challenger/valid_pred.58.79 -gpu 0 -replace_unk

Replace models/baseline_acc_58.79_ppl_7.51_e14 with your own model. The model naming should be obvious: this is a model after 14 epochs of training, with 58.79 accuracy and 7.51 perplexity on the validation set.

You can also calculate the BLEU score with the following:

wget https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/generic/multi-bleu.perl
perl multi-bleu.perl challenger/valid.zh \
    < challenger/valid_pred.58.79

Now you have a working translation system!

If you want to submit the translation to AI Challenger, you need to reverse Step 4 and then Step 3. Again, they should be quite simple to implement.
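For completeness, here is a minimal sketch of that reversal: undoing the character-level tokenization and wrapping the sentences back into seg tags. The output filename and the exact XML wrapper expected by the competition are hypothetical.

def detokenize_chinese(line):
    # Tokens are space-separated; "<s>" stands for a literal space
    return "".join(" " if tok == "<s>" else tok for tok in line.split())

with open("challenger/valid_pred.58.79") as f:
    sentences = [detokenize_chinese(l) for l in f]

# Hypothetical output format; adapt to the competition's template
with open("submission.xml", "w") as f:
    f.write("\n".join(
        '<seg id="%d">%s</seg>' % (i + 1, s)
        for i, s in enumerate(sentences)
    ))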
English: You knew it in your heart you haven't washed your hair
Chinese(pred): 你心里清楚你没洗头发
Chinese(gold): 你心里知道你压根就没洗过头

English: I never dreamed that one of my own would be going off to a University, but here I stand,
Chinese(pred): 我从来没梦到过我的一个人会去大学,但是我站在这里,
Chinese(gold): 我从没想过我的孩子会上大学,但我站在这,

English: We just don't have time to waste on the wrong man.
Chinese(pred): 我们只是没时间浪费人。
Chinese(gold): 如果找错了人我们可玩不起。

The above three examples are, from top to bottom, semantically correct, partially correct, and entirely incomprehensible. After examining a few examples, I found most of the machine-translated sentences were partially correct, and there was a surprising number of semantically correct ones. Not a bad result, considering how little effort we've put in so far.

If you submit the result you should get around .22 BLEU. The current top BLEU score is .33, so there's a lot of room for improvement. You can check out opts.py in the root folder for more built-in model parameters. Or dive deep into the codebase to figure out how things work and where they might be improved.

The other paths include applying word segmentation on Chinese sentences, adding named entity recognition, using a pronunciation dictionary[10] to guess translations for unseen English names, etc.

(Update on 2017/10/14: If you use jieba and jieba.cut with default settings to tokenize the Chinese sentences, you'd get around .20 BLEU on the public leaderboard. One of the possible reasons for the score drop is its much larger Chinese vocabulary size. You can tell from the number of <unk> in the output.)

References
1. Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to Sequence Learning with Neural Networks.
2. Nallapati, R., Zhou, B., dos Santos, C., & Xiang, B. (2016). Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond.
3. Venugopalan, S., Rohrbach, M., Donahue, J., Mooney, R., Darrell, T., & Saenko, K. (2015). Sequence to Sequence — Video to Text.
4. Sequence to sequence model: Introduction and concepts
5. seq2seq: the clown car of deep learning
6. Practical PyTorch: Translation with a Sequence to Sequence Network and Attention
7. Cutting Edge Deep Learning For Coders, Part 2, Lecture 12 — Attention Models
8. Cutting Edge Deep Learning For Coders, Part 2, Lecture 13 — Neural Translation
9. Google/seq2seq: A general-purpose encoder-decoder framework for Tensorflow
10. The CMU Pronouncing Dictionary
Docker - Building Files
We created our Docker File in the last chapter. It's now time to build the Docker File. The Docker File can be built with the following command −

docker build

Let's learn more about this command. This method allows the users to build their own Docker images.

docker build -t ImageName:TagName dir

-t − is to mention a tag to the image.
ImageName − This is the name you want to give to your image.
TagName − This is the tag you want to give to your image.
Dir − The directory where the Docker File is present.

Return Value − None.

sudo docker build -t myimage:0.1 .

Here, myimage is the name we are giving to the Image and 0.1 is the tag number we are giving to our image. Since the Docker File is in the present working directory, we used "." at the end of the command to signify the present working directory.

From the output, you will first see that the Ubuntu Image will be downloaded from Docker Hub, because there is no image available locally on the machine. Finally, when the build is complete, all the necessary commands would have run on the image. You will then see the successfully built message and the ID of the new Image. When you run the Docker images command, you would then be able to see your new image. You can now build containers from your new Image.
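The same build can also be triggered from code. Here is a minimal sketch using the Docker SDK for Python (the third-party docker package, installed with pip install docker) instead of the CLI; the path and tag mirror the example above.

import docker

# Connect to the local Docker daemon using environment defaults
client = docker.from_env()

# Equivalent to: docker build -t myimage:0.1 .
image, build_logs = client.images.build(path=".", tag="myimage:0.1")
print(image.id)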
How can I write a MySQL stored function that calculates the factorial of a given number?
Following is an example of a stored function that can calculate the factorial of a given number −

DELIMITER //
CREATE FUNCTION factorial (n DECIMAL(3,0))
RETURNS DECIMAL(20,0)
DETERMINISTIC
BEGIN
DECLARE factorial DECIMAL(20,0) DEFAULT 1;
DECLARE counter DECIMAL(3,0);
SET counter = n;
factorial_loop: REPEAT
SET factorial = factorial * counter;
SET counter = counter - 1;
UNTIL counter <= 1
END REPEAT;
RETURN factorial;
END //

mysql> Select Factorial(5)//
+--------------+
| Factorial(5) |
+--------------+
|          120 |
+--------------+
1 row in set (0.27 sec)

mysql> Select Factorial(6)//
+--------------+
| Factorial(6) |
+--------------+
|          720 |
+--------------+
1 row in set (0.00 sec)
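To call the stored function from application code, here is a minimal Python sketch using the mysql-connector-python package; the connection parameters are placeholders for your own server settings.

import mysql.connector

# Placeholder credentials - replace with your own
conn = mysql.connector.connect(
    host="localhost", user="root", password="secret", database="test"
)
cursor = conn.cursor()
cursor.execute("SELECT factorial(5)")
print(cursor.fetchone()[0])  # expected output: 120
conn.close()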
H2 Database - Create
CREATE is a generic SQL command used to create Tables, Schemas, Sequences, Views, and Users in the H2 Database server.

Create Table is a command used to create a user-defined table in the current database. Following is the generic syntax for the Create Table command.

CREATE [ CACHED | MEMORY ] [ TEMP | [ GLOBAL | LOCAL ] TEMPORARY ]
TABLE [ IF NOT EXISTS ] name
[ ( { columnDefinition | constraint } [,...] ) ]
[ ENGINE tableEngineName [ WITH tableEngineParamName [,...] ] ]
[ NOT PERSISTENT ] [ TRANSACTIONAL ]
[ AS select ]

By using the generic syntax of the Create Table command, we can create different types of tables, such as cached tables, memory tables, and temporary tables. Following is a list describing the different clauses of the given syntax.

CACHED − The cached tables are the default type for regular tables. This means the number of rows is not limited by the main memory.
MEMORY − The memory tables are the default type for temporary tables. This means the memory tables should not get too large, and the index data is kept in the main memory.
TEMPORARY − Temporary tables are deleted while closing or opening a database. Basically, temporary tables are of two types: the GLOBAL type, accessible by all connections, and the LOCAL type, accessible by the current connection. The default type for temporary tables is the global type. Indexes of temporary tables are kept in the main memory, unless the temporary table is created using CREATE CACHED TABLE.
ENGINE − The ENGINE option is only required when custom table implementations are used.
NOT PERSISTENT − It is a modifier to keep the complete table data in-memory; all rows are lost when the database is closed.
TRANSACTIONAL − It is a keyword that commits an open transaction; this command supports only temporary tables.

In this example, let us create a table named tutorials_tbl using the following given data. The following query is used to create the table tutorials_tbl along with the given column data.

CREATE TABLE tutorials_tbl (
   id INT NOT NULL,
   title VARCHAR(50) NOT NULL,
   author VARCHAR(20) NOT NULL,
   submission_date DATE
);

The above query produces the following output.

(0) rows effected

Create Schema is a command used to create a user-dependent schema under a particular authorization (under the currently registered user). Following is the generic syntax of the Create Schema command.

CREATE SCHEMA [ IF NOT EXISTS ] name [ AUTHORIZATION ownerUserName ]

In the above generic syntax, AUTHORIZATION is a keyword used to provide the respective user name. This clause is optional, which means that if we do not provide the user name, it will consider the current user. The user that executes the command must have admin rights, as well as the owner. This command commits an open transaction in this connection.

In this example, let us create a schema named test_schema under the SA user, using the following command.

CREATE SCHEMA test_schema AUTHORIZATION sa;

The above command produces the following output.

(0) rows effected

Sequence is a concept used to generate numbers that follow a sequence, for an id or other column values. Following is the generic syntax of the create sequence command.
[ { "code": null, "e": 2222, "s": 2107, "text": "CREATE is a generic SQL command used to create Tables, Schemas, Sequences, Views, and Users in H2 Database server." }, { "code": null, "e": 2309, "s": 2222, "text": "Create Table is a command used to create a user-defined table in the current database." }, { "code": null, "e": 2371, "s": 2309, "text": "Following is the generic syntax for the Create Table command." }, { "code": null, "e": 2638, "s": 2371, "text": "CREATE [ CACHED | MEMORY ] [ TEMP | [ GLOBAL | LOCAL ] TEMPORARY ] \nTABLE [ IF NOT EXISTS ] name \n[ ( { columnDefinition | constraint } [,...] ) ] \n[ ENGINE tableEngineName [ WITH tableEngineParamName [,...] ] ] \n[ NOT PERSISTENT ] [ TRANSACTIONAL ] \n[ AS select ] \n" }, { "code": null, "e": 2870, "s": 2638, "text": "By using the generic syntax of the Create Table command, we can create different types of tables such as cached tables, memory tables, and temporary tables. Following is the list to describe different clauses from the given syntax." }, { "code": null, "e": 3003, "s": 2870, "text": "CACHED − The cached tables are the default type for regular tables. This means the number of rows is not limited by the main memory." }, { "code": null, "e": 3136, "s": 3003, "text": "CACHED − The cached tables are the default type for regular tables. This means the number of rows is not limited by the main memory." }, { "code": null, "e": 3307, "s": 3136, "text": "MEMORY − The memory tables are the default type for temporary tables. This means the memory tables should not get too large and the index data is kept in the main memory." }, { "code": null, "e": 3478, "s": 3307, "text": "MEMORY − The memory tables are the default type for temporary tables. This means the memory tables should not get too large and the index data is kept in the main memory." }, { "code": null, "e": 3877, "s": 3478, "text": "TEMPORARY − Temporary tables are deleted while closing or opening a database. Basically, temporary tables are of two types −\n\nGLOBAL type − Accessible by all connections.\nLOCAL type − Accessible by the current connection.\n\nThe default type for temporary tables is global type. Indexes of temporary tables are kept in the main memory, unless the temporary table is created using CREATE CACHED TABLE." }, { "code": null, "e": 4002, "s": 3877, "text": "TEMPORARY − Temporary tables are deleted while closing or opening a database. Basically, temporary tables are of two types −" }, { "code": null, "e": 4047, "s": 4002, "text": "GLOBAL type − Accessible by all connections." }, { "code": null, "e": 4092, "s": 4047, "text": "GLOBAL type − Accessible by all connections." }, { "code": null, "e": 4143, "s": 4092, "text": "LOCAL type − Accessible by the current connection." }, { "code": null, "e": 4194, "s": 4143, "text": "LOCAL type − Accessible by the current connection." }, { "code": null, "e": 4370, "s": 4194, "text": "The default type for temporary tables is global type. Indexes of temporary tables are kept in the main memory, unless the temporary table is created using CREATE CACHED TABLE." }, { "code": null, "e": 4458, "s": 4370, "text": "ENGINE − The ENGINE option is only required when custom table implementations are used." }, { "code": null, "e": 4546, "s": 4458, "text": "ENGINE − The ENGINE option is only required when custom table implementations are used." }, { "code": null, "e": 4673, "s": 4546, "text": "NOT PERSISTENT − It is a modifier to keep the complete table data in-memory and all rows are lost when the database is closed." 
How to integrate Sikuli scripts into Selenium?
We can integrate Sikuli scripts into Selenium webdriver. Sikuli is an open-source automation tool. It can capture images of elements on the screen and perform operations on them.

Some of the advantages of Sikuli are −
Desktop or Windows applications can be automated.
Can be used for flash testing.
Can be used on platforms like mobile, Mac, and Linux.
It is based on an image recognition technique.
Can be easily integrated with Selenium.

To integrate Sikuli with Selenium, follow the steps below − Navigate to the link − https://launchpad.net/sikuli/+download. Click on the jar to download it (which can be used for Java environments) and save it in a location.

Add the jar to the Java project in Eclipse IDE. Right-click on the project and select Properties. Then click on Java Build Path. Go to the Java Build Path tab. Click on Libraries. Then click on Add External JARs. Browse and add the Sikuli jar that we downloaded. Finally, click on Apply and Close.

Capture the image of the edit box in which we will enter "Selenium" with the help of Sikuli and save it in a location.

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.sikuli.script.FindFailed;
import org.sikuli.script.Pattern;
import org.sikuli.script.Screen;
public class SikuliIntegrate{
   // Sikuli calls throw the checked FindFailed exception
   public static void main(String[] args) throws FindFailed {
      System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      WebDriver driver = new ChromeDriver();
      driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
      driver.get("https://www.tutorialspoint.com/index.htm");
      // Screen class to access Sikuli methods
      Screen s = new Screen();
      // object of Pattern to specify the image path
      Pattern e = new Pattern("C:\\Users\\ghs6kor\\Image.png");
      // wait until the image appears on screen
      s.wait(e, 5);
      // enter text and click
      s.type(e, "Selenium");
      s.click(e);
   }
}
Groovy - String Length
Syntax − The length of the string is determined by the length() method of the string.

Parameters − No parameters.

Return Value − An Integer showing the length of the string.

Following is an example of the usage of strings in Groovy −

class Example {
   static void main(String[] args) {
      String a = "Hello";
      println(a.length());
   }
}

When we run the above program, we will get the following result −

5
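As a small supplementary sketch (not part of the original example), Groovy also lets us call size() on a string, which returns the same value as length() −

class SizeExample {
   static void main(String[] args) {
      String a = "Hello";
      // size() is the Groovy-idiomatic alias and agrees with length()
      assert a.length() == a.size();
      println(a.size());
   }
}

Running it prints 5, the same result as above.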
[ { "code": null, "e": 2321, "s": 2238, "text": "Syntax − The length of the string determined by the length() method of the string." }, { "code": null, "e": 2349, "s": 2321, "text": "Parameters − No parameters." }, { "code": null, "e": 2409, "s": 2349, "text": "Return Value − An Integer showing the length of the string." }, { "code": null, "e": 2469, "s": 2409, "text": "Following is an example of the usage of strings in Groovy −" }, { "code": null, "e": 2583, "s": 2469, "text": "class Example {\n static void main(String[] args) {\n String a = \"Hello\";\n println(a.length());\n } \n}" }, { "code": null, "e": 2649, "s": 2583, "text": "When we run the above program, we will get the following result −" }, { "code": null, "e": 2652, "s": 2649, "text": "5\n" }, { "code": null, "e": 2685, "s": 2652, "text": "\n 52 Lectures \n 8 hours \n" }, { "code": null, "e": 2703, "s": 2685, "text": " Krishna Sakinala" }, { "code": null, "e": 2738, "s": 2703, "text": "\n 49 Lectures \n 2.5 hours \n" }, { "code": null, "e": 2756, "s": 2738, "text": " Packt Publishing" }, { "code": null, "e": 2763, "s": 2756, "text": " Print" }, { "code": null, "e": 2774, "s": 2763, "text": " Add Notes" } ]
Basics of Discrete Event Simulation using SimPy - GeeksforGeeks
19 Nov, 2020

SimPy is a powerful process-based discrete event simulation framework written in Python.

Installation: To install SimPy, use the following command –

pip install simpy

Basic Concepts:

The core idea behind SimPy is the generator function in Python. The difference between a normal function and a generator is that a normal function uses the "return" statement, while a generator uses the "yield" statement.

If a function has a return statement, then on every call it returns at the first return statement reached, so multiple calls always produce the same value. For example –

def func():
    return 1
    return 2

When func() is called during the runtime, it always returns at the first instance of the return statement, that is, the function func() always returns 1, and the next return statement is never executed.

However, in discrete event simulation, we may need to find the state of the system at a given time T. For that, it is required to remember the state of the interval just before T, and then perform the given simulation and return the state at time T.

This is where generator functions are quite useful. For example, consider the following function

def func():
    while True:
        yield 1
        yield 2

Now, when this function's generator is advanced the first time, it 'yields' 1. However, on the very next call, it will yield 2. In some sense, it remembers what it returned upon the last call, and moves on to the next yield statement.

Events in SimPy are called processes, which are defined by generator functions of their own. These processes take place inside an Environment. (Imagine the environment to be a large box, inside of which the processes are kept.)

Consider a simple example, involving the simulation of a traffic light –

# Python 3 code to demonstrate basics of SimPy package
# Simulation of a Traffic Light

# import the SimPy package
import simpy

# Generator function that defines the working of the traffic light
# "timeout()" function makes next yield statement wait for a
# given time passed as the argument
def Traffic_Light(env):
    while True:
        print("Light turns GRN at " + str(env.now))
        # Light is green for 25 seconds
        yield env.timeout(25)
        print("Light turns YEL at " + str(env.now))
        # Light is yellow for 5 seconds
        yield env.timeout(5)
        print("Light turns RED at " + str(env.now))
        # Light is red for 60 seconds
        yield env.timeout(60)

# env is the environment variable
env = simpy.Environment()

# The process defined by the function Traffic_Light(env)
# is added to the environment
env.process(Traffic_Light(env))

# The process is run for the first 180 seconds (180 is not included)
env.run(until = 180)

Output:

Light turns GRN at 0
Light turns YEL at 25
Light turns RED at 30
Light turns GRN at 90
Light turns YEL at 115
Light turns RED at 120

In this code, the generator function Traffic_Light(env) takes the environment variable as the argument and simulates the operation of the traffic light for the time period passed as argument in the env.run() function. (Actually, time in SimPy is unitless, though it can be interpreted as hours, minutes or seconds as per convenience.) env.now returns the current value of the time elapsed.

The env.timeout() function is the base of this simulation, as it waits for the time passed as the argument to elapse on the computer's simulation clock (it is not a real time clock), and then initiates the next yield statement, till the time passed as argument in env.run() has finished.

env.run() starts all the processes linked to the environment at the same time = 0.
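Since env.run() starts all the processes linked to the environment at time = 0, several processes can share one simulation clock. The following supplementary sketch (the names and timings are made up for illustration, not part of the original article) adds two processes to one environment to show this −

# Illustrative sketch: two processes sharing one environment
# advance on the same simulation clock
import simpy

def blinker(env, name, period):
    while True:
        print(name + " blinks at " + str(env.now))
        # wait 'period' time units before the next blink
        yield env.timeout(period)

env = simpy.Environment()
# Both processes start at time = 0 in the same environment
env.process(blinker(env, "Fast", 2))
env.process(blinker(env, "Slow", 5))
env.run(until = 10)

Both "Fast" and "Slow" print at time 0, after which their events interleave on the shared clock.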
[ { "code": null, "e": 24292, "s": 24264, "text": "\n19 Nov, 2020" }, { "code": null, "e": 24381, "s": 24292, "text": "SimPy is a powerful process-based discrete event simulation framework written in Python." }, { "code": null, "e": 24441, "s": 24381, "text": "Installation :To install SimPy, use the following command –" }, { "code": null, "e": 24459, "s": 24441, "text": "pip install simpy" }, { "code": null, "e": 24476, "s": 24459, "text": "Basic Concepts :" }, { "code": null, "e": 24694, "s": 24476, "text": "The core idea behind SimPy is the generator function in Python. The difference between a normal function and a generator is that a normal function uses the “return” statement, while a generator uses “yield” statement." }, { "code": null, "e": 24808, "s": 24694, "text": "If the function has a return statement, then even on multiple function calls, it returns the same value. For eg –" }, { "code": "def func(): return 1 return 2", "e": 24844, "s": 24808, "text": null }, { "code": null, "e": 25055, "s": 24844, "text": "When the func() is called during the runtime, it will always return at the first instance of the return statement, that is, the function func() always returns 1, and the next return statement is never executed." }, { "code": null, "e": 25304, "s": 25055, "text": "However, in discrete event simulation, we may need to find the state of the system at a given time T. For that, it is required to remember the state of the interval just before T, and then perform the given simulation and return to state at time T." }, { "code": null, "e": 25401, "s": 25304, "text": "This is where generator functions are quite useful. For example, consider the following function" }, { "code": "def func(): while True: yield 1 yield 2", "e": 25458, "s": 25401, "text": null }, { "code": null, "e": 25679, "s": 25458, "text": "Now, when the first time this function is called, it ‘yields’ 1. However, on the very next call, it will yield 2. In some sense, it remembers what it returned upon the last call, and moves on to the next yield statement." }, { "code": null, "e": 25907, "s": 25679, "text": "Events in SimPy are called processes, which are defined by generator functions of their own. These processes take place inside an Environment. 
(Imagine the environment to be a large box, inside of which the processes are kept.)" }, { "code": null, "e": 25980, "s": 25907, "text": "Consider a simple example, involving the simulation of a traffic light –" }, { "code": "# Python 3 code to demonstrate basics of SimPy package # Simulation of a Traffic Light # import the SimPy package import simpy # Generator function that defines the working of the traffic light # \"timeout()\" function makes next yield statement wait for a # given time passed as the argument def Traffic_Light(env): while True: print (\"Light turns GRN at \" + str(env.now)) # Light is green for 25 seconds yield env.timeout(25) print (\"Light turns YEL at \" + str(env.now)) # Light is yellow for 5 seconds yield env.timeout(5) print (\"Light turns RED at \" + str(env.now)) # Light is red for 60 seconds yield env.timeout(60) # env is the environment variable env = simpy.Environment() # The process defined by the function Traffic_Light(env) # is added to the environment env.process(Traffic_Light(env)) # The process is run for the first 180 seconds (180 is not included) env.run(until = 180) ", "e": 27010, "s": 25980, "text": null }, { "code": null, "e": 27019, "s": 27010, "text": "Output :" }, { "code": null, "e": 27153, "s": 27019, "text": "Light turns GRN at 0\nLight turns YEL at 25\nLight turns RED at 30\nLight turns GRN at 90\nLight turns YEL at 115\nLight turns RED at 120\n" }, { "code": null, "e": 27541, "s": 27153, "text": "In this code, the generator function Traffic_Light(env) takes the environment variable as the argument and simulates the operation of the traffic light for the time period passed as argument in the env.run() function. (Actually, time in SimPy is unitless. Though it can be converted to hours, minutes or seconds as per convenience). env.now returns the current value of the time elapsed." }, { "code": null, "e": 27828, "s": 27541, "text": "env.timeout() function is the base of this simulation, as it waits for the time passed as the argument to be elapsed on the computer’s simulation clock (it is not a real time clock), and then initiate the next yield statement, till the time passed as argument in env.run() has finished." }, { "code": null, "e": 27911, "s": 27828, "text": "env.run() starts all the processes linked to the environment at the same time = 0." }, { "code": null, "e": 27926, "s": 27911, "text": "python-modules" }, { "code": null, "e": 27933, "s": 27926, "text": "Python" }, { "code": null, "e": 28031, "s": 27933, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28063, "s": 28031, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 28105, "s": 28063, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28161, "s": 28105, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28203, "s": 28161, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28234, "s": 28203, "text": "Python | os.path.join() method" }, { "code": null, "e": 28256, "s": 28234, "text": "Defaultdict in Python" }, { "code": null, "e": 28311, "s": 28256, "text": "Selecting rows in pandas DataFrame based on conditions" }, { "code": null, "e": 28350, "s": 28311, "text": "Python | Get unique values from a list" }, { "code": null, "e": 28379, "s": 28350, "text": "Create a directory in Python" } ]
How do I delete SharedPreferences data for my Android App using Kotlin?
This example demonstrates how to delete SharedPreferences data for my Android App using Kotlin.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:gravity="center_horizontal"
   android:orientation="vertical"
   android:padding="8dp"
   tools:context=".MainActivity">
   <TextView
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_marginTop="100dp"
      android:layout_marginBottom="100dp"
      android:textAlignment="center"
      android:textColor="@android:color/holo_green_dark"
      android:textSize="32sp"
      android:textStyle="bold" />
   <TextView
      android:id="@+id/textView"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:layout_marginBottom="20dp"
      android:textAlignment="center"
      android:textColor="@android:color/background_dark"
      android:textSize="20sp"
      android:textStyle="bold" />
   <Button
      android:id="@+id/button"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Delete Shared Preference" />
   <TextView
      android:id="@+id/tvAfterChange"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:layout_marginTop="10dp"
      android:textAlignment="center"
      android:textColor="@android:color/holo_red_light"
      android:textSize="20sp"
      android:textStyle="bold" />
</LinearLayout>

Step 3 − Add the following code to src/MainActivity.kt

import android.content.Context
import android.content.SharedPreferences
import android.os.Bundle
import android.widget.Button
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
   lateinit var textView: TextView
   lateinit var tvAfterDelete: TextView
   lateinit var sharedPreferences: SharedPreferences
   lateinit var editor: SharedPreferences.Editor
   lateinit var button: Button
   override fun onCreate(savedInstanceState: Bundle?) {
      super.onCreate(savedInstanceState)
      setContentView(R.layout.activity_main)
      textView = findViewById(R.id.textView)
      tvAfterDelete = findViewById(R.id.tvAfterChange)
      button = findViewById(R.id.button)
      title = "KotlinApp"
      sharedPreferences = getPreferences(Context.MODE_PRIVATE)
      editor = sharedPreferences.edit()
      editor.putString(resources.getString(R.string.sharedPref_key_player), "Cristiano Ronaldo")
      editor.putString(resources.getString(R.string.sharedPref_key_country), "Portugal")
      editor.apply()
      val player = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_player), "")
      val country = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_country), "")
      textView.text = "SharedPreferences Values\nPlayer : $player\nCountry : $country"
      button.setOnClickListener {
         // remove() deletes a single key from the SharedPreferences file
         editor.remove(resources.getString(R.string.sharedPref_key_country))
         editor.apply()
         val playerNow = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_player), "")
         val countryNow = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_country), "")
         tvAfterDelete.text = "SharedPreferences Values - After Removing Country\nPlayer : $playerNow\nCountry : $countryNow"
      }
   }
}

Step 4 − Add the following code in res/values/string.xml

<string name="sharedPref_key_player">player_name</string>
<string name="sharedPref_key_country">country_name</string>

Step 5 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.q10">
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
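The example above deletes a single key with remove(). To delete all SharedPreferences data in the file at once, the standard Editor.clear() method can be used instead. The snippet below is an illustrative sketch (not part of the original steps) that reuses the editor and sharedPreferences fields from Step 3 −

// Deletes ALL key-value pairs in this SharedPreferences file
editor.clear()
editor.apply()
// Any subsequent getString() now falls back to its default value
val playerAfterClear = sharedPreferences.getString(
   resources.getString(R.string.sharedPref_key_player), "")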
[ { "code": null, "e": 1158, "s": 1062, "text": "This example demonstrates how to delete SharedPreferences data for my Android App using Kotlin." }, { "code": null, "e": 1287, "s": 1158, "text": "Step 1 − Create a new project in Android Studio, go to File ? New Project and fill all required details to create a new project." }, { "code": null, "e": 1352, "s": 1287, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2915, "s": 1352, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:gravity=\"center_horizontal\"\n android:orientation=\"vertical\"\n android:padding=\"8dp\"\n tools:context=\".MainActivity\">\n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_marginTop=\"100dp\"\n android:layout_marginBottom=\"100dp\"\n android:textAlignment=\"center\"\n android:textColor=\"@android:color/holo_green_dark\"\n android:textSize=\"32sp\"\n android:textStyle=\"bold\" />\n <TextView\n android:id=\"@+id/textView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_marginBottom=\"20dp\"\n android:textAlignment=\"center\"\n android:textColor=\"@android:color/background_dark\"\n android:textSize=\"20sp\"\n android:textStyle=\"bold\" />\n <Button\n android:id=\"@+id/button\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Delete Shared Preference\" />\n <TextView\n android:id=\"@+id/tvAfterChange\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_marginTop=\"10dp\"\n android:textAlignment=\"center\"\n android:textColor=\"@android:color/holo_red_light\"\n android:textSize=\"20sp\"\n android:textStyle=\"bold\" />\n</LinearLayout>" }, { "code": null, "e": 2970, "s": 2915, "text": "Step 3 − Add the following code to src/MainActivity.kt" }, { "code": null, "e": 4896, "s": 2970, "text": "import android.content.Context\nimport android.content.SharedPreferences\nimport android.os.Bundle\nimport android.widget.Button\nimport android.widget.TextView\nimport androidx.appcompat.app.AppCompatActivity\nclass MainActivity : AppCompatActivity() {\n lateinit var textView: TextView\n lateinit var tvAfterDelete: TextView\n lateinit var sharedPreferences: SharedPreferences\n lateinit var editor: SharedPreferences.Editor\n lateinit var button: Button\n override fun onCreate(savedInstanceState: Bundle?) 
{\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n textView = findViewById(R.id.textView)\n tvAfterDelete = findViewById(R.id.tvAfterChange)\n button = findViewById(R.id.button)\n title = \"KotlinApp\"\n sharedPreferences = getPreferences(Context.MODE_PRIVATE)\n editor = sharedPreferences.edit()\n editor.putString(resources.getString(R.string.sharedPref_key_player), \"Cristiano Ronaldo\")\n editor.putString(resources.getString(R.string.sharedPref_key_country), \"Portugal\")\n editor.apply()\n val player = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_player), \"\")\n val country = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_country), \"\")\n textView.text = \"SharedPreferences Values\\n\"\n textView.text = \"Player : $player\\nCountry : $country\"\n button.setOnClickListener {\n editor.remove(resources.getString(R.string.sharedPref_key_country))\n editor.apply()\n val playerNow = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_player), \"\")\n val countryNow = sharedPreferences.getString(resources.getString(R.string.sharedPref_key_country), \"\")\n tvAfterDelete.text = \"SharedPreferences Values - After Removing City\\n\"\n tvAfterDelete.text = \"Country : $playerNow\\nCountry : $countryNow\"\n }\n }\n}" }, { "code": null, "e": 4953, "s": 4896, "text": "Step 4 − Add the following code in res/values/string.xml" }, { "code": null, "e": 5071, "s": 4953, "text": "<string name=\"sharedPref_key_player\">player_name</string>\n<string name=\"sharedPref_key_country\">country_name</string>" }, { "code": null, "e": 5126, "s": 5071, "text": "Step 5 − Add the following code to androidManifest.xml" }, { "code": null, "e": 5793, "s": 5126, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.q10\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 6141, "s": 5793, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen" } ]
Count odd and even digits in a number in PL/SQL
We are given a positive integer of digits and the task is to calculate the count of odd and even digits in the number using PL/SQL.

PL/SQL is a combination of SQL along with the procedural features of programming languages. It was developed by Oracle Corporation in the early 90's to enhance the capabilities of SQL.

PL/SQL is one of three key programming languages embedded in the Oracle Database, along with SQL itself and Java.

Input − int number = 23146579

Output −

count of odd digits in a number are : 5
count of even digits in a number are : 3

Explanation − In the given number, we have 2, 4 and 6 as even digits, therefore the count of even digits in the number is 3, and we have 3, 1, 5, 7 and 9 as odd digits, therefore the count of odd digits in the number is 5.

Input − int number = 4567228

Output −

count of odd digits in a number are : 2
count of even digits in a number are : 5

Explanation − In the given number, we have 5 and 7 as odd digits, therefore the count of odd digits in the number is 2, and we have 4, 6, 2, 2 and 8 as even digits, therefore the count of even digits in the number is 5.

Input a number in an integer type variable of datatype NUMBER used in PL/SQL.
Take a variable digit of type VARCHAR2(50), which holds one digit of the number at a time.
Take two variables as count for odd digits and count for even digits and initially set them to 0.
Start a FOR loop from 1 till the length of the number.
Inside the loop, set digit as substr(number, i, 1).
Now, check IF mod of digit by 2 is not equal to 0, then increase the count of odd digits in the number.
Else, increase the count of even digits in the number.
Print the result.

DECLARE
   digits NUMBER := 23146579;
   digit VARCHAR2(50);
   count_odd NUMBER(10) := 0;
   count_even NUMBER(10) := 0;
BEGIN
   FOR i IN 1..Length(digits) LOOP
      -- pick the i-th digit of the number
      digit := Substr(digits, i, 1);
      IF mod(digit, 2) != 0 THEN
         count_odd := count_odd + 1;
      ELSE
         count_even := count_even + 1;
      END IF;
   END LOOP;
   dbms_output.Put_line('count of odd digits in a number are : ' || count_odd);
   dbms_output.Put_line('count of even digits in a number are : ' || count_even);
END;

If we run the above code it will generate the following output −

count of odd digits in a number are : 5
count of even digits in a number are : 3
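As an alternative illustrative sketch (not part of the original solution), the same counts can be computed with pure integer arithmetic, extracting the last digit with MOD and dropping it with TRUNC, which avoids the implicit number-to-string conversions used above −

DECLARE
   n NUMBER := 23146579;
   d NUMBER;
   count_odd NUMBER(10) := 0;
   count_even NUMBER(10) := 0;
BEGIN
   WHILE n > 0 LOOP
      d := MOD(n, 10);      -- last digit of n
      IF MOD(d, 2) != 0 THEN
         count_odd := count_odd + 1;
      ELSE
         count_even := count_even + 1;
      END IF;
      n := TRUNC(n / 10);   -- drop the last digit
   END LOOP;
   dbms_output.Put_line('count of odd digits in a number are : ' || count_odd);
   dbms_output.Put_line('count of even digits in a number are : ' || count_even);
END;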
[ { "code": null, "e": 1192, "s": 1062, "text": "We are given a positive integer of digits and the task is to calculate the count of odd and even digits in a number using PL/SQL." }, { "code": null, "e": 1377, "s": 1192, "text": "PL/SQL is a combination of SQL along with the procedural features of programming languages. It was developed by Oracle Corporation in the early 90's to enhance the capabilities of SQL." }, { "code": null, "e": 1491, "s": 1377, "text": "PL/SQL is one of three key programming languages embedded in the Oracle Database, along with SQL itself and Java." }, { "code": null, "e": 1521, "s": 1491, "text": "Input − int number = 23146579" }, { "code": null, "e": 1529, "s": 1521, "text": "Output " }, { "code": null, "e": 1610, "s": 1529, "text": "count of odd digits in a number are : 5\ncount of even digits in a number are : 3" }, { "code": null, "e": 1823, "s": 1610, "text": "Explanation − In the given number, we have 2, 4, 6 as an even digits therefore count of even digits in a number are 3 and we have 3, 1, 5, 7 and 9 as an odd digits therefore count of odd digits in a number are 5." }, { "code": null, "e": 1852, "s": 1823, "text": "Input − int number = 4567228" }, { "code": null, "e": 1860, "s": 1852, "text": "Output " }, { "code": null, "e": 1941, "s": 1860, "text": "count of odd digits in a number are : 2\ncount of even digits in a number are : 5" }, { "code": null, "e": 2154, "s": 1941, "text": "Explanation − In the given number, we have 5 and 7 as an odd digits therefore count of odd digits in a number are 2 and we have 4, 6, 2, 2 and 8 as an even digits therefore count of even digits in a number are 5." }, { "code": null, "e": 2232, "s": 2154, "text": "Input a number in an integer type variable of datatype NUMBER used in PL/SQL." }, { "code": null, "e": 2310, "s": 2232, "text": "Input a number in an integer type variable of datatype NUMBER used in PL/SQL." }, { "code": null, "e": 2395, "s": 2310, "text": "Take a length of type VARCHAR(50) which describes the maximum size length can store." }, { "code": null, "e": 2480, "s": 2395, "text": "Take a length of type VARCHAR(50) which describes the maximum size length can store." }, { "code": null, "e": 2577, "s": 2480, "text": "Take two variables as count for odd digits and count for even digits and initially set them to 0" }, { "code": null, "e": 2674, "s": 2577, "text": "Take two variables as count for odd digits and count for even digits and initially set them to 0" }, { "code": null, "e": 2738, "s": 2674, "text": "Start Loop For from 1 till the length while pass a number to it" }, { "code": null, "e": 2802, "s": 2738, "text": "Start Loop For from 1 till the length while pass a number to it" }, { "code": null, "e": 2854, "s": 2802, "text": "Inside the loop, set length as substr(number, i, 1)" }, { "code": null, "e": 2906, "s": 2854, "text": "Inside the loop, set length as substr(number, i, 1)" }, { "code": null, "e": 3009, "s": 2906, "text": "Now, check IF mod of length by 2 is not equals to 0 then increase the count for odd digits in a number" }, { "code": null, "e": 3112, "s": 3009, "text": "Now, check IF mod of length by 2 is not equals to 0 then increase the count for odd digits in a number" }, { "code": null, "e": 3164, "s": 3112, "text": "Else, increase the count of even digits in a number" }, { "code": null, "e": 3216, "s": 3164, "text": "Else, increase the count of even digits in a number" }, { "code": null, "e": 3234, "s": 3216, "text": "Print the result." 
}, { "code": null, "e": 3252, "s": 3234, "text": "Print the result." }, { "code": null, "e": 3772, "s": 3252, "text": "DECLARE\n digits NUMBER := 23146579;\n length VARCHAR2(50);\n count_odd NUMBER(10) := 0;\n count_even NUMBER(10) := 0;\nBEGIN\n FOR i IN 1..Length(digits)\n LOOP\n length := Substr(digits, i, 1);\n IF mod(length, 2) != 0 THEN\n count_odd := count_odd + 1;\n ELSE\n count_even := count_even + 1;\n END IF;\n END LOOP;\n dbms_output.Put_line('count of odd digits in a number are : ' || count_odd);\n dbms_output.Put_line('count of even digits in a number are : ' || count_even);\nEND;" }, { "code": null, "e": 3837, "s": 3772, "text": "If we run the above code it will generate the following output −" }, { "code": null, "e": 3918, "s": 3837, "text": "count of odd digits in a number are : 5\ncount of even digits in a number are : 3" } ]
How to convert CSV File to PDF File using Python? - GeeksforGeeks
16 Mar, 2021

In this article, we will learn how to do conversion of CSV to PDF file format. This simple task can be easily done using two steps:

Firstly, we convert our CSV file to HTML using Pandas.
In the second step, we use the PDFkit Python API to convert our HTML file to the PDF file format.

Approach:

1. Converting CSV file to HTML using Pandas Framework.

Pandas is a fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool, built on top of the Python programming language.

CSV File Used:

For this section of the tutorial we will be using:

pandas.read_csv(): read_csv is an important pandas function to read CSV files and do operations on them. We will be using it to read our input CSV file.
.to_html(): With the help of the DataFrame.to_html() method, we can get the HTML format of a dataframe. This function takes in a CSV file as input, converts it, and saves it locally in HTML file format.

Syntax for converting CSV to HTML using Pandas:

import pandas as pd
CSV = pd.read_csv("MyCSV.csv")
CSV.to_html("MyCSV.html")

HTML File Used: MyCSV

2. Converting HTML file to PDF using PDFKit Python API

There are many approaches for generating PDF in Python. pdfkit is one of the better approaches as it renders HTML into PDF with various image formats, HTML forms, and other complex printable documents.

We can create a PDF document with pdfkit in 3 ways. They are:

from URL
from a HTML file
from the string

2.1. Generate PDF from URL: The following script gives us the pdf file from a website URL.

import pdfkit
pdfkit.from_url('https://www.geeksforgeeks.org', 'Output.pdf')

2.2. Generate PDF from file: The following script gives us the pdf file from an HTML file.

import pdfkit
pdfkit.from_file('LocalHTMLFile.html', 'Output.pdf')

2.3. Generate PDF from the string: The following script gives us the pdf file from a string.

import pdfkit
pdfkit.from_string('Geeks For Geeks', 'Output.pdf')

Since we have already converted our CSV file to HTML, we will be using the first method, i.e. generating PDF from URL, wherein we can give either any website's address or any local HTML file.

If one already has wkhtmltopdf installed on the machine, we may use this syntax directly:

Syntax for converting HTML to PDF using PDFKit:

import pdfkit
pdfkit.from_url("MyCSV.html", "FinalOutput.pdf")

Otherwise, we also need to install wkhtmltopdf for the script to run on our PC and set the installed wkhtmltopdf.exe file's path in our PC's Environment Variables, and we can then skip the configuration section in the script.

Or we can alternatively set the configuration as shown for the installed wkhtmltopdf.exe file and pass on the config variable to the pdfkit.from_url function:

# Path Configuration
path_wkhtmltopdf = r'D:\Softwares\wkhtmltopdf\bin\wkhtmltopdf.exe'
config = pdfkit.configuration(wkhtmltopdf=path_wkhtmltopdf)

# Convert HTML file to PDF with pdfkit
pdfkit.from_url("MyCSV.html", "FinalOutput.pdf", configuration=config)

Implementation:

Initial files in the folder

INITIAL FILES IN FOLDER

import pandas as pd
import pdfkit

# SAVE CSV TO HTML USING PANDAS
csv_file = 'MyCSV.csv'
html_file = csv_file[:-3] + 'html'

df = pd.read_csv(csv_file, sep=',')
df.to_html(html_file)

# INSTALL wkhtmltopdf AND SET PATH IN CONFIGURATION
# These two steps could be eliminated by installing wkhtmltopdf -
# - and setting its path in the Environment Variables
path_wkhtmltopdf = r'D:\Softwares\wkhtmltopdf\bin\wkhtmltopdf.exe'
config = pdfkit.configuration(wkhtmltopdf=path_wkhtmltopdf)

# CONVERT HTML FILE TO PDF WITH PDFKIT
pdfkit.from_url("MyCSV.html", "FinalOutput.pdf", configuration=config)

After running the above Python script:

FILES IN THE SAME DIRECTORY AFTER RUNNING PYTHON SCRIPT

Final Output:
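Since MyCSV.html is a local file, the from_file() method shown in section 2.2 works just as well here. The options dictionary below is an illustrative sketch (not part of the original script) that forwards standard wkhtmltopdf flags, such as page-size and orientation, through pdfkit −

import pdfkit

path_wkhtmltopdf = r'D:\Softwares\wkhtmltopdf\bin\wkhtmltopdf.exe'
config = pdfkit.configuration(wkhtmltopdf=path_wkhtmltopdf)

# wkhtmltopdf command-line flags can be passed as an options dict
options = {
    'page-size': 'A4',
    'orientation': 'Landscape',  # wide tables fit better in landscape
}
pdfkit.from_file("MyCSV.html", "FinalOutput.pdf",
                 configuration=config, options=options)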
[ { "code": null, "e": 24595, "s": 24564, "text": " \n16 Mar, 2021\n" }, { "code": null, "e": 24728, "s": 24595, "text": "In this article, we will learn how to do Conversion of CSV to PDF file format. This simple task can be easily done using two Steps :" }, { "code": null, "e": 24882, "s": 24728, "text": "\nFirstly, We convert our CSV file to HTML using the Pandas\nIn the Second Step, we use PDFkit Python API to convert our HTML file to the PDF file format.\n" }, { "code": null, "e": 24940, "s": 24882, "text": "Firstly, We convert our CSV file to HTML using the Pandas" }, { "code": null, "e": 25034, "s": 24940, "text": "In the Second Step, we use PDFkit Python API to convert our HTML file to the PDF file format." }, { "code": null, "e": 25044, "s": 25034, "text": "Approach:" }, { "code": null, "e": 25099, "s": 25044, "text": "1. Converting CSV file to HTML using Pandas Framework." }, { "code": null, "e": 25251, "s": 25099, "text": "Pandas is a fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool, built on top of the Python programming language." }, { "code": null, "e": 25266, "s": 25251, "text": "CSV File Used:" }, { "code": null, "e": 25314, "s": 25266, "text": "For this section of tutorial we will be using :" }, { "code": null, "e": 25696, "s": 25314, "text": "\npandas.read_csv(): read_csv is an important pandas function to read CSV files and do operations on it.We will be using it to read our input CSV file.\n.to_html(): With help of DataFrame.to_html() method, we can get the html format of a dataframe by using DataFrame.to_html() method.This function takes in a CSV file as input, converts it, and saves it locally in HTML file format.\n" }, { "code": null, "e": 25846, "s": 25696, "text": "pandas.read_csv(): read_csv is an important pandas function to read CSV files and do operations on it.We will be using it to read our input CSV file." }, { "code": null, "e": 26076, "s": 25846, "text": ".to_html(): With help of DataFrame.to_html() method, we can get the html format of a dataframe by using DataFrame.to_html() method.This function takes in a CSV file as input, converts it, and saves it locally in HTML file format." }, { "code": null, "e": 26125, "s": 26076, "text": "Syntax for converting CSV to HTML using Pandas :" }, { "code": null, "e": 26146, "s": 26125, "text": "import pandas as pd " }, { "code": null, "e": 26179, "s": 26146, "text": "CSV = pd.read_csv(“MyCSV.csv”) " }, { "code": null, "e": 26207, "s": 26179, "text": "CSV.to_html(“MyCSV.html”) " }, { "code": null, "e": 26229, "s": 26207, "text": "HTML File Used: MyCSV" }, { "code": null, "e": 26284, "s": 26229, "text": "2. Converting HTML file to CSV using PDFKit Python API" }, { "code": null, "e": 26487, "s": 26284, "text": "There are many approaches for generating PDF in python. pdfkit is one of the better approaches as, it renders HTML into PDF with various image formats, HTML forms, and other complex printable documents." }, { "code": null, "e": 26550, "s": 26487, "text": "We can create a PDF document with pdfkit in 3 ways. They are :" }, { "code": null, "e": 26559, "s": 26550, "text": "from URL" }, { "code": null, "e": 26576, "s": 26559, "text": "from a HTML file" }, { "code": null, "e": 26593, "s": 26576, "text": "from the string." }, { "code": null, "e": 26684, "s": 26593, "text": "2.1. Generate PDF from URL: The following script gives us the pdf file from a website URL." 
}, { "code": null, "e": 26761, "s": 26684, "text": "import pdfkit\npdfkit.from_url('https://www.geeksforgeeks.org', 'Output.pdf')" }, { "code": null, "e": 26852, "s": 26761, "text": "2.2. Generate PDF from file: The following script gives us the pdf file from an HTML file." }, { "code": null, "e": 26919, "s": 26852, "text": "import pdfkit\npdfkit.from_file('LocalHTMLFile.html', 'Output.pdf')" }, { "code": null, "e": 27012, "s": 26919, "text": "2.3. Generate PDF from the string: The following script gives us the pdf file from a string." }, { "code": null, "e": 27078, "s": 27012, "text": "import pdfkit\npdfkit.from_string('Geeks For Geeks', 'Output.pdf')" }, { "code": null, "e": 27267, "s": 27078, "text": "Since we have already converted our CSV file to HTML we will be using the first method i.e. Generating PDF from URL wherein either we can give any website’s address or any local HTML file." }, { "code": null, "e": 27355, "s": 27267, "text": "If one already have wkhtmltopdf installed on machine we may use this syntax directly : " }, { "code": null, "e": 27405, "s": 27355, "text": "Syntax for converting HTML to PDF using PDFKit :" }, { "code": null, "e": 27420, "s": 27405, "text": "import pdfkit " }, { "code": null, "e": 27469, "s": 27420, "text": "pdfkit.from_url(“MyCSV.html”, “FinalOutput.pdf”)" }, { "code": null, "e": 27689, "s": 27469, "text": "Else, we also need to install wkhtmltopdf for the script to run on our PC and set the installed file wkhtmltopdf.exe ‘s path to our PC’s Environment Variables and we can now skip the configuration section in the script." }, { "code": null, "e": 27693, "s": 27689, "text": "or " }, { "code": null, "e": 27842, "s": 27693, "text": "We can alternatively set Configuration as shown for the installed wkhtmltopdf.exe file and pass on the config variable to pdfkit.from_url function :" }, { "code": null, "e": 27862, "s": 27842, "text": " Path Configuration" }, { "code": null, "e": 27929, "s": 27862, "text": "path_wkhtmltopdf = r’D:\\Softwares\\wkhtmltopdf\\bin\\wkhtmltopdf.exe’" }, { "code": null, "e": 27989, "s": 27929, "text": "config = pdfkit.configuration(wkhtmltopdf=path_wkhtmltopdf)" }, { "code": null, "e": 28026, "s": 27989, "text": "Convert HTML file to PDF with pdfkit" }, { "code": null, "e": 28097, "s": 28026, "text": "pdfkit.from_url(“MyCSV.html”, “FinalOutput.pdf”, configuration=config)" }, { "code": null, "e": 28113, "s": 28097, "text": "Implementation:" }, { "code": null, "e": 28141, "s": 28113, "text": "Initial files in the folder" }, { "code": null, "e": 28165, "s": 28141, "text": "INITIAL FILES IN FOLDER" }, { "code": null, "e": 28172, "s": 28165, "text": "Python" }, { "code": "\n\n\n\n\n\n\nimport pandas as pd \nimport pdfkit \n \n# SAVE CSV TO HTML USING PANDAS \ncsv = 'MyCSV.csv'\nhtml_file = csv_file[:-3]+'html'\n \ndf = pd.read_csv(csv_file, sep=',') \ndf.to_html(html_file) \n \n# INSTALL wkhtmltopdf AND SET PATH IN CONFIGURATION \n# These two Steps could be eliminated By Installing wkhtmltopdf - \n# - and setting it's path to Environment Variables \npath_wkhtmltopdf = r'D:\\Softwares\\wkhtmltopdf\\bin\\wkhtmltopdf.exe'\nconfig = pdfkit.configuration(wkhtmltopdf=path_wkhtmltopdf) \n \n# CONVERT HTML FILE TO PDF WITH PDFKIT \npdfkit.from_url(\"MyCSV.html\", \"FinalOutput.pdf\", configuration=config) \n\n\n\n\n\n", "e": 28799, "s": 28182, "text": null }, { "code": null, "e": 28835, "s": 28799, "text": "After Running Above Python Script :" }, { "code": null, "e": 28891, "s": 28835, "text": "FILES IN the SAME DIRECTORY AFTER RUNNING PYTHON 
SCRIPT" }, { "code": null, "e": 28906, "s": 28891, "text": "Final Output :" }, { "code": null, "e": 28915, "s": 28906, "text": "\nPicked\n" }, { "code": null, "e": 28928, "s": 28915, "text": "\npython-csv\n" }, { "code": null, "e": 28937, "s": 28928, "text": "\nPython\n" }, { "code": null, "e": 29142, "s": 28937, "text": "Writing code in comment? \n Please use ide.geeksforgeeks.org, \n generate link and share the link here.\n " }, { "code": null, "e": 29160, "s": 29142, "text": "Python Dictionary" }, { "code": null, "e": 29182, "s": 29160, "text": "Enumerate() in Python" }, { "code": null, "e": 29217, "s": 29182, "text": "Read a file line by line in Python" }, { "code": null, "e": 29239, "s": 29217, "text": "Defaultdict in Python" }, { "code": null, "e": 29281, "s": 29239, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 29306, "s": 29281, "text": "sum() function in Python" }, { "code": null, "e": 29336, "s": 29306, "text": "Iterate over a list in Python" }, { "code": null, "e": 29368, "s": 29336, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 29384, "s": 29368, "text": "Deque in Python" } ]
Node.js Date.format() API - GeeksforGeeks
21 Jan, 2022

The date-and-time.Date.format() method, from the date-and-time module (a minimalist collection of functions for manipulating JS dates and times), is used to format a date according to a certain pattern.

Required Module: Install the module via npm or use it locally.

By using npm:

npm install date-and-time --save

By using a CDN link:

<script src="/path/to/date-and-time.min.js"></script>

Syntax:

format(dateObj, formatString[, utc])

Parameters: This method takes the following arguments as parameters:

dateObj: It is the date object to be formatted.
formatString: It is the pattern string in which the date will be shown.

Return Value: This method returns the formatted date and time.

Example 1:

index.js

// Node.js program to demonstrate the
// Date.format() method

// Importing module
const date = require('date-and-time')

// Creating object of current date and time
// by using Date()
const now = new Date();

// Formatting the date and time
// by using date.format() method
const value = date.format(now, 'YYYY/MM/DD HH:mm:ss');

// Display the result
console.log("current date and time : " + value)

Run the index.js file using the following command:

node index.js

Output:

current date and time : 2021/03/07 12:13:46

Example 2:

index.js

// Node.js program to demonstrate the
// Date.format() method

// Importing module
const date = require('date-and-time')

// Formatting the date and time
// by using date.format() method
const value = date.format((new Date('December 17, 1995 03:24:00')), 'YYYY/MM/DD HH:mm:ss');

// Display the result
console.log("date and time : " + value)

Run the index.js file using the following command:

node index.js

Output:

date and time : 1995/12/17 03:24:00

Reference: https://github.com/knowledgecode/date-and-time
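The syntax above lists an optional third utc argument that the examples do not exercise. The following supplementary sketch (an illustrative addition, not from the original article) passes true so that the pattern is evaluated against UTC instead of local time; the printed value will therefore differ from Example 1 by the machine's time-zone offset.

// Illustrative sketch: format the same instant in UTC
const date = require('date-and-time')

const now = new Date();

// Third argument true => render the pattern in UTC
const utcValue = date.format(now, 'YYYY/MM/DD HH:mm:ss', true);

console.log("current UTC date and time : " + utcValue)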
[ { "code": null, "e": 24531, "s": 24503, "text": "\n21 Jan, 2022" }, { "code": null, "e": 24710, "s": 24531, "text": "The date-and-time.Date.format() is a minimalist collection of functions for manipulating JS date and time module which is used to format the date according to a certain pattern. " }, { "code": null, "e": 24773, "s": 24710, "text": "Required Module: Install the module by npm or used it locally." }, { "code": null, "e": 24787, "s": 24773, "text": "By using npm." }, { "code": null, "e": 24820, "s": 24787, "text": "npm install date-and-time --save" }, { "code": null, "e": 24839, "s": 24820, "text": "By using CDN link." }, { "code": null, "e": 24893, "s": 24839, "text": "<script src=\"/path/to/date-and-time.min.js\"></script>" }, { "code": null, "e": 24901, "s": 24893, "text": "Syntax:" }, { "code": null, "e": 24938, "s": 24901, "text": "format(dateObj, formatString[, utc])" }, { "code": null, "e": 25007, "s": 24938, "text": "Parameters: This method takes the following arguments as parameters:" }, { "code": null, "e": 25046, "s": 25007, "text": "dateObj: It is the object of the date." }, { "code": null, "e": 25117, "s": 25046, "text": "formatString: It is the new string format in which date will be shown." }, { "code": null, "e": 25176, "s": 25117, "text": "Return Value: This method returns formatted date and time." }, { "code": null, "e": 25187, "s": 25176, "text": "Example 1:" }, { "code": null, "e": 25196, "s": 25187, "text": "index.js" }, { "code": "// Node.js program to demonstrate the // Date.format() method // Importing moduleconst date = require('date-and-time') // Creating object of current date and time // by using Date() const now = new Date(); // Formatting the date and time// by using date.format() methodconst value = date.format(now,'YYYY/MM/DD HH:mm:ss'); // Display the resultconsole.log(\"current date and time : \" + value)", "e": 25595, "s": 25196, "text": null }, { "code": null, "e": 25646, "s": 25595, "text": "Run the index.js file using the following command:" }, { "code": null, "e": 25660, "s": 25646, "text": "node index.js" }, { "code": null, "e": 25668, "s": 25660, "text": "Output:" }, { "code": null, "e": 25712, "s": 25668, "text": "current date and time : 2021/03/07 12:13:46" }, { "code": null, "e": 25723, "s": 25712, "text": "Example 2:" }, { "code": null, "e": 25732, "s": 25723, "text": "index.js" }, { "code": "// Node.js program to demonstrate the // Date.format() method // Importing moduleconst date = require('date-and-time') // Formatting the date and time// by using date.format() methodconst value = date.format((new Date('December 17, 1995 03:24:00')), 'YYYY/MM/DD HH:mm:ss'); // Display the resultconsole.log(\"date and time : \" + value)", "e": 26072, "s": 25732, "text": null }, { "code": null, "e": 26123, "s": 26072, "text": "Run the index.js file using the following command:" }, { "code": null, "e": 26137, "s": 26123, "text": "node index.js" }, { "code": null, "e": 26145, "s": 26137, "text": "Output:" }, { "code": null, "e": 26181, "s": 26145, "text": "date and time : 1995/12/17 03:24:00" }, { "code": null, "e": 26239, "s": 26181, "text": "Reference: https://github.com/knowledgecode/date-and-time" }, { "code": null, "e": 26254, "s": 26239, "text": "adnanirshad158" }, { "code": null, "e": 26271, "s": 26254, "text": "NodeJS date-time" }, { "code": null, "e": 26282, "s": 26271, "text": "NodeJS-API" }, { "code": null, "e": 26290, "s": 26282, "text": "Node.js" }, { "code": null, "e": 26307, "s": 26290, "text": "Web Technologies" }, { "code": null, 
"e": 26405, "s": 26307, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26414, "s": 26405, "text": "Comments" }, { "code": null, "e": 26427, "s": 26414, "text": "Old Comments" }, { "code": null, "e": 26484, "s": 26427, "text": "How to build a basic CRUD app with Node.js and ReactJS ?" }, { "code": null, "e": 26523, "s": 26484, "text": "How to connect Node.js with React.js ?" }, { "code": null, "e": 26550, "s": 26523, "text": "Mongoose Populate() Method" }, { "code": null, "e": 26581, "s": 26550, "text": "Express.js req.params Property" }, { "code": null, "e": 26606, "s": 26581, "text": "Mongoose find() Function" }, { "code": null, "e": 26662, "s": 26606, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 26724, "s": 26662, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 26767, "s": 26724, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 26817, "s": 26767, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
Scala | String Interpolation - GeeksforGeeks
26 Feb, 2019

String Interpolation refers to the substitution of defined variables or expressions in a given String with their respective values. String Interpolation provides an easy way to process String literals. To apply this feature of Scala, we must follow a few rules:

The String must be defined with the starting character s / f / raw.
Variables in the String must have '$' as a prefix.
Expressions must be enclosed within curly braces ({, }) and '$' is added as a prefix.

Syntax:

// x and y are defined
val str = s"Sum of $x and $y is ${x+y}"

s Interpolator: Within the String, we can access variables, object fields, function calls, etc.

Example 1: variables and expressions:

// Scala program
// for s interpolator

// Creating object
object GFG
{
    // Main method
    def main(args:Array[String])
    {
        val x = 20
        val y = 10

        // without s interpolator
        val str1 = "Sum of $x and $y is ${x+y}"

        // with s interpolator
        val str2 = s"Sum of $x and $y is ${x+y}"

        println("str1: "+str1)
        println("str2: "+str2)
    }
}

Output:

str1: Sum of $x and $y is ${x+y}
str2: Sum of 20 and 10 is 30

Example 2: function call

// Scala program
// for s interpolator

// Creating object
object GFG
{
    // adding two numbers
    def add(a:Int, b:Int):Int =
    {
        a+b
    }

    // Main method
    def main(args:Array[String])
    {
        val x = 20
        val y = 10

        // without s interpolator
        val str1 = "Sum of $x and $y is ${add(x, y)}"

        // with s interpolator
        val str2 = s"Sum of $x and $y is ${add(x, y)}"

        println("str1: " + str1)
        println("str2: " + str2)
    }
}

Output:

str1: Sum of $x and $y is ${add(x, y)}
str2: Sum of 20 and 10 is 30

f Interpolator: This interpolation helps in formatting numbers easily. To understand how format specifiers work, refer Format Specifiers.

Example 1: printing upto 2 decimal places:

// Scala program
// for f interpolator

// Creating object
object GFG
{
    // Main method
    def main(args:Array[String])
    {
        val x = 20.6

        // without f interpolator
        val str1 = "Value of x is $x%.2f"

        // with f interpolator
        val str2 = f"Value of x is $x%.2f"

        println("str1: " + str1)
        println("str2: " + str2)
    }
}

Output:

str1: Value of x is $x%.2f
str2: Value of x is 20.60

Example 2: setting width in integers:

// Scala program
// for f interpolator

// Creating object
object GFG
{
    // Main method
    def main(args:Array[String])
    {
        val x = 11

        // without f interpolator
        val str1 = "Value of x is $x%04d"

        // with f interpolator
        val str2 = f"Value of x is $x%04d"

        println(str1)
        println(str2)
    }
}

Output:

Value of x is $x%04d
Value of x is 0011

If we try to pass a Double value while formatting is done using the %d specifier, the compiler outputs an error. In case of the %f specifier, passing an Int is acceptable.

raw Interpolator: The String literal should start with 'raw'. This interpolator treats escape sequences the same as any other character in a String.

Example: printing an escape sequence:

// Scala program
// for raw interpolator

// Creating object
object GFG
{
    // Main method
    def main(args:Array[String])
    {
        // without raw interpolator
        val str1 = "Hello\nWorld"

        // with raw interpolator
        val str2 = raw"Hello\nWorld"

        println("str1: " + str1)
        println("str2: " + str2)
    }
}

Output:

str1: Hello
World
str2: Hello\nWorld
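One detail the rules above leave implicit, shown here as a supplementary sketch (not part of the original article), is printing a literal '$' inside an interpolated String: the character must be doubled as '$$', otherwise the compiler treats it as the start of a variable reference.

// Scala sketch: escaping a literal '$' in an interpolated String
object DollarEscape
{
    def main(args:Array[String])
    {
        val price = 20

        // '$$' produces a single literal '$'
        println(s"The price is $$$price")
    }
}

Output:

The price is $20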
Kubernetes - Setup
It is important to set up the Virtual Datacenter (vDC) before setting up Kubernetes. This can be considered as a set of machines that can communicate with each other over the network. For a hands-on approach, you can set up a vDC on PROFITBRICKS if you do not have a physical or cloud infrastructure set up.

Once the IaaS setup on any cloud is complete, you need to configure the Master and the Node.

Note − The setup is shown for Ubuntu machines. The same can be set up on other Linux machines as well.

Installing Docker − Docker is required on all the instances of Kubernetes. Following are the steps to install Docker.

Step 1 − Log on to the machine with the root user account.

Step 2 − Update the package information. Make sure that the apt package is working.

Step 3 − Run the following commands.

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates

Step 4 − Add the new GPG key.

$ sudo apt-key adv \
   --keyserver hkp://ha.pool.sks-keyservers.net:80 \
   --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list

Step 5 − Update the apt package index.

$ sudo apt-get update

Once all the above tasks are complete, you can start with the actual installation of the Docker engine. However, before this you need to verify that the kernel version you are using is correct (for example, with uname -r).

Run the following steps to install the Docker engine.

Step 1 − Log on to the machine.

Step 2 − Update the package index.

$ sudo apt-get update

Step 3 − Install the Docker engine using the following command.

$ sudo apt-get install docker-engine

Step 4 − Start the Docker daemon.

$ sudo service docker start

Step 5 − To verify that Docker is installed, use the following command.

$ sudo docker run hello-world

etcd needs to be installed on the Kubernetes master machine. In order to install it, run the following commands.

$ curl -L https://github.com/coreos/etcd/releases/download/v2.0.0/etcd-v2.0.0-linux-amd64.tar.gz -o etcd-v2.0.0-linux-amd64.tar.gz ->1
$ tar xzvf etcd-v2.0.0-linux-amd64.tar.gz ------>2
$ cd etcd-v2.0.0-linux-amd64 ------------>3
$ mkdir /opt/bin ------------->4
$ cp etcd* /opt/bin ----------->5

In the above set of commands −

First, we download etcd and save it with the specified name.
Then, we un-tar the tar package.
We make a directory named bin inside /opt.
Finally, we copy the extracted files to the target location.

Now we are ready to build Kubernetes. We need to install Kubernetes on all the machines of the cluster.

$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes
$ make release

The above command will create an _output dir in the root of the kubernetes folder. Next, we can extract the binaries into any directory of our choice, e.g. /opt/bin.

Next comes the networking part, wherein we actually start with the setup of the Kubernetes master and node. In order to do this, we will make an entry in the hosts file, which can be done on the node machine.

$ echo "<IP address of master machine> kube-master
< IP address of Node Machine>" >> /etc/hosts

Now, we will start with the actual configuration on the Kubernetes master.

First, we will start copying all the configuration files to their correct locations.

$ cp <Current dir. location>/kube-apiserver /opt/bin/
$ cp <Current dir. location>/kube-controller-manager /opt/bin/
$ cp <Current dir. location>/kube-kube-scheduler /opt/bin/
$ cp <Current dir. location>/kubecfg /opt/bin/
$ cp <Current dir. location>/kubectl /opt/bin/
$ cp <Current dir. location>/kubernetes /opt/bin/

The above commands will copy all the configuration files to the required location. Now we will come back to the same directory where we built the Kubernetes folder.

$ cp kubernetes/cluster/ubuntu/init_conf/kube-apiserver.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-controller-manager.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-kube-scheduler.conf /etc/init/

$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-apiserver /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-controller-manager /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-kube-scheduler /etc/init.d/

$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/

The next step is to update the copied configuration files under the /etc directory.

Configure etcd on the master using the following command.

$ ETCD_OPTS = "-listen-client-urls = http://kube-master:4001"

To configure kube-apiserver on the master, we need to edit the /etc/default/kube-apiserver file which we copied earlier.

$ KUBE_APISERVER_OPTS = "--address = 0.0.0.0 \
--port = 8080 \
--etcd_servers = <The path that is configured in ETCD_OPTS> \
--portal_net = 11.1.1.0/24 \
--allow_privileged = false \
--kubelet_port = < Port you want to configure> \
--v = 0"

We need to add the following content in /etc/default/kube-controller-manager.

$ KUBE_CONTROLLER_MANAGER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--machines = kube-minion \ -----> #this is the kubernetes node
--v = 0"

Next, configure the kube scheduler in the corresponding file.

$ KUBE_SCHEDULER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--v = 0"

Once all the above tasks are complete, we are good to go ahead and bring up the Kubernetes master. In order to do this, we will restart Docker.

$ service docker restart

The Kubernetes node will run two services: the kubelet and the kube-proxy. Before moving ahead, we need to copy the binaries we downloaded to the required folders where we want to configure the Kubernetes node.

Use the same method of copying the files that we used for the Kubernetes master. As the node will only run the kubelet and the kube-proxy, we will configure them.

$ cp <Path of the extracted file>/kubelet /opt/bin/
$ cp <Path of the extracted file>/kube-proxy /opt/bin/
$ cp <Path of the extracted file>/kubecfg /opt/bin/
$ cp <Path of the extracted file>/kubectl /opt/bin/
$ cp <Path of the extracted file>/kubernetes /opt/bin/

Now, we will copy the content to the appropriate directories.

$ cp kubernetes/cluster/ubuntu/init_conf/kubelet.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-proxy.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kubelet /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-proxy /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/

We will configure the kubelet and kube-proxy conf files. First, we configure /etc/init/kubelet.conf.

$ KUBELET_OPTS = "--address = 0.0.0.0 \
--port = 10250 \
--hostname_override = kube-minion \
--etcd_servers = http://kube-master:4001 \
--enable_server = true \
--v = 0"

For kube-proxy, we will configure /etc/init/kube-proxy.conf using the following command.

$ KUBE_PROXY_OPTS = "--etcd_servers = http://kube-master:4001 \
--v = 0"

Finally, we will restart the Docker service.

$ service docker restart

Now we are done with the configuration. You can check it by running the following command.

$ /opt/bin/kubectl get minions
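If you prefer to check the nodes from Python, the following is a minimal sketch. It assumes the official kubernetes Python client is installed (pip install kubernetes) and that a kubeconfig for the cluster exists at the default location; note that what this early release calls minions are nodes in later releases.

from kubernetes import client, config

# Load credentials from the default kubeconfig (~/.kube/config)
config.load_kube_config()

# List the registered nodes (minions)
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name)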
Training RetinaNet on Google Colab to detect pliers, hammers, and screwdrivers from KTH Handtools Dataset. | by Iustina Ivanova | Towards Data Science
In this article, RetinaNet is trained in Google Colab to detect plier, hammer and screwdriver instruments. The dataset was taken from an open source called the KTH Handtool Dataset. It consists of images of 3 types of handtools — hammer, plier and screwdriver — in different illuminations and different locations. Jupyter notebook code for the article can be found in my GitHub.

Firstly, the dataset should be downloaded from the source locally (unzip the file and rename the folder to KTH_Handtool_Dataset), then upload the folder to Google Drive. After that we should mount Google Drive in Google Colab:

from google.colab import drive
drive.mount('/content/drive')

Secondly, the data needs to be prepared to train RetinaNet: all the images in the initial dataset are saved in different folders, with a corresponding bounding box for each image. To train RetinaNet we need to create a CSV file: each line in this file should consist of the name of an image file from the dataset, the bounding box coordinates for each object, and the name of the class for the object. The following script parses all the folders (there are 3 folders — blue, white and brown backgrounds — for the images).

The first folder to parse is 'Blue_background'. The difference between this folder and 'Brown_background' or 'White_background' is that its instrument folders do not contain 'Kinect' and 'webcam' subfolders.

from bs4 import BeautifulSoup
import os
import csv

folder = '/content/drive/My Drive/KTH_Handtool_Dataset'
subfolder = ['Blue_background']
in_subfolder = ['Artificial', 'Cloudy', 'Directed']
instruments = ['hammer1', 'hammer2', 'hammer3', 'plier1', 'plier2', 'plier3', 'screw1', 'screw2', 'screw3']

with open('train.csv', mode='w') as file:
    writer = csv.writer(file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for folder_name in subfolder:
        for in_folder_name in in_subfolder:
            for instrument in instruments:
                directory_name = os.path.join(folder, folder_name, 'rgb', in_folder_name, instrument)
                directory_name_xml = os.path.join(folder, folder_name, 'bboxes', in_folder_name, instrument)
                for filename in os.listdir(directory_name_xml):
                    label = instrument
                    filename_jpg = filename[:-4] + '.jpg'
                    filename_str = os.path.join(directory_name, filename_jpg)
                    handler = open(os.path.join(directory_name_xml, filename)).read()
                    soup = BeautifulSoup(handler, "xml")
                    xmin = int(soup.xmin.string)
                    xmax = int(soup.xmax.string)
                    ymin = int(soup.ymin.string)
                    ymax = int(soup.ymax.string)
                    row = [filename_str, xmin, ymin, xmax, ymax, label]
                    writer.writerow(row)

The parameter folder is the directory where all the data is stored (the parent directory of the extracted KTH_Handtool_Dataset folder).

For 'White_background' and 'Brown_background' the script should be different, as they contain 'Kinect' and 'webcam' subfolders in the folders for the instruments ('hammer1', 'hammer2', 'hammer3', 'plier1', 'plier2', 'plier3', 'screw1', 'screw2', 'screw3'). I added this line to include these folders:

for sub_instrument in sub_instruments:

Another trick is that the 'rgb' folder and the 'bbox' folder use different names for Kinect: in some folders it is 'Kinect', in others it is 'kinect'. Therefore I decided to parse the directories for images (the 'rgb' folder) and for bounding boxes (the 'bbox' folder) in different ways.
The dictionary is used to change the names:

dict_instr = {'Kinect': 'kinect'}

Next step: if the name 'Kinect' is not in the xml path folder, then it changes to the lower case 'kinect':

dir_name_xml = os.path.join(folder, folder_name, 'bbox', in_folder_name, instrument)
if sub_instrument not in os.listdir(dir_name_xml):
    sub_instrument = dict_instr[sub_instrument]

As a result, the script for 'Brown_background' and 'White_background' is as follows:

subfolder = ['Brown_background', 'White_background']
sub_instruments = ['Kinect', 'webcam']
dict_instr = {'Kinect': 'kinect'}

# open the file in append mode to write to the end of the existing file
with open('train.csv', mode='a') as file:
    writer = csv.writer(file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for folder_name in subfolder:
        for in_folder_name in in_subfolder:
            for instrument in instruments:
                for sub_instrument in sub_instruments:
                    directory_name = os.path.join(folder, folder_name, 'rgb', in_folder_name, instrument, sub_instrument)
                    dir_name_xml = os.path.join(folder, folder_name, 'bbox', in_folder_name, instrument)
                    if sub_instrument not in os.listdir(dir_name_xml):
                        sub_instrument = dict_instr[sub_instrument]
                    directory_name_xml = os.path.join(dir_name_xml, sub_instrument)
                    for filename in os.listdir(directory_name_xml):
                        label = instrument
                        filename_jpg = filename[:-4] + '.jpg'
                        filename_str = os.path.join(directory_name, filename_jpg)
                        handler = open(os.path.join(directory_name_xml, filename)).read()
                        soup = BeautifulSoup(handler, "xml")
                        xmin = int(soup.xmin.string)
                        xmax = int(soup.xmax.string)
                        ymin = int(soup.ymin.string)
                        ymax = int(soup.ymax.string)
                        row = [filename_str, xmin, ymin, xmax, ymax, label]
                        writer.writerow(row)

After running the code above we should do some preprocessing, as some of the xml files do not have a corresponding image:

import pandas as pd
import os.path

list_indexes_to_drop = []
data = pd.read_csv("train.csv", header=None)
for i in range(len(data)):
    fname = data.iloc[i, 0]
    if not os.path.isfile(fname):
        list_indexes_to_drop.append(i)
data = data.drop(data.index[list_indexes_to_drop])
data.to_csv(path_or_buf='train.csv', index=False, header=None)
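Before splitting, it can be useful to sanity-check the cleaned file — a minimal sketch, assuming the train.csv produced above (the class label is the sixth column):

import pandas as pd

data = pd.read_csv("train.csv", header=None)
print(len(data), "annotations in total")
# column 5 holds the class label
print(data[5].value_counts())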
After preprocessing, the data should be split. All the rows in the CSV file are randomly shuffled:

data = pd.read_csv("train.csv", header=None)
data = data.sample(frac=1).reset_index(drop=True)

then the data is split into a training set (80%) and a testing set (20%) by the next code:

amount_80 = int(0.8*len(data))
train_data = data[:amount_80]
test_data = data[amount_80:]
print(len(train_data))
print(len(test_data))

We save train_data as train_annotations.csv and test_data as val_annotations.csv (the training and evaluation commands below expect these two files, without a header row):

train_data.to_csv(path_or_buf='train_annotations.csv', index=False, header=None)
test_data.to_csv(path_or_buf='val_annotations.csv', index=False, header=None)

The next step is to download pretrained weights for the neural network from the website and put them into a folder 'weights':

!mkdir weights
!wget -O /content/weights/resnet50_coco_best_v2.h5 https://github.com/fizyr/keras-retinanet/releases/download/0.5.1/resnet50_coco_best_v2.1.0.h5

We should also create the folder 'snapshots', where RetinaNet will save weights during training, and 'tensorboard', where information about training will be saved:

!mkdir /content/drive/My\ Drive/kth_article/snapshots
!mkdir /content/drive/My\ Drive/kth_article/tensorboard

The next step is to create the csv file 'classes.csv' (which contains the names of the instruments):

dict_classes = {
    'hammer1': 0, 'hammer2': 1, 'hammer3': 2,
    'plier1': 3, 'plier2': 4, 'plier3': 5,
    'screw1': 6, 'screw2': 7, 'screw3': 8}

with open('classes.csv', mode='w') as file:
    writer = csv.writer(file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for key, val in dict_classes.items():
        row = [key, val]
        print(row)
        writer.writerow(row)

We also need to install RetinaNet:

!cd ~
!git clone https://github.com/fizyr/keras-retinanet
%cd keras-retinanet
!git checkout 42068ef9e406602d92a1afe2ee7d470f7e9860df
!python setup.py install
!python setup.py build_ext --inplace

We need to return to the parent directory:

%cd ..

The next command trains the neural network:

!retinanet-train --weights weights/resnet50_coco_best_v2.h5 \
--batch-size 4 --steps 4001 --epochs 20 \
--snapshot-path snapshots --tensorboard-dir tensorboard \
csv train_annotations.csv classes.csv

train_annotations.csv: file with information about all the images from the training data; each line follows the template <path_to_image>,<xmin>,<ymin>,<xmax>,<ymax>,<label>, where xmin, ymin, xmax, ymax are the bounding box coordinates in pixels and label is the class name;

classes.csv: file with the labels for the classes; each line follows the template <class_name>,<id>
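For illustration only — the image path and the box coordinates below are hypothetical — one row of train_annotations.csv and one row of classes.csv could look like this:

# one row of train_annotations.csv (hypothetical path and box):
/content/drive/My Drive/KTH_Handtool_Dataset/Blue_background/rgb/Cloudy/hammer1/0001.jpg,512,300,690,450,hammer1

# one row of classes.csv:
hammer1,0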
After training for 3 epochs, we already get some accuracy. We can validate the model on the held-out data, and another option is to test accuracy on some real images.

To test the model on the validation set, we need to convert the weights to a testing format:

!retinanet-convert-model snapshots/resnet50_csv_03.h5 weights/resnet50_csv_03.h5

To check the results on the testing set:

!retinanet-evaluate csv val_annotations.csv classes.csv weights/resnet50_csv_03.h5

We can see that the results after 3 epochs of training are already good on the testing set, as the mean Average Precision (mAP) is 66%:

116 instances of class hammer1 with average precision: 0.7571
113 instances of class hammer2 with average precision: 0.6968
110 instances of class hammer3 with average precision: 0.8040
123 instances of class plier1 with average precision: 0.5229
119 instances of class plier2 with average precision: 0.5567
122 instances of class plier3 with average precision: 0.8953
126 instances of class screw1 with average precision: 0.4729
152 instances of class screw2 with average precision: 0.5130
131 instances of class screw3 with average precision: 0.7651
mAP: 0.6649

At the same time, the results are not as good in a real environment. To check this, we can run a RetinaNet prediction on our own image.

# show images inline
%matplotlib inline

# automatically reload modules when they have changed
%load_ext autoreload
%autoreload 2

# import keras
import keras

# import keras_retinanet
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color
# from keras_retinanet.keras_retinanet.utils.gpu import setup_gpu

# import miscellaneous modules
import matplotlib.pyplot as plt
import cv2
import os
import numpy as np
import time

# use this to change which GPU to use
gpu = 0
# set the modified tf session as backend in keras
# setup_gpu(gpu)

# adjust this to point to your downloaded/trained model
model_path = os.path.join('..', 'weights', 'resnet50_csv_04.h5')
model = models.load_model(model_path, backbone_name='resnet50')

# if the model is not converted to an inference model, use the line below
# see: https://github.com/fizyr/keras-retinanet#converting-a-training-model-to-inference-model
# model = models.convert_model(model)
# print(model.summary())

# load the label-to-name mapping for visualization purposes
# (the ids match classes.csv created for training)
labels_to_names = {
    0: 'hammer1', 1: 'hammer2', 2: 'hammer3',
    3: 'plier1', 4: 'plier2', 5: 'plier3',
    6: 'screw1', 7: 'screw2', 8: 'screw3'}

# load image
image = read_image_bgr('/content/img.jpg')

# copy to draw on
draw = image.copy()
draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)

# preprocess image for network
image = preprocess_image(image)
image, scale = resize_image(image)

# process image
start = time.time()
boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
print("processing time: ", time.time() - start)

# correct for image scale
boxes /= scale

# visualize detections
for box, score, label in zip(boxes[0], scores[0], labels[0]):
    # scores are sorted, so we can break at the first one below the threshold
    if score < 0.21:
        break
    color = label_color(label)
    b = box.astype(int)
    draw_box(draw, b, color=color)
    caption = "{} {:.3f}".format(labels_to_names[label], score)
    draw_caption(draw, b, caption)

plt.figure(figsize=(15, 15))
plt.axis('off')
plt.imshow(draw)
plt.show()

I set the score threshold to 0.21, and the model detects the instrument with a confidence of 22%. We can improve the model by training it further, for more than 4 epochs.
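Rather than hard-coding labels_to_names, the mapping can also be rebuilt from the classes.csv created earlier, so that training and inference always use the same ids — a minimal sketch, assuming classes.csv is in the working directory:

import csv

labels_to_names = {}
with open('classes.csv') as f:
    # each row is <class_name>,<id>, as written by the script above
    for class_name, class_id in csv.reader(f):
        labels_to_names[int(class_id)] = class_name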
Conclusion

This article is part of a research project in which we want to create a model that recognizes instruments in video. We used the KTH Handtool Dataset to improve the accuracy of the model. Experiments showed that additional images improve object detection. There are still some improvements that can be made: an extra dataset can be used to improve instrument detection, and the model can be trained for more epochs.

References

Focal Loss for Dense Object Detection
Keras-retinanet
An Introduction to Implementing Retinanet in Keras for Multi Object Detection on Custom Dataset
KTH Handtool Dataset
Contour hatching in Matplotlib plot
To plot a contour with hatching, we can take the following steps −

Set the figure size and adjust the padding between and around the subplots.
Create x, y and z data points using numpy.
Flatten the x and y data points.
Create a figure and a set of subplots.
Plot a filled contour with different hatches.
Create a colorbar for a scalar mappable instance.
To display the figure, use show() method.

import matplotlib.pyplot as plt
import numpy as np

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

# a grid of x and y values and a surface z defined over it
x = np.linspace(-3, 5, 150).reshape(1, -1)
y = np.linspace(-3, 5, 120).reshape(-1, 1)
z = np.cos(x) + np.sin(y)

# contourf accepts 1D x and y together with a 2D z
x, y = x.flatten(), y.flatten()

fig1, ax1 = plt.subplots()

# filled contour; successive levels cycle through the hatch patterns
cs = ax1.contourf(x, y, z, hatches=['-', '/', '\\', '//'],
                  cmap='gray', extend='both', alpha=0.5)
fig1.colorbar(cs)

plt.show()
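It can also help to outline the hatched regions. The following sketch is an addition to the recipe above, not part of the original: it overlays plain contour lines on the same filled, hatched contour. contour() and clabel() are standard Matplotlib calls; the only assumptions are the x, y, z and ax1 names carried over from the code above.

# overlay line contours on the hatched filled contour from above
lines = ax1.contour(x, y, z, colors='black', linewidths=0.5)
# label each contour line with its level value
ax1.clabel(lines, inline=True, fontsize=8)
plt.show()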
How to Install MySQL and Create a Sample Database | Towards Data Science
Data scientists and analysts are expected to be able to write and execute complex queries in SQL. If you're just getting started with SQL or are looking for a sandbox to test queries, then this guide is for you.

There are some great resources for SQL such as HackerRank, LeetCode, and W3Schools, but I think one of the best ways to gain proficiency is to practice with your own database with your SQL editor of choice.

In this guide, we'll be walking through the following steps:

Installing MySQL on macOS
Adding the MySQL shell path
Creating a user account
Creating a sample database with employee data
Writing SQL queries

MySQL is the most popular Open Source SQL database management system and is developed by Oracle Corporation. The 2020 Developer Survey by Stack Overflow confirms this claim in terms of popularity as seen below.

We'll be installing MySQL Community Server 8.0.x using a native package which is located inside a disk image (.dmg).

Download the .dmg version from here (find DMG Archive). This version will initialize the data directory and create the MySQL grant tables.

After clicking the Download button, you'll be taken to a page where you'll be asked to 'Login' or 'Sign Up' for a free account. You can bypass this by clicking No thanks, just start my download.

Go to the downloaded file, right click and Open.

Follow the instructions until you reach Configuration.

For Configuration, select Use Strong Password Encryption, which is the default. Click here to read more about MySQL passwords.

Enter a password for the root user. The root account is the default superuser account that has all privileges in all of your MySQL databases.

MySQL is now installed. If you open System Preferences, you should see MySQL in your panel as seen below.

The MySQL preference pane enables you to start, stop, and control automated startup during boot of your MySQL installation.

Open it and click Start MySQL Server if an instance is not already running. The green dots indicate that the server is running.

I personally leave the box Start MySQL when your computer starts up unchecked to save memory. Just remember to start the server after rebooting.

The shell path for a user in macOS is a set of paths in the filing system whereby the user has permissions to use certain applications, commands and programs without the need to specify the full path to that command or program in the Terminal.

The following steps will enable us to enter the command mysql in any working directory in the command line (Terminal).

Note that zsh (Z shell) is the default shell for macOS Catalina. If you're on a different version, you can try using the bash command below.

1. Open Terminal (⌘+Space, type in Terminal)
2. Once you're in Terminal, type cd to go to the home directory
3. If you're using zsh, type nano .zshrc; if you're using bash, type nano .bash_profile
4. Copy and paste these two aliases:

alias mysql=/usr/local/mysql/bin/mysql
alias mysqladmin=/usr/local/mysql/bin/mysqladmin

5. Save the file with control + O, confirm with Enter, and exit with control + X.
6. Quit (⌘+Q) Terminal and reopen it.

To test the server, enter the following command (you'll need to enter the password you created when installing MySQL):

mysqladmin -u root -p version

You may not want to be using the root account all the time. You can create various accounts and grant different levels of privileges. Here are the steps:

Log in as the root user:

mysql -u root -p

In the following command, replace user and password with your username and password of choice. I recommend you create an account using the same name as your macOS system username.

CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';

The following statement will grant all privileges to a user account on all databases. Replace user with the username you chose. Use quotes ('').

GRANT ALL ON *.* TO 'user'@'localhost' WITH GRANT OPTION;

Try logging in with the newly created user. First, type QUIT to end the current session and log in with the new credentials. For example:

mysql -u miguel -p

Tip: Because 'miguel' is also the same name as my system username, I can simply type mysql -p and omit the -u miguel part.

Type QUIT, but stay in Terminal and move on to the next section.

The Employees sample database was developed by Patrick Crews and Giuseppe Maxia and contains 4 million records. It contains fake employee data such as salaries, names, job titles, etc. Here is the schema:

First, go to the Employees database on GitHub to download the repo. Download the repo by clicking on Code 👉 Download ZIP.

In Terminal, change to the directory where you saved the file. In my case: cd Downloads

Unzip the file by running this: unzip test_db-master.zip. In case that doesn't work, you can manually open the file test_db-master.zip in Finder.

Change directory to the unzipped folder: cd test_db-master

Now you're ready to install the database. Type in the following command (replace user with your own username).

mysql -u user -p < employees.sql

To test the installation, run the following command (replace user).

mysql -u user -p < test_employees_md5.sql

The notebook below contains several simple queries to get you started.

You can also view the notebook on Jupyter Nbviewer: nbviewer.jupyter.org

If you're interested in running SQL queries in Jupyter, check out my guide on how to do it: medium.com

Please let me know if you have any questions or comments. Thanks!
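Since the notebook itself is not reproduced here, below is a small set of starter queries against the Employees schema shown above. The table and column names are the standard ones shipped with the test_db repo ('9999-01-01' is the sentinel the dataset uses for currently valid rows); adjust if your copy differs.

-- count employees per department (current assignments only)
SELECT d.dept_name, COUNT(*) AS headcount
FROM dept_emp de
JOIN departments d ON d.dept_no = de.dept_no
WHERE de.to_date = '9999-01-01'
GROUP BY d.dept_name
ORDER BY headcount DESC;

-- ten highest current salaries
SELECT e.first_name, e.last_name, s.salary
FROM employees e
JOIN salaries s ON s.emp_no = e.emp_no
WHERE s.to_date = '9999-01-01'
ORDER BY s.salary DESC
LIMIT 10;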
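If you'd rather query from Python than from the mysql shell, here is a minimal sketch using the mysql-connector-python package (install it with pip install mysql-connector-python). The user name and password are whatever you created in the user account section above.

import mysql.connector

# credentials created in the "user account" section above
cnx = mysql.connector.connect(user='miguel', password='your-password',
                              host='127.0.0.1', database='employees')
cursor = cnx.cursor()
cursor.execute("SELECT first_name, last_name FROM employees LIMIT 5;")
for first_name, last_name in cursor:
    print(first_name, last_name)
cursor.close()
cnx.close()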
Querying PostgreSQL data with BigQuery | by Antonio Cachuan | Towards Data Science
Query data in your Cloud SQL instance in real-time, reducing ELT development time and avoiding copying and moving data, thanks to BigQuery Cloud SQL federation.

As you know, Cloud SQL is a fully-managed relational database service for MySQL, PostgreSQL, and SQL Server. These kinds of databases are oriented to hosting applications or transactional data. BigQuery, on the other hand, enables analytics workloads.

In a traditional world, you have to develop a data pipeline to extract the data from your application's database. Let's discover how to take advantage of table federation, in this case from a PostgreSQL database.

In this scenario, we are responsible as Data Engineers for enabling the data from a transactional database (Cloud SQL) in the analytical zone (BigQuery). This data could be used by other processes or by other roles like Data Scientists or Machine Learning Engineers.

As usual, the company wants the latest transactional data to be consumable by our team immediately.

Thanks to BigQuery Cloud SQL federation we are able to query the data in Cloud SQL in real-time. This feature enables new alternatives to traditional batch ingestion: we can consume data from the same day or hour it was created. For example, a report on Data Studio could show the sales of the last hour, and the best part is that if you have experience with BigQuery it will not be difficult to start using it.

Before you continue

In order to follow the next steps, you need a Google Cloud account. To open an account just sign up on the GCP site and automatically get $300 in credits.

In order to reproduce this case, first we create the transactional DB. In this DB, we will generate a table to host the sales of a fictional store.

Let's create the Cloud SQL instance, in this case PostgreSQL.

If it is the first time creating a Cloud SQL instance, you need to enable the Compute Engine API; this allows GCP to create and manage Compute Engines on your behalf. Just look in the window and click the blue button.

For this case, set the smallest values for the new instance. With these values, we are assuring the cheapest price. If you want to understand the price breakdown in detail, check here.

Zone availability: Single zone
Machine type: Shared core
Storage: HDD

Creating the instance took around 10 minutes.

Then let's connect to the instance using Cloud Shell. This is the easiest way to connect. Just remember the password you set up before. Other alternatives exist, like connecting with a database client from a local machine.

After connecting, create the transaction table: a simple table with a primary key and other regular fields. Also, we insert some data into the table.

So we have our PostgreSQL database and our transactional table. Keep the insert script and feel free to insert data after the next step, simulating a new transaction.

We are prepared to move to our analytical zone.

BigQuery is the place where our data engineering and data science teams spend a bunch of their time in order to transform and get insights from the company data.

To continue with the case, BigQuery needs to establish a connection with our Cloud SQL instance. This connection will permit us to query the data we've just inserted.

In your browser, open another tab and enter the BigQuery interface. There we go to 'ADD DATA' and then click on 'External data source'.

If it's the first time, enable the BigQuery Connection API.

In the right part of the window, a form asking for the external data source shows up. Add your connection id (a string you need to copy from the Cloud SQL page), database name, and user password.

Important
- The federated table only works in the same project. This means your Cloud SQL instance and the BigQuery federated table need to be in the same project.
- It only works with Cloud SQL instances with Public IP connectivity.

That's all, as simple as that. The best part is that each change in your Cloud SQL table will be reflected in the federated query results, so there is no need to run an extraction pipeline.

Finally, just run a simple query! As you can observe, to query the table you can use the same BigQuery UI except for a special syntax, so all the usual features like scheduled queries or saving results are still available.

Another feature is that you are not limited to using only the data from Cloud SQL: you can join a federated query result with a BigQuery table. This means you could build an enrichment process that takes the transaction data from PostgreSQL and adds other variables, like customer info, to send special promotions based on their purchases.

SELECT a.customer_id, a.name, b.first_order_date
FROM bqdataset.customers AS a
LEFT OUTER JOIN EXTERNAL_QUERY(
  'projects/datapth/databaseconnection',
  '''SELECT * FROM public.transactions''') AS b
ON b.customer_id = a.customer_id
GROUP BY a.customer_id, a.name, b.first_order_date;

Pricing: You are charged for the number of bytes returned from the external query ($5 per TB).

Performance: BigQuery needs to wait for the source database to execute the external query and temporarily move data from the external data source to BigQuery.

Limitation: PostgreSQL supports many non-standard data types that are not supported in BigQuery, like money, path, uuid or box.

BigQuery Cloud SQL federation is a fantastic feature to accelerate your ELT development. If you want to go further, I recommend checking this article on incremental data ingestion pipelines and then coming back and rethinking this small pipeline for feeding a daily reporting dashboard.

PS: if you have any questions, or would like something clarified, ping me on Telegram, Facebook, or LinkedIn; I like having a data discussion 😊
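As a follow-up to the enrichment idea above: if you later want to snapshot the federated result into native BigQuery storage, standard CREATE TABLE ... AS SELECT DDL should work with EXTERNAL_QUERY as well. This is a sketch, not from the original article; the connection name and dataset are the placeholders used in the article's own example.

-- materialize the transactions from Cloud SQL into a native BigQuery table
CREATE OR REPLACE TABLE bqdataset.transactions_snapshot AS
SELECT *
FROM EXTERNAL_QUERY(
  'projects/datapth/databaseconnection',
  '''SELECT * FROM public.transactions''');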
Simple registration form using Python Tkinter
Tkinter is a Python library for developing GUIs (Graphical User Interfaces). We use the tkinter library for creating an application with a UI (User Interface), that is, to create windows and all other graphical user interface elements.

If you're using Python 3.x (which is recommended), Tkinter comes with Python as a standard package, so we don't need to install anything to use it.

Before creating a registration form in Tkinter, let's first create a simple GUI application in Tkinter.

Below is the program to create a window by just importing Tkinter and setting its title −

from tkinter import *
from tkinter import ttk

window = Tk()
window.title("Welcome to TutorialsPoint")
window.geometry('325x250')
window.configure(background="gray")
ttk.Button(window, text="Hello, Tkinter").grid()
window.mainloop()

On running the above lines of code, you will see the output something like −

Let's understand the above lines of code −

First we import all the modules we need; we have imported ttk and * (all) from the tkinter library.
To create the main window of our application, we use the Tk class.
window.title() gives the title to our window app.
window.geometry() sets the size of the window and window.configure() sets its background color.
ttk.Button() makes a button.
ttk.Button(window, text="Hello, Tkinter").grid() – window means Tk, so the button shows in the window we created, text displays the given text on the button, and grid() places it in a grid.
window.mainloop() calls the endless event loop of the window, so the window will remain open till the user closes it.

Let's try to extend our previous example by adding a couple of Labels (a label is a simple widget which displays a piece of text or an image) and a button (buttons are usually mapped directly onto a user action, which means on clicking a button, some action should occur):

from tkinter import *
from tkinter import ttk

window = Tk()
window.title("Welcome to TutorialsPoint")
window.geometry('400x400')
window.configure(background="grey")

a = Label(window, text="First Name").grid(row=0, column=0)
b = Label(window, text="Last Name").grid(row=1, column=0)
c = Label(window, text="Email Id").grid(row=2, column=0)
d = Label(window, text="Contact Number").grid(row=3, column=0)

# keep a reference to each Entry before calling grid(), because
# Entry(...).grid(...) returns None, not the widget
a1 = Entry(window)
a1.grid(row=0, column=1)
b1 = Entry(window)
b1.grid(row=1, column=1)
c1 = Entry(window)
c1.grid(row=2, column=1)
d1 = Entry(window)
d1.grid(row=3, column=1)

# label that will show the greeting after submitting
lbl = Label(window, text="")
lbl.grid(row=5, column=0)

def clicked():
    # greet the user with the first name typed into the form
    res = "Welcome to " + a1.get()
    lbl.configure(text=res)

btn = ttk.Button(window, text="Submit", command=clicked).grid(row=4, column=0)
window.mainloop()

On running the above code, we will see the output screen something like −

Now let's create something from the real world, maybe a loan interest calculator. For that, we need a couple of items (variables) to be known, like the principal amount, the loan rate (r), and the balance (Ps) after s payments.

To calculate the loan balance after s payments, we use the following formula in the program below −

Ps = (1 + r)^s * Po − (((1 + r)^s − 1) / r) * p

Where −

Rate = rate of interest, like 7.5%
i = Rate/100, the annual rate in decimal
r = period rate = i/12
Po = principal amount
Ps = balance after s payments
s = number of monthly payments
p = period (monthly) payment

So below is the interest rate calculator program, which will show a pop-up window where users can set the desired values (loan amount, rate, number of installments) and will get the monthly payment amount and the remaining loan they need to pay, with the help of the Python tkinter library.

from tkinter import *

fields = ('Annual Rate', 'Number of Payments', 'Loan Principle',
          'Monthly Payment', 'Remaining Loan')

def monthly_payment(entries):
    # period rate:
    r = (float(entries['Annual Rate'].get()) / 100) / 12
    print("r", r)
    # principal loan:
    loan = float(entries['Loan Principle'].get())
    n = float(entries['Number of Payments'].get())
    remaining_loan = float(entries['Remaining Loan'].get())
    q = (1 + r) ** n
    monthly = r * ((q * loan - remaining_loan) / (q - 1))
    monthly = ("%8.2f" % monthly).strip()
    entries['Monthly Payment'].delete(0, END)
    entries['Monthly Payment'].insert(0, monthly)
    print("Monthly Payment: %f" % float(monthly))

def final_balance(entries):
    # period rate:
    r = (float(entries['Annual Rate'].get()) / 100) / 12
    print("r", r)
    # principal loan:
    loan = float(entries['Loan Principle'].get())
    n = float(entries['Number of Payments'].get())
    monthly = float(entries['Monthly Payment'].get())
    q = (1 + r) ** n
    remaining = q * loan - ((q - 1) / r) * monthly
    remaining = ("%8.2f" % remaining).strip()
    entries['Remaining Loan'].delete(0, END)
    entries['Remaining Loan'].insert(0, remaining)
    print("Remaining Loan: %f" % float(remaining))

def makeform(root, fields):
    entries = {}
    for field in fields:
        row = Frame(root)
        lab = Label(row, width=22, text=field + ": ", anchor='w')
        ent = Entry(row)
        ent.insert(0, "0")
        row.pack(side=TOP, fill=X, padx=5, pady=5)
        lab.pack(side=LEFT)
        ent.pack(side=RIGHT, expand=YES, fill=X)
        entries[field] = ent
    return entries

if __name__ == '__main__':
    root = Tk()
    ents = makeform(root, fields)
    # pressing Enter recomputes the monthly payment
    # (the original bound an undefined fetch() here)
    root.bind('<Return>', (lambda event, e=ents: monthly_payment(e)))
    b1 = Button(root, text='Final Balance',
                command=(lambda e=ents: final_balance(e)))
    b1.pack(side=LEFT, padx=5, pady=5)
    b2 = Button(root, text='Monthly Payment',
                command=(lambda e=ents: monthly_payment(e)))
    b2.pack(side=LEFT, padx=5, pady=5)
    b3 = Button(root, text='Quit', command=root.quit)
    b3.pack(side=LEFT, padx=5, pady=5)
    root.mainloop()

From the above we see the user is able to find the final (remaining) balance and the monthly payment by entering the loan amount, rate and number of payments.
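As a quick sanity check of the formula itself, independent of the GUI, here is a small plain-Python helper (an addition, not part of the original program) that computes the balance after s monthly payments. With a payment p large enough, the balance should shrink toward zero.

def balance_after(Po, annual_rate, p, s):
    """Remaining balance after s monthly payments of p on principal Po."""
    r = (annual_rate / 100) / 12   # period (monthly) rate
    q = (1 + r) ** s
    return q * Po - ((q - 1) / r) * p

# e.g. 10,000 at 7.5% annual rate with a 500/month payment for 12 months
print(round(balance_after(10000, 7.5, 500, 12), 2))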
Determine the type of an image in Python?
In this section we are going to see how to find the type of an image file. Consider a situation where a directory contains hundreds of image files and we want to get all the files of one particular type (say jpeg). We are going to do all this programmatically using Python.

Python provides a library to determine the type of an image; one such library is imghdr.

The Python imghdr package determines the type of image contained in a file or byte stream.

If you are using Python 3.x, imghdr is part of the standard library and comes with the Python installation, so normally you don't need to install anything separately.

To verify that imghdr is available on your machine, just import the module in your Python shell:

>>> import imghdr
>>>

In case you get no error, it means imghdr is available on your machine.

The imghdr package defines the following function:

imghdr.what(filename[, h])

filename: tests the image data contained in the file named by filename and returns a string describing the image type.
h: optional; if h is given, the filename is ignored and h is assumed to contain the byte stream to test.

The image types recognized by imghdr include, among others: 'jpeg', 'png', 'gif', 'tiff', 'bmp', 'webp', 'rgb', 'pbm', 'pgm', 'ppm', 'rast', 'xbm' and 'exr'. But we can extend the list of file types which the imghdr package can recognize by appending to the imghdr.tests variable.

This variable contains a list of functions performing the individual tests. Each function takes two arguments: the byte-stream and an open file-like object. However, when what() is called with a byte-stream, the file-like object will be None. A test function returns the image type as a string, or None if it fails.

>>> import imghdr
>>> imghdr.what('clock.jpg')
'jpeg'

Below is one possible use of the imghdr package: download an image, detect its real type, and rename the file with the matching extension. Note that _setupSession() is a helper from the original snippet, assumed to return a requests.Session.

import os
import imghdr

def identify_filetype(url, imageName, folderName):
    session = _setupSession()   # assumed helper returning a requests.Session
    try:
        # timeout is another parameter that can be tuned
        image = session.get(url, timeout=5)
        with open(os.path.join(folderName, imageName), 'wb') as fout:
            fout.write(image.content)
        fileExtension = imghdr.what(os.path.join(folderName, imageName))
        if fileExtension is None:
            # not a recognized image: remove the downloaded file
            os.remove(os.path.join(folderName, imageName))
        else:
            # rename the file with the detected extension
            newName = imageName + '.' + str(fileExtension)
            os.rename(os.path.join(folderName, imageName),
                      os.path.join(folderName, newName))
    except Exception as e:
        print("failed to download one page with url of " + str(url))
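To make the "appending to imghdr.tests" remark concrete, here is a hedged sketch that teaches imghdr to recognize SVG files (SVG is not among the built-in types; the detection heuristic below is my own, not part of the standard library). imghdr passes the first 32 bytes of the file as h to each registered test.

import imghdr

def test_svg(h, f):
    """Very rough SVG sniffing on the first 32 bytes of the file."""
    if h.startswith(b'<?xml') or h.startswith(b'<svg'):
        return 'svg'

# register the custom test alongside the built-in ones
imghdr.tests.append(test_svg)

print(imghdr.what('logo.svg'))   # hypothetical file; prints 'svg' if matched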
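Coming back to the motivating situation from the beginning (hundreds of files in a directory, and we only want one type), a minimal sketch that collects every jpeg in a folder could look like this; the folder name is a placeholder.

import os
import imghdr

folder = 'images'   # hypothetical directory
jpegs = [name for name in os.listdir(folder)
         if os.path.isfile(os.path.join(folder, name))
         and imghdr.what(os.path.join(folder, name)) == 'jpeg']
print(jpegs)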
How to create a 3D-array from data frame in R?
A 3D-array is a 3-dimensional array and it is actually a collection of 2D arrays. We can create a 3D-array from a data frame in R by using the simplify2array function; this function will break the data frame into arrays that can form a 3D-array.

Consider the below data frame:

> set.seed(254)
> x<-sample(0:1,20,replace=TRUE)
> y<-rpois(20,5)
> z<-rpois(20,3)
> a<-rpois(20,5)
> b<-rpois(20,4)
> c<-rpois(20,8)
> df1<-data.frame(x,y,z,a,b,c)
> df1

   x y z a b c
1  0 4 6 9 5 5
2  0 5 1 4 2 1
3  0 6 1 4 5 6
4  1 6 3 5 4 12
5  1 9 8 6 6 11
6  1 8 2 6 2 7
7  0 4 4 6 4 4
8  1 6 2 4 3 4
9  0 5 0 3 4 9
10 0 2 2 4 3 7
11 1 6 1 5 5 7
12 1 8 1 2 4 9
13 0 3 3 4 4 11
14 1 7 3 2 6 11
15 1 8 2 6 4 15
16 0 7 1 5 2 12
17 1 6 1 2 5 7
18 1 6 6 3 2 10
19 1 7 1 5 2 5
20 1 4 2 6 2 6

Creating the array from df1, with one 2D slice per group of x:

> simplify2array(by(df1,df1$x,as.matrix))
df1$x: 0
   x y z a b c
1  0 4 6 9 5 5
2  0 5 1 4 2 1
3  0 6 1 4 5 6
7  0 4 4 6 4 4
9  0 5 0 3 4 9
10 0 2 2 4 3 7
13 0 3 3 4 4 11
16 0 7 1 5 2 12
df1$x: 1
   x y z a b c
4  1 6 3 5 4 12
5  1 9 8 6 6 11
6  1 8 2 6 2 7
8  1 6 2 4 3 4
11 1 6 1 5 5 7
12 1 8 1 2 4 9
14 1 7 3 2 6 11
15 1 8 2 6 4 15
17 1 6 1 2 5 7
18 1 6 6 3 2 10
19 1 7 1 5 2 5
20 1 4 2 6 2 6

> g1<-sample(1:4,20,replace=TRUE)
> g2<-rnorm(20,1,0.3)
> g3<-runif(20,1,2)
> g4<-rpois(20,5)
> df2<-data.frame(g1,g2,g3,g4)
> df2

   g1 g2        g3       g4
1  2  1.3239241 1.467573 6
2  1  0.7099436 1.881370 3
3  4  0.9902820 1.161732 11
4  4  0.7175320 1.814506 4
5  1  0.5081105 1.162827 7
6  2  1.4972085 1.847154 8
7  4  1.5698800 1.570466 11
8  3  0.6383917 1.471682 7
9  1  1.1184017 1.003588 5
10 4  0.4746240 1.243342 11
11 1  1.1368320 1.158829 7
12 4  1.4137269 1.289525 5
13 2  0.7776044 1.070726 7
14 4  1.3936072 1.313207 5
15 4  0.6832531 1.496392 3
16 2  0.9838087 1.807298 6
17 3  1.0371888 1.039574 9
18 4  1.7064870 1.517545 6
19 3  1.0512661 1.013286 4
20 1  1.0290387 1.899809 9

> simplify2array(by(df2,df2$g1,as.matrix))
df2$g1: 1
   g1 g2        g3       g4
2  1  0.7099436 1.881370 3
5  1  0.5081105 1.162827 7
9  1  1.1184017 1.003588 5
11 1  1.1368320 1.158829 7
20 1  1.0290387 1.899809 9
df2$g1: 2
   g1 g2        g3       g4
1  2  1.3239241 1.467573 6
6  2  1.4972085 1.847154 8
13 2  0.7776044 1.070726 7
16 2  0.9838087 1.807298 6
df2$g1: 3
   g1 g2        g3       g4
8  3  0.6383917 1.471682 7
17 3  1.0371888 1.039574 9
19 3  1.0512661 1.013286 4
df2$g1: 4
   g1 g2        g3       g4
3  4  0.9902820 1.161732 11
4  4  0.7175320 1.814506 4
7  4  1.5698800 1.570466 11
10 4  0.4746240 1.243342 11
12 4  1.4137269 1.289525 5
14 4  1.3936072 1.313207 5
15 4  0.6832531 1.496392 3
18 4  1.7064870 1.517545 6
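Note that simplify2array can only stack the per-group matrices into a true 3D array when every group has the same number of rows; with unbalanced groups, as above, it returns a list of matrices instead. A small sketch with deliberately balanced groups, where the result really is rows x columns x groups:

> set.seed(1)
> df <- data.frame(g = rep(1:2, each = 5), v1 = rnorm(10), v2 = rnorm(10))
> arr <- simplify2array(by(df, df$g, as.matrix))
> dim(arr)      # 5 3 2: five rows, three columns, two group slices
> arr[, , 1]    # the 2D slice for group 1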
How to replace NA with 0 and other values to 1 in an R data frame column?
Sometimes we want to convert a column of an R data frame into a binary column of 0s and 1s. This is especially useful when the column contains some NAs and every remaining value can be mapped to 1 because it shares some characteristic. To replace NA with 0 and every other value with 1, we can use the ifelse function together with is.na.

Consider the below data frame:

x1 <- 1:20
x2 <- sample(c(NA, rnorm(5)), 20, replace = TRUE)
df1 <- data.frame(x1, x2)
df1

x1 x2
1 1 1.6562106
2 2 NA
3 3 2.2323438
4 4 2.2323438
5 5 1.2038679
6 6 2.2323438
7 7 NA
8 8 1.6562106
9 9 NA
10 10 1.2038679
11 11 -0.6898052
12 12 2.2323438
13 13 2.2323438
14 14 1.6562106
15 15 NA
16 16 1.2038679
17 17 NA
18 18 -0.6898052
19 19 2.2323438
20 20 2.2323438

Replacing NAs with 0 and other values with 1 in column x2:

df1$x2 <- ifelse(is.na(df1$x2), 0, 1)
df1

x1 x2
1 1 1
2 2 0
3 3 1
4 4 1
5 5 1
6 6 1
7 7 0
8 8 1
9 9 0
10 10 1
11 11 1
12 12 1
13 13 1
14 14 1
15 15 0
16 16 1
17 17 0
18 18 1
19 19 1
20 20 1

A second example:

y1 <- LETTERS[1:20]
y2 <- sample(c(NA, rpois(5, 2)), 20, replace = TRUE)
df2 <- data.frame(y1, y2)
df2

y1 y2
1 A 2
2 B 5
3 C NA
4 D 3
5 E 1
6 F NA
7 G 1
8 H 3
9 I 5
10 J 5
11 K NA
12 L NA
13 M 2
14 N 5
15 O 3
16 P 2
17 Q 2
18 R 1
19 S 2
20 T NA

Replacing NAs with 0 and other values with 1 in column y2:

df2$y2 <- ifelse(is.na(df2$y2), 0, 1)
df2

y1 y2
1 A 1
2 B 1
3 C 0
4 D 1
5 E 1
6 F 0
7 G 1
8 H 1
9 I 1
10 J 1
11 K 0
12 L 0
13 M 1
14 N 1
15 O 1
16 P 1
17 Q 1
18 R 1
19 S 1
20 T 0
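The same recoding can also be written without ifelse; this variant is an addition for comparison, not part of the original tutorial. !is.na() already yields TRUE where a value is present and FALSE where it is NA, so coercing that logical vector to integer gives the desired 1/0 column:

# equivalent one-liner: TRUE/FALSE from !is.na() become 1/0
df2$y2 <- as.integer(!is.na(df2$y2))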
Practical Cython— Music Retrieval: Short Time Fourier Transform | by Stefano Bosisio | Towards Data Science
I love Cython so much, as it takes the best of the two main programming worlds: C and Python. Both languages can be combined in a straightforward way, in order to give you more computationally efficient APIs or scripts. Furthermore, coding in Cython and C helps you to understand what is underneath common Python packages such as sklearn, bringing data scientists a step further than the simple import torch and use of predefined algorithms.

In this series I am going to show you how to implement algorithms to analyse music in C and Cython, following as a reference the corresponding sklearn and scipy packages. This lesson deals with the Short-Time Fourier Transform, or STFT. This algorithm is widely used to analyse the frequencies of a signal and their evolution in time. The codes are stored in this repository:

github.com

The codes are extremely useful to understand how to structure a Cython project, subdividing the code into folders and installing the final package.

This post contains external links to the Amazon affiliate program.

Feel free to skip this section if you want to get your hands dirty with the code immediately. This is just a light introduction to the STFT and its theory.

The main aspect of the Fourier Transform is to map (or, let's say, to sketch) a signal into a frequency domain, pointing out the most important frequencies which constitute the signal itself. This mapping has wide implications in many fields: in biomedical engineering (e.g. studying frequency contributions in the electrocardiogram (ECG) to detect possible diseases or heart malfunctions [1, 2, 3]), computational science (e.g. compression algorithms such as mp3 and jpeg [4]) or finance (e.g. studying the behaviour of stock and bond prices [5]). This mapping is beneficial for studying music signals as well, as the main frequency content can be retrieved and analysed, for example, to create genre classifiers or apps like Shazam (e.g. check my post). However, it is sometimes interesting and helpful to understand the frequency evolution in time and amplitude, in order to find specific noises, to equalise frequencies in a recording session, or to create neural network algorithms that convert a speech signal to text (e.g. DeepPavlov).

In practice, the STFT divides a time signal into short segments of equal length (window_length) and then computes the Fourier transform of each segment. The resulting segment-frequency content can be plotted against time, and the resulting plot is called a spectrogram.

Practically, the STFT can be summarised in these steps:

- Take an input signal (e.g. an mp3 file)
- Multiply the signal by a window function (e.g. the Hamming function). This tapers each segment towards its extremes, avoiding the artificial discontinuities that cutting the signal would introduce, which would otherwise distort the Fourier transform (spectral leakage)
- Slide the window along the signal by the hop size and compute the Fourier transform of each segment

Fig. 1 helps to better understand what the STFT does. An input signal with a defined amplitude, in decibels, and time, in seconds, is encapsulated in N windows of size windowSize. Every hopSize samples, the window defines a signal segment, which is Fourier transformed. The output frequency, in Hertz, can be plotted as a function of time.

The last thing to remember is the Nyquist frequency. If you look at a spectrogram, you will find that the maximum plotted frequency is half of the sampling frequency of the signal, in order to avoid an issue known as aliasing in the Fourier transform. This means that, out of the N complex Fourier coefficients retrieved from a signal with sampling frequency fs (e.g. audio files usually have a sampling frequency of 44100 Hz), only half are useful, representing frequencies from 0 to fs/2.
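To make these steps concrete, here is a minimal NumPy sketch of the same loop (this snippet is illustrative only, and not the optimised Cython/C code this series builds):

import numpy as np

def stft_sketch(signal, window_size, hop_size):
    # Hamming window tapers each segment and reduces spectral leakage
    window = np.hamming(window_size)
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop_size):
        segment = signal[start:start + window_size] * window
        # rfft keeps only the non-redundant half of the spectrum (0 to fs/2)
        frames.append(np.abs(np.fft.rfft(segment)))
    # shape: (number_of_frames, window_size // 2 + 1)
    return np.array(frames)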
For more info, I definitely recommend these very valuable and useful books:

- Physics of Oscillations and Waves: With use of Matlab and Python (Undergraduate Texts in Physics)
- An Introduction to Information Theory, Symbols, Signals and Noise (Dover Books on Mathematics)
- Fourier Transforms (Dover Books on Mathematics)

Figure 2 shows a plan of action for implementing the STFT in Cython and C. The entire process can be divided into three chunks, which use Python, Cython and C respectively:

1. The Python interface deals with input/output and manipulation of the signal:
- The mp3/wav input file is opened with scipy or pydub
- The left and right channels are isolated and only channel[:,0] is analysed
- Padding is performed on the signal, so the total number of elements is a power of 2, which improves the performance of the Fourier transform library fftw

2. The Cython interface translates the Pythonic inputs to memoryviews, which can then easily be passed as pointers to the C suite. To have a concrete idea, fig. 3 shows an example of creating a memoryview in Cython from an array of zeros, np.zeros, of length n_elements.

3. The C interface performs the core STFT operations. In a nutshell:
- Define the fftw3 library domain with the fftw elements and plan. The plan is necessary to perform the Fast Fourier Transform
- Create a Hamming window
- Perform the STFT for loop: multiply the input chunk by the Hamming window, compute the Fast Fourier transform of the result, and store half of it

It is very important to highlight that this structure makes sure the most computationally intense operations are performed in the C code, and all the info is then returned to Python. This lowers the computational cost (many algorithms are implemented only in Python, not in C) and improves the overall performance of the Python code.

Code: https://github.com/Steboss/music_retrieval/blob/master/stft/installer/tester.py

The Python script handles the input audio files and prepares all the info needed to perform the STFT. Firstly, pydub can be used to open and read an mp3 file, while scipy offers a built-in function to read the wav extension (shown in code figures in the original post). Then we can define all the "constants" in our code and call the STFT.

In fig. 6, windowSize and hopSize are defined. As shown in fig. 1, these two quantities can overlap. In general, more overlap gives more analysis points and smoother results across time, but the price to pay is a higher computational cost. For this tutorial we can keep the two sizes equal, but feel free to experiment as much as possible.
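The gists embedded at this point in the original post are not reproduced here; the following is a rough reconstruction of what that Python driver does (file names and constants are placeholders):

from scipy.io import wavfile
from pydub import AudioSegment
import numpy as np

# wav files can be read directly with scipy
rate, audData = wavfile.read("song.wav")

# mp3 files are decoded with pydub, then converted to a numpy array
song = AudioSegment.from_mp3("song.mp3")
audData = np.array(song.get_array_of_samples()).reshape(-1, song.channels)
rate = song.frame_rate

# constants, and the left channel that will be passed to the STFT module
windowSize = 4096
hopSize = 4096
channel1 = audData[:, 0]
# magnitude = stft.play(channel1, rate, windowSize, hopSize)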
Code: https://github.com/Steboss/music_retrieval/blob/master/stft/installer/stft.pyx

Cython codes usually have two extensions: pyx and pxd. The former is where the main code is written, while the latter is used for declarations. In this tutorial we are going to use just the pyx extension, in order to get more familiar with Cython and to deal with just one piece of code.

The first step is to import our C code in Cython (later we will see how the C code is structured). C code is imported with a cdef extern from YOUR_C_CODE: declaration. Following that, the name of the C function we want to use has to be declared, along with all the types, exactly as if we were in C code. Thus, the function stft returns an array as a pointer, so double* for the stft C function. The arguments are the input audio channel double *wav_data, the number of samples in the final STFT signal int samples, the window and hop sizes int windowSize, int hop_Size, and the sampling frequency and the length of the audio channel int sample_freq, int length. samples can be computed as:

samples = int((length / windowSize / 2) * ((windowSize / 2) + 1))

The next step is to create and declare our main function, which will be called in our Python script (its name here is play):

cpdef play(audData, rate, windowSize, hopSize):

cpdef declares that the following function will contain both C and Python code and types. Further options are def, as per normal Python, in case we want to call a Python function directly, and cdef, which is used to declare a pure C function.

The second important concept is to pass all the arguments to the main stft C function, as shown in fig. 8. Basically, memoryviews can easily be passed to a C function, and they will be treated as 1D arrays. First, to create a memoryview it is necessary to define the type and the size. Lines 2 and 6 of fig. 8 show how to create a 1D memoryview:

cdef double[:] name_of_memory_view = content

where [:] defines a 1D memoryview. Finally, memoryviews can be passed to a C function as &name_of_memory_view[0], as done in line 8 for &audData_view[0] and &magnitude[0]. In the next tutorials I will show you in more detail how to deal with 2D memoryviews and pass them to C, as this involves the memory buffer. It is worth noticing that in line 8 &magnitude[0] is passed to C as a vector of 0s. This lets C work with an already initialised pointer and fill it with the STFT values.
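Putting those pieces together, here is a sketch of what stft.pyx looks like. This is reconstructed from the prose above, which mentions both a returned double* and a magnitude buffer, so the sketch passes the buffer explicitly; the argument order and exact signature are guesses, and the repository holds the authoritative version:

import numpy as np

cdef extern from "stft.c":
    double* stft(double *wav_data, double *magnitude, int samples,
                 int windowSize, int hop_Size, int sample_freq, int length)

cpdef play(audData, rate, windowSize, hopSize):
    cdef int length = len(audData)
    cdef int samples = int((length / windowSize / 2) * ((windowSize / 2) + 1))
    # typed 1D memoryviews over contiguous buffers
    cdef double[:] audData_view = np.ascontiguousarray(audData, dtype=np.float64)
    cdef double[:] magnitude = np.zeros(samples, dtype=np.float64)
    # pointers to the first elements go to the C routine, which fills magnitude
    stft(&audData_view[0], &magnitude[0], samples, windowSize, hopSize, rate, length)
    return np.asarray(magnitude)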
Finally, half of the Fourier samples are saved in the storage array, which will be returned to Cython later on, and the chunkPosition is updated by hop_size/2 so it is possible to proceed to the next window — do you remember Nyquist frequency? At this point it is possible to fill the magnitude array, which was a memoryview created in Cython, with the Fourier transformed signals from storage The values in magnitude will be thus transferred from C to Cython. In Cython it is then possible to manipulate the magnitude array to return a numpy array, that can be read in Python: This is the very last step before having fun in analysing STFT. To install the create Python package in Cython I usually divide my codes in two folder: c_code , where stft.c is stored and installer , where I save pyx and Python files. In the installer folder I create setup.py file, which is a Python file to install and compile Cython and C codes. The code is pretty straightforward: In the code, fig. 15, we can see that the main elements to compile codes are Extension and setup . The former define the name of the current extension stft.stft , the list of pyx files [stft.pyx] , the list of libraries and extra argument needed to compile the C code . In particular, in this context we need to specifiy fftw3and m as additional libraries , where one declares the usage of the Fast Fourier Transform library and the second the math library (when a C code is compiled usually m is added as -lm and fftw3 as-lfftw3 ). The extra arguments make sure we have an O3 optimization process in compiling C, while std=C99 is to inform the compiler that we are using a standard version C99. Finally, setup stores the package information, namely the name of the final module — so Python imports the current modules as import stft — the version control, the extensions for C and any directive for Cython. As regards Cython directives I recommend you to give a look at the guide. Once the code is ready we can run setup.py as: python setup.py build_ext --inplace build_ext is the command to build the current extension and --inplace allows to install the current package in the working directory. Code:https://github.com/Steboss/music_retrieval/tree/master/stft/examples A good example for studying STFT is Aphex Twin “Equation” music If we analyse the spectrogram between 300 and 350 s: start = 300*rate end = 350*rate channel1 = audData[:,0][start:end] setting windowSize=4096 and hopSize=4096 a demoniac face will be uncovered within the music spectrogram: I hope you like this first introduction to Cython :) feel free to send me an email for questions or comments at: stefanobosisio1@gmail.com
[ { "code": null, "e": 183, "s": 172, "text": "medium.com" }, { "code": null, "e": 654, "s": 183, "text": "I love so much Cython, as it takes the best of the two main programming worlds: C and Python. Both languages can be combined together in a straightforward way, in order to give you more computational efficient APIs or scripts. Furthermore, coding in Cython and C helps you to understand what is underneath common python packages such as sklearn, bringing data scientists to a further step, which is quite far from the simple import torch and predefined algorithms usage." }, { "code": null, "e": 824, "s": 654, "text": "In this series I am going to show you how to implement in C and Cyhton algorithms to analyse music, following as a reference the corresponding sklearn and scipypackages." }, { "code": null, "e": 1024, "s": 824, "text": "This lesson deals with the Short-time Fourier Transform or STFT. This algorithm is widely used to analyse frequencies of a signal and their evolution in time. The codes are stored in this repository:" }, { "code": null, "e": 1035, "s": 1024, "text": "github.com" }, { "code": null, "e": 1175, "s": 1035, "text": "The codes are extremely useful to understand how to structure a Cython project, subdividing codes in folders and install the final package." }, { "code": null, "e": 1237, "s": 1175, "text": "This post contains external link to Amazon affiliate program." }, { "code": null, "e": 1389, "s": 1237, "text": "Feel free to skip this section if you want immediately to get your hands dirty with the code. This is just a light introduction to STFT and its theory." }, { "code": null, "e": 2415, "s": 1389, "text": "The main aspect of the Fourier Transform is to map (or let’s say to sketch) a signal into a frequency domain, pointing out the most important frequencies which constitutes the signal itself. This mapping has wide implications in many fields: in biomedical engineering (e.g. studying frequency contributions in electrocardiogram (ECG) to detect possible diseases or heart malfunctions1 2 3), computational science (e.g. compression algorithms such as mp3, jpeg 4) or finance (e.g. studying stock prices, bond prices behaviours5). This mapping is beneficial for studying music signals as well, as the main frequency content can be retrieved and analysed, for example, to create a genres classifiers or app like Shazam (e.g. check my post ). However, it is sometimes interesting and helpful to understand the frequency evolution in time and amplitude, in order to find specific noises or to equalise frequencies in a recording session, or to create neural network algorithms to convert a speech signal to text (e.g. DeepPavlov)." }, { "code": null, "e": 2667, "s": 2415, "text": "In practice, STFT divides a time singal into short segments of equal length ( window_length ) and then the Fourier transform of each segment is computed. The resulting segment-frequency content can be plotted against time and it is called spectrogram." }, { "code": null, "e": 2723, "s": 2667, "text": "Practically, the STFT can be summarised in these steps:" }, { "code": null, "e": 2761, "s": 2723, "text": "Take an input signal ( e.g. mp3 file)" }, { "code": null, "e": 3020, "s": 2761, "text": "Multiply the signal by a window function (e.g. Hamming function). 
Designing Intelligent Python Dictionaries | by Chaitanya Baweja | Towards Data Science
Last week, while working on a hobby project, I encountered a very interesting design problem:

How do you deal with wrong user input?

Let me explain. Dictionaries in Python represent pairs of keys and values. For example:

student_grades = {'John': 'A', 'Mary': 'C', 'Rob': 'B'}

# To check the grade of John, we call
print(student_grades['John'])
# Output: A

What happens when you try to access a key which is not present?

print(student_grades['Maple'])

# Output:
KeyError                         Traceback (most recent call last)
<ipython-input-6-51fec14f477a> in <module>
----> print(student_grades['Maple'])
KeyError: 'Maple'

You receive a KeyError.

KeyError occurs whenever a dict() object is asked for the value of a key that is not present in the dictionary. This error becomes extremely common when you take user input. For example:

student_name = input("Please enter student name: ")
print(student_grades[student_name])

This tutorial provides several ways in which we can deal with key errors in Python dictionaries. We will work our way towards building an intelligent Python dictionary that can deal with a variety of typos in user input.

A very lazy method would be to return a default value whenever the requested key is not present. This can be done using the get() method:

default_grade = 'Not Available'
print(student_grades.get('Maple', default_grade))
# Output:
# Not Available

You can read more about the get() method here.

Let's suppose you have a dictionary containing country-specific population data. The code (shown as a gist in the original post) asks the user for a country name and prints that country's population:

# Output
Please enter Country Name: France
65

But let's say the user types the input as 'france'. Currently, all keys in our dictionary have their first letter capitalised. What will the output be?

Please enter Country Name: france
-----------------------------------------------------------------
KeyError                         Traceback (most recent call last)
<ipython-input-6-51fec14f477a> in <module>
      2 Country_Name = input('Please enter Country Name: ')
      3
----> 4 print(population_dict[Country_Name])
KeyError: 'france'

As 'france' is not a key in the dictionary, we receive an error.

A simple workaround: store all country names in lower-case letters, and convert whatever input the user types to lower-case as well.

Please enter Country Name: france
65

But now let's say the user enters 'Frrance' instead of 'France'. How can we deal with this?

One way would be to use conditional statements. We check whether the given user_input is available as a key. If it is not, then we print a message. It's best to put this in a loop and break on a special flag input like exit. The loop will keep running until the user enters exit.
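The gists with this loop are not reproduced here; a rough reconstruction of the idea looks like this (the population figures are illustrative only):

population_dict = {'china': 143, 'india': 139, 'usa': 33, 'france': 65}

while True:
    country_name = input('Please enter Country Name (or exit): ').lower()
    if country_name == 'exit':
        break
    if country_name in population_dict:
        print(population_dict[country_name])
    else:
        print('Country not found, please check the spelling.')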
While the above method 'works', it's not the 'intelligent method' that we promised in the intro. We want our program to be robust and to detect simple typos like frrance and chhina (very similar to a Google search).

After some research, I was able to find a couple of libraries that could suit our purpose. My favourite is the standard Python library difflib.

difflib can be used to compare files, strings, lists, etc., and to produce difference information in various formats. The module provides a variety of classes and functions for comparing sequences. We will use two features from difflib: SequenceMatcher and get_close_matches. Let's take a brief look at both of them. You can skip to the next section if you are only curious about the application.

The SequenceMatcher class is used to compare two sequences. We define its object as follows:

difflib.SequenceMatcher(isjunk=None, a='', b='', autojunk=True)

- isjunk: used to specify junk elements (white-spaces, newlines, etc.) that we wish to ignore while comparing two blocks of text. We pass None here.
- a and b: the strings that we wish to compare.
- autojunk: a heuristic that automatically treats certain sequence items as junk.

Let's use SequenceMatcher to compare the two strings chinna and china (the original post shows this in a code figure). In that code, we used the ratio() method, which returns a measure of the sequences' similarity as a float in the range [0, 1].

Now we have a way of comparing two strings based on similarity. But what happens if we wish to find all the strings (stored in a database) that are similar to a particular string? get_close_matches() returns a list containing the best matches from a list of possibilities.

difflib.get_close_matches(word, possibilities, n=3, cutoff=0.6)

- word: the string for which matches are required.
- possibilities: the list of strings against which word is matched.
- Optional n: the maximum number of close matches to return. By default 3; must be greater than 0.
- Optional cutoff: the similarity ratio must be higher than this value. By default 0.6.

The best n matches among the possibilities are returned in a list, sorted by similarity score, most similar first.

Now that we have difflib at our disposal, let's bring everything together and build a typo-proof Python dictionary. We have to focus on the case when the Country_Name given by the user is not present in population_dict.keys(). In this case, we try to find a country with a name similar to the user input and output its population:

# pass country_name in word and dict keys in possibilities
maybe_country = get_close_matches(Country_Name, population_dict.keys())
# then we pick the first (most similar) string from the returned list
print(population_dict[maybe_country[0]])

The final code needs to account for some other cases as well, for example when there is no similar string at all, or when we want the user to confirm that the suggested match is the string they meant (the full version appears as a code figure in the original post).

The goal of this tutorial was to provide you with a guide towards building dictionaries that are robust to user input. We looked at ways to deal with a variety of errors like type-case errors and small typos. We can build further on this and look at a variety of other applications, for example using NLP to better understand user input and to bring up nearby results in search engines.

Hope you found this tutorial useful!

Further reading:
- Introduction to Python Dictionaries
- difflib - Helpers for computing deltas
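As the final gist is only referenced above, here is a self-contained sketch that combines the pieces; the confirmation prompt and population values are illustrative choices:

from difflib import SequenceMatcher, get_close_matches

# similarity ratio between two strings, as described above
print(SequenceMatcher(None, 'chinna', 'china').ratio())   # ~0.91

population_dict = {'china': 143, 'india': 139, 'usa': 33, 'france': 65}

while True:
    country_name = input('Please enter Country Name (or exit): ').lower()
    if country_name == 'exit':
        break
    if country_name in population_dict:
        print(population_dict[country_name])
        continue
    matches = get_close_matches(country_name, population_dict.keys())
    if not matches:
        print('No country with a similar name was found.')
    elif input(f'Did you mean {matches[0]}? (y/n): ') == 'y':
        print(population_dict[matches[0]])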
Redis - String Get Command
Redis GET command is used to get the value stored at the specified key. If the key does not exist, nil is returned. If the value stored at the key is not a string, an error is returned.

Return value: a simple string reply, i.e. the value of the key, or nil when the key does not exist.

Following is the basic syntax of the Redis GET command.

redis 127.0.0.1:6379> GET KEY_NAME

First, set a key in Redis and then get it.

redis 127.0.0.1:6379> SET tutorialspoint redis
OK
redis 127.0.0.1:6379> GET tutorialspoint
"redis"
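The two failure modes described above can be seen directly in redis-cli; the key names below are illustrative:

redis 127.0.0.1:6379> GET missingkey
(nil)
redis 127.0.0.1:6379> LPUSH mylist "a"
(integer) 1
redis 127.0.0.1:6379> GET mylist
(error) WRONGTYPE Operation against a key holding the wrong kind of value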
Convert short to String in Java
The valueOf() method is used in Java to convert a short to a String. Let's say we have the following short value:

short val = 20;

Converting the above short value to a String:

String.valueOf(val);

Example:

public class Demo {
   public static void main(String[] args) {
      short shortVal = 55;
      // converting short to String
      String str = String.valueOf(shortVal);
      System.out.println("String: " + str);
   }
}

Output:

String: 55
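An equivalent alternative, added here for completeness, is the static Short.toString() method:

public class Demo2 {
   public static void main(String[] args) {
      short shortVal = 55;
      // Short.toString() produces the same result as String.valueOf()
      String str = Short.toString(shortVal);
      System.out.println("String: " + str);
   }
}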
Bernoulli Distribution in R - GeeksforGeeks
21 Apr, 2021

Bernoulli Distribution is a special case of the Binomial distribution where only a single trial is performed. It is a discrete probability distribution for a Bernoulli trial (a trial that has only two outcomes, i.e. either success or failure). For example, it can represent a coin toss where the probability of getting a head is 0.5 and of getting a tail is 0.5. It is the probability distribution of a random variable that takes the value 1 with probability p and the value 0 with probability q = 1 - p. The Bernoulli distribution is the special case of the binomial distribution with n = 1.

The probability mass function f of this distribution, over the possible outcomes k, is given by:

f(k; p) = p        if k = 1
f(k; p) = 1 - p    if k = 0

The above relation can also be expressed as:

f(k; p) = p^k * (1 - p)^(1 - k),  for k in {0, 1}

In R, the Rlab package provides 4 functions for the Bernoulli distribution, and all of them are discussed below.

dbern( ) measures the density (probability mass) function of the Bernoulli distribution.

Syntax: dbern(x, prob, log = FALSE)

Parameters:
- x: vector of quantiles
- prob: probability of success on each trial
- log: logical; if TRUE, probabilities p are given as log(p)

In statistics, it is given by the probability mass function shown above, f(x; p) = p^x * (1 - p)^(1 - x) for x in {0, 1}.

Example:

# import Rlab library
library(Rlab)

# x values for the
# dbern( ) function
x <- seq(0, 10, by = 1)

# apply dbern( ) function
# to x to obtain corresponding
# Bernoulli PDF
y <- dbern(x, prob = 0.7)

# plot dbern values
plot(y, type = "o")

Output: a plot of the dbern values (the figure from the original article is not reproduced here).

pbern( ) gives the distribution function of the Bernoulli distribution. The distribution function, or cumulative distribution function (CDF), describes the probability that a variate X takes on a value less than or equal to a number x.

Syntax: pbern(q, prob, lower.tail = TRUE, log.p = FALSE)

Parameters:
- q: vector of quantiles
- prob: probability of success on each trial
- lower.tail: logical
- log.p: logical; if TRUE, probabilities p are given as log(p)

Example:

# import Rlab library
library(Rlab)

# x values for the
# pbern( ) function
x <- seq(0, 10, by = 1)

# apply pbern( ) function
# to x to obtain corresponding
# Bernoulli CDF
y <- pbern(x, prob = 0.7)

# plot pbern values
plot(y, type = "o")

Output: the plot represents the Cumulative Distribution Function of the Bernoulli Distribution in R (figure not reproduced here).

qbern( ) gives the quantile function of the Bernoulli distribution. A quantile function, in statistical terms, specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability.

Syntax: qbern(p, prob, lower.tail = TRUE, log.p = FALSE)

Parameters:
- p: vector of probabilities
- prob: probability of success on each trial
- lower.tail: logical
- log.p: logical; if TRUE, probabilities p are given as log(p)

Example:

# import Rlab library
library(Rlab)

# x values for the
# qbern( ) function
x <- seq(0, 1, by = 0.2)

# apply qbern( ) function
# to x to obtain corresponding
# Bernoulli QF
y <- qbern(x, prob = 0.5, lower.tail = TRUE, log.p = FALSE)

# plot qbern values
plot(y, type = "o")

Output: a plot of the qbern values (figure not reproduced here).
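As a side note, added here and not part of the original article, the Bernoulli case can be reproduced without Rlab, because it is the size = 1 special case of the binomial functions in base R:

# base-R equivalents of the Rlab functions, using the special case size = 1
dbinom(1, size = 1, prob = 0.7)    # same as dbern(1, prob = 0.7) -> 0.7
pbinom(0, size = 1, prob = 0.7)    # same as pbern(0, prob = 0.7) -> 0.3
qbinom(0.5, size = 1, prob = 0.5)  # same as qbern(0.5, prob = 0.5)
rbinom(10, size = 1, prob = 0.5)   # same as rbern(10, prob = 0.5)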
rbern( ) is used to generate a vector of random numbers which are Bernoulli distributed.

Syntax: rbern(n, prob)

Parameters:
- n: number of observations
- prob: probability of success on each trial

Example:

# import Rlab library
library(Rlab)
set.seed(98999)

# sample size
N <- 1000

# generate random variables using
# rbern( ) function
random_values <- rbern(N, prob = 0.5)

# print the values
print(random_values)

# plot of randomly
# drawn density
hist(random_values, breaks = 10, main = "")

Output:

[1] 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 1 1 0 1 0 1 1 1 0 1 0 1 1 0 1 0 1 1 0 1 1 1 1 0 1 0 0 0 0 1 1 1 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 1
[68] 1 1 1 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 0 1 0 1 0 1 0 1 1 1 0 1
[135] 0 0 0 0 0 1 1 0 1 1 1 1 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 1 0 0 1 0 0 1 0 1 1 0 1 1 0 1 1 1 1 1 0 0 1 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0
[202] 1 1 1 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 1 1 0 1 1 0 0 1 0 0 1 1 1 0 1 1 0 1 1 0 0 0 0 0 0 0 1 1 0 0 1 1 0 0 1 1 1 1 1 0 1 1 1 1 1 0 1 0
[269] 0 0 1 1 0 1 0 1 0 1 0 0 0 0 1 1 0 0 1 1 1 1 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 0 1 0 1 0 1 0 0 0 1 0 0 1 1 0 1 0 1 0 0 0 0 1 0 0 0
[336] 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 0 1 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1 0 1 0 0 1 0 1 1 0 1 1 1 0 0 1 0 1 1 0 0 0
[403] 1 0 1 0 0 1 1 0 1 1 1 1 0 1 1 1 0 1 0 0 1 0 1 0 1 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 0 1 0 1
[470] 1 0 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 1 1 1 1 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1 1 0
[537] 0 0 1 0 0 0 1 1 0 0 1 0 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 0 1 0 0 1 1 0 1 1 0 1 1 1 0 0 0 0 0 1 1 0 1 1 1 1 0 0 1 0
[604] 1 0 1 0 1 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 0 1 1 1 1 0 1 0 0 1 0 0 0 1 0 1 0 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 0 0 0 0 1 0 0 1 0 1 0 1 1
[671] 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 0 1 1 0 1 1 1 0 1 1 0 1 0 0 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 0 0 0 1 0 0 1 1
[738] 1 0 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1 0 1 0 1 1 1 1 1 1 0 1 0 1 1 0 1 1
[805] 0 1 0 1 0 1 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0 0 0 1 0 1 1 0 0 0 0 1 1 0 1 0 1 1 0 1 0 1 1 1 1
[872] 1 1 0 1 0 0 1 0 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 1 0 1 1 0 0 1 1 1 0 1 0 0 1 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 0 0
[939] 1 1 1 1 0 1 0 0 1 1 1 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 1 0 0 0 1 1 1 0 0 0 0 1

The histogram produced by hist() represents the randomly drawn numbers from the Bernoulli distribution in R (figure not reproduced here).
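As a small sanity check, added here and not part of the original article, the moments of the simulated draws should be close to their theoretical values:

# the sample mean of the draws estimates prob, and the sample
# variance estimates prob * (1 - prob) (about 0.5 and 0.25 here)
mean(random_values)
var(random_values)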
[ { "code": null, "e": 25242, "s": 25214, "text": "\n21 Apr, 2021" }, { "code": null, "e": 25823, "s": 25242, "text": "Bernoulli Distribution is a special case of Binomial distribution where only a single trial is performed. It is a discrete probability distribution for a Bernoulli trial (a trial that has only two outcomes i.e. either success or failure). For example, it can be represented as a coin toss where the probability of getting the head is 0.5 and getting a tail is 0.5. It is a probability distribution of a random variable that takes value 1 with probability p and the value 0 with probability q=1-p. The Bernoulli distribution is a special case of the binomial distribution with n=1." }, { "code": null, "e": 25917, "s": 25823, "text": "The probability mass function f of this distribution, over possible outcomes k, is given by :" }, { "code": null, "e": 25964, "s": 25919, "text": "The above relation can also be expressed as:" }, { "code": null, "e": 26089, "s": 25964, "text": "In R Programming Language, there are 4 built-in functions to for Bernoulli distribution and all of them are discussed below." }, { "code": null, "e": 26179, "s": 26089, "text": "dbern( ) function in R programming measures density function of Bernoulli distribution. " }, { "code": null, "e": 26215, "s": 26179, "text": "Syntax: dbern(x, prob, log = FALSE)" }, { "code": null, "e": 26226, "s": 26215, "text": "Parameter:" }, { "code": null, "e": 26249, "s": 26226, "text": "x: vector of quantiles" }, { "code": null, "e": 26292, "s": 26249, "text": "prob: probability of success on each trial" }, { "code": null, "e": 26351, "s": 26292, "text": "log: logical; if TRUE, probabilities p are given as log(p)" }, { "code": null, "e": 26396, "s": 26351, "text": "In statistics, it is given by below formula:" }, { "code": null, "e": 26406, "s": 26396, "text": "Example: " }, { "code": null, "e": 26408, "s": 26406, "text": "R" }, { "code": "# import Rlab librarylibrary(Rlab) # x values for the# dbern( ) functionx <- seq(0, 10, by = 1) # using dbern( ) function# to x to obtain corresponding# Bernoulli PDFy <- dbern(x, prob = 0.7) # plot dbern valuesplot(y, type = \"o\")", "e": 26645, "s": 26408, "text": null }, { "code": null, "e": 26653, "s": 26645, "text": "Output:" }, { "code": null, "e": 26948, "s": 26653, "text": "pbern( ) function in R programming giver the distribution function for the Bernoulli distribution. The distribution function or cumulative distribution function (CDF) or cumulative frequency function, describes the probability that a variate X takes on a value less than or equal to a number x." }, { "code": null, "e": 27005, "s": 26948, "text": "Syntax: pbern(q, prob, lower.tail = TRUE, log.p = FALSE)" }, { "code": null, "e": 27016, "s": 27005, "text": "Parameter:" }, { "code": null, "e": 27039, "s": 27016, "text": "q: vector of quantiles" }, { "code": null, "e": 27082, "s": 27039, "text": "prob: probability of success on each trial" }, { "code": null, "e": 27101, "s": 27082, "text": "lowe.tail: logical" }, { "code": null, "e": 27163, "s": 27101, "text": "log.p: logical; if TRUE, probabilities p are given as log(p)." 
}, { "code": null, "e": 27172, "s": 27163, "text": "Example:" }, { "code": null, "e": 27174, "s": 27172, "text": "R" }, { "code": "# import Rlab librarylibrary(Rlab) # x values for the# pbern( ) functionx <- seq(0, 10, by = 1) # using pbern( ) function# to x to obtain corresponding# Bernoulli CDFy <- pbern(x, prob = 0.7) # plot pbern valuesplot(y, type = \"o\") ", "e": 27417, "s": 27174, "text": null }, { "code": null, "e": 27425, "s": 27417, "text": "Output:" }, { "code": null, "e": 27520, "s": 27425, "text": "The above plot represents the Cumulative Distribution Function of Bernoulli Distribution in R." }, { "code": null, "e": 27784, "s": 27520, "text": "qbern( ) gives the quantile function for the Bernoulli distribution. A quantile function in statistical terms specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability." }, { "code": null, "e": 27841, "s": 27784, "text": "Syntax: qbern(p, prob, lower.tail = TRUE, log.p = FALSE)" }, { "code": null, "e": 27852, "s": 27841, "text": "Parameter:" }, { "code": null, "e": 27880, "s": 27852, "text": "p: vector of probabilities." }, { "code": null, "e": 27924, "s": 27880, "text": "prob: probability of success on each trial." }, { "code": null, "e": 27944, "s": 27924, "text": "lower.tail: logical" }, { "code": null, "e": 28006, "s": 27944, "text": "log.p: logical; if TRUE, probabilities p are given as log(p)." }, { "code": null, "e": 28016, "s": 28006, "text": "Example: " }, { "code": null, "e": 28018, "s": 28016, "text": "R" }, { "code": "# import Rlab librarylibrary(Rlab) # x values for the# qbern( ) functionx <- seq(0, 1, by = 0.2) # using qbern( ) function# to x to obtain corresponding# Bernoulli QFy <- qbern(x, prob = 0.5,lower.tail = TRUE, log.p = FALSE) # plot qbern valuesplot(y, type = \"o\")", "e": 28290, "s": 28018, "text": null }, { "code": null, "e": 28298, "s": 28290, "text": "Output:" }, { "code": null, "e": 28413, "s": 28298, "text": "rbern( ) function in R programming is used to generate a vector of random numbers which are Bernoulli distributed." }, { "code": null, "e": 28436, "s": 28413, "text": "Syntax: rbern(n, prob)" }, { "code": null, "e": 28447, "s": 28436, "text": "Parameter:" }, { "code": null, "e": 28474, "s": 28447, "text": "n: number of observations." }, { "code": null, "e": 28504, "s": 28474, "text": "prob: number of observations." 
}, { "code": null, "e": 28513, "s": 28504, "text": "Example:" }, { "code": null, "e": 28515, "s": 28513, "text": "R" }, { "code": "# import Rlab librarylibrary(Rlab)set.seed(98999) # sample sizeN <- 1000 # generate random variables using# rbern( ) functionrandom_values <- rbern(N, prob = 0.5) # print the valuesprint(random_values) # plot of randomly# drawn densityhist(random_values,breaks = 10,main = \"\")", "e": 28802, "s": 28515, "text": null }, { "code": null, "e": 28811, "s": 28802, "text": "Output: " }, { "code": null, "e": 28952, "s": 28811, "text": " [1] 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 1 1 0 1 0 1 1 1 0 1 0 1 1 0 1 0 1 1 0 1 1 1 1 0 1 0 0 0 0 1 1 1 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 1" }, { "code": null, "e": 29093, "s": 28952, "text": " [68] 1 1 1 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 0 1 0 1 0 1 0 1 1 1 0 1" }, { "code": null, "e": 29234, "s": 29093, "text": " [135] 0 0 0 0 0 1 1 0 1 1 1 1 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 1 0 0 1 0 0 1 0 1 1 0 1 1 0 1 1 1 1 1 0 0 1 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0" }, { "code": null, "e": 29375, "s": 29234, "text": " [202] 1 1 1 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 1 1 0 1 1 0 0 1 0 0 1 1 1 0 1 1 0 1 1 0 0 0 0 0 0 0 1 1 0 0 1 1 0 0 1 1 1 1 1 0 1 1 1 1 1 0 1 0" }, { "code": null, "e": 29516, "s": 29375, "text": " [269] 0 0 1 1 0 1 0 1 0 1 0 0 0 0 1 1 0 0 1 1 1 1 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 0 1 0 1 0 1 0 0 0 1 0 0 1 1 0 1 0 1 0 0 0 0 1 0 0 0" }, { "code": null, "e": 29657, "s": 29516, "text": " [336] 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 0 1 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1 0 1 0 0 1 0 1 1 0 1 1 1 0 0 1 0 1 1 0 0 0" }, { "code": null, "e": 29798, "s": 29657, "text": " [403] 1 0 1 0 0 1 1 0 1 1 1 1 0 1 1 1 0 1 0 0 1 0 1 0 1 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 0 1 0 1" }, { "code": null, "e": 29939, "s": 29798, "text": " [470] 1 0 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 1 1 1 1 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1 1 0" }, { "code": null, "e": 30080, "s": 29939, "text": " [537] 0 0 1 0 0 0 1 1 0 0 1 0 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 0 1 0 0 1 1 0 1 1 0 1 1 1 0 0 0 0 0 1 1 0 1 1 1 1 0 0 1 0" }, { "code": null, "e": 30221, "s": 30080, "text": " [604] 1 0 1 0 1 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 0 1 1 1 1 0 1 0 0 1 0 0 0 1 0 1 0 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 0 0 0 0 1 0 0 1 0 1 0 1 1" }, { "code": null, "e": 30362, "s": 30221, "text": " [671] 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 0 1 1 0 1 1 1 0 1 1 0 1 0 0 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 0 0 0 1 0 0 1 1" }, { "code": null, "e": 30503, "s": 30362, "text": " [738] 1 0 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1 0 1 0 1 1 1 1 1 1 0 1 0 1 1 0 1 1" }, { "code": null, "e": 30644, "s": 30503, "text": " [805] 0 1 0 1 0 1 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0 0 0 1 0 1 1 0 0 0 0 1 1 0 1 0 1 1 0 1 0 1 1 1 1" }, { "code": null, "e": 30785, "s": 30644, "text": " [872] 1 1 0 1 0 0 1 0 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 1 0 1 1 0 0 1 1 1 0 1 0 0 1 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 0 0" }, { "code": null, "e": 30916, "s": 30785, "text": " [939] 1 1 1 1 0 1 0 0 1 1 1 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 1 0 0 0 1 1 1 0 0 0 0 1" }, { "code": null, "e": 30997, "s": 30916, "text": "The above plot represents Randomly Drawn Numbers of Bernoulli Distribution in R." 
}, { "code": null, "e": 31004, "s": 30997, "text": "Picked" }, { "code": null, "e": 31018, "s": 31004, "text": "R-Mathematics" }, { "code": null, "e": 31029, "s": 31018, "text": "R Language" }, { "code": null, "e": 31127, "s": 31029, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 31136, "s": 31127, "text": "Comments" }, { "code": null, "e": 31149, "s": 31136, "text": "Old Comments" }, { "code": null, "e": 31201, "s": 31149, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 31239, "s": 31201, "text": "How to Change Axis Scales in R Plots?" }, { "code": null, "e": 31274, "s": 31239, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 31332, "s": 31274, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 31369, "s": 31332, "text": "How to import an Excel File into R ?" }, { "code": null, "e": 31418, "s": 31369, "text": "How to filter R DataFrame by values in a column?" }, { "code": null, "e": 31468, "s": 31418, "text": "How to filter R dataframe by multiple conditions?" }, { "code": null, "e": 31520, "s": 31468, "text": "How to change the order of bars in bar chart in R ?" }, { "code": null, "e": 31537, "s": 31520, "text": "R - if statement" } ]
Connecting to the Internet Using Command Line in Linux - GeeksforGeeks
10 May, 2020
Many times you may use a Linux system that does not have a GUI after installation and needs an internet connection to set up a desktop environment; you may also use Linux servers without a GUI and need to connect over a wireless network using the command line. Below are the steps to connect to a wireless network using the command line.
Determine your network interface
The first thing you need to do is determine your wireless interface; to do so, give the following command:
iwconfig
This will list all the active network interfaces. Most of the time it will be wlan0 for your wireless network, but it can be something else depending on your hardware.
Turn on your wireless interface
Now you need to ensure that your network interface is up and working; to do so, give the following command:
sudo ifconfig wlan0 up
wlan0 is your network interface; make sure you change it if yours is different.
Scan for available wireless access points
Now you will need to scan for all the available access points; to do so, give the following command:
sudo iwlist scan | more
where more gives you controlled scrolling: the list could be long, and since you are working in the command-line interface you do not want entries to scroll past where you cannot scroll back. Look at the ESSID; that is the name of your wireless network. To find an open network, just check for items that show Encryption Key set to off.
Create a WPA supplicant configuration file
The most common and widely used tool is WPA supplicant; most distros include it by default. Just give the command:
wpa_passphrase
If you see an error here, you are in a deadlock situation: you cannot use this tool because it is not installed. To create a configuration file for wpa_supplicant, run the following command:
wpa_passphrase ESSID > /etc/wpa_supplicant/wpa_supplicant.conf
where ESSID is your access point name, which you noted from the iwlist command. After running the command the prompt waits for input: type the security key of the access point you need to connect to and press Enter, and the prompt ends.
After creating the file, check that the command worked; just give the command:
cd /etc/wpa_supplicant
Type the following:
tail wpa_supplicant.conf
and you should see something like below:
network={
ssid="yournetwork"
#psk="yourpassword"
psk=564871f3638a28fd6f68sdd1fe41d1c75f0124ad34536a3f0747fe417432d888888
}
Find the name of your wireless driver
Before connecting properly there is one more piece of information you will need: the name of the driver for your wireless network card. Just give the command:
wpa_supplicant -help | more
The command will list the drivers section, which will look like this:
drivers:
nl80211 = Linux nl80211/cfg80211
wext = Linux wireless extensions (generic)
wired = Wired Ethernet driver
none = no driver (RADIUS server/WPS ER)
Now, in this case, the appropriate driver is nl80211; this will be used for connecting below.
Connect to the internet
The first step is to run the wpa_supplicant command:
sudo wpa_supplicant -B -D "driver" -i "interface" -c /etc/wpa_supplicant/wpa_supplicant.conf
where "driver" will be your driver (nl80211 in my case) without double quotes and "interface" will be your interface (wlan0 in my case) without double quotes.
Finally run the command:
sudo dhclient
This runs the DHCP client, dhclient, which will establish network routing on the local network. To check connectivity, you can simply ping any website.
Linux-Unix Write From Home
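If you find yourself repeating these steps, they can be scripted. The sketch below is a hypothetical wrapper written in Python with the standard subprocess module; the interface name, driver and configuration path are assumptions taken from the steps above and must be adjusted for your own machine.
# Hypothetical automation of the steps above using Python's
# standard subprocess module. The interface, driver and config
# path below are assumptions; adjust them for your machine.
import subprocess

INTERFACE = "wlan0"    # from `iwconfig`
DRIVER = "nl80211"     # from `wpa_supplicant -help`
CONF = "/etc/wpa_supplicant/wpa_supplicant.conf"

def run(cmd):
    # Echo each command, then run it, raising on failure
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Bring the wireless interface up
run(["sudo", "ifconfig", INTERFACE, "up"])

# Start wpa_supplicant in the background (-B) with the
# driver, interface and configuration file chosen above
run(["sudo", "wpa_supplicant", "-B", "-D", DRIVER,
     "-i", INTERFACE, "-c", CONF])

# Ask the DHCP client for an address and default route
run(["sudo", "dhclient", INTERFACE])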
[ { "code": null, "e": 24526, "s": 24498, "text": "\n10 May, 2020" }, { "code": null, "e": 24878, "s": 24526, "text": "Many of the times you may use a Linux system that does not have a GUI after install and it needs an internet connection to set up a desktop environment, also you may use Linux servers without a GUI and you need to connect over a wireless network using the command line. Below you will see Steps to connect to a wireless network using the command line." }, { "code": null, "e": 24911, "s": 24878, "text": "Determine your Network Interface" }, { "code": null, "e": 25019, "s": 24911, "text": "The first thing you need to do is determining your Wireless Interface, to do so give the following command:" }, { "code": null, "e": 25028, "s": 25019, "text": "iwconfig" }, { "code": null, "e": 25200, "s": 25028, "text": "This will list out all the active network interfaces, most of the time it will be a wlan0 for your wireless network but can be something other, depending on your hardware." }, { "code": null, "e": 25232, "s": 25200, "text": "Turn on your Wireless Interface" }, { "code": null, "e": 25339, "s": 25232, "text": "Now you need to ensure that your network interface is up and working, to do so give the following command." }, { "code": null, "e": 25363, "s": 25339, "text": "sudo ifconfig wlan0 up " }, { "code": null, "e": 25446, "s": 25363, "text": "wlan0 is your network interface, make sure you change it if your one is different." }, { "code": null, "e": 25488, "s": 25446, "text": "Scan for available wireless access points" }, { "code": null, "e": 25587, "s": 25488, "text": "Now you will need to scan for all the available Access points, to do so give the following command" }, { "code": null, "e": 25611, "s": 25587, "text": "sudo iwlist scan | more" }, { "code": null, "e": 25947, "s": 25611, "text": "where more will help you get systematic scroll as the list could be long and you do not want that some entries disappear and you cannot scroll up as you are working in the command-line interface. Look at the ESSID, that is the name of your wireless network. To find an open network just check items that show Encryption Key set to off." }, { "code": null, "e": 25990, "s": 25947, "text": "Create a WPA supplicant configuration file" }, { "code": null, "e": 26108, "s": 25990, "text": "The most common and widely tool used is WPA supplicant, most of the distros have it in default, just give the command" }, { "code": null, "e": 26123, "s": 26108, "text": "wpa_passphrase" }, { "code": null, "e": 26309, "s": 26123, "text": "Now if you see any error you are in a deadlock situation as you cannot use this tool or it’s not installed. To create a configuration file for wpa_supplicant, run the following command:" }, { "code": null, "e": 26372, "s": 26309, "text": "wpa_passphrase ESSID > /etc/wpa_supplicant/wpa_supplicant.conf" }, { "code": null, "e": 26642, "s": 26372, "text": "Where ESSID will be your Access point name which you have noted from iwlist command, now after running the command your prompt is still not ended, now you need to type the security key of the Access point you need to connect to and press Enter and your prompt ends now." 
}, { "code": null, "e": 26710, "s": 26642, "text": "After creating file check if the command worked, just give command:" }, { "code": null, "e": 26733, "s": 26710, "text": "cd /etc/wpa_supplicant" }, { "code": null, "e": 26753, "s": 26733, "text": "Type the following:" }, { "code": null, "e": 26778, "s": 26753, "text": "tail wpa_supplicant.conf" }, { "code": null, "e": 26819, "s": 26778, "text": "and you should see something like below:" }, { "code": null, "e": 26942, "s": 26819, "text": "network={\nssid=\"yournetwork\"\n#psk=\"yourpassword\"\npsk=564871f3638a28fd6f68sdd1fe41d1c75f0124ad34536a3f0747fe417432d888888\n}" }, { "code": null, "e": 26976, "s": 26942, "text": "Find name of your wireless driver" }, { "code": null, "e": 27134, "s": 26976, "text": "Before proper connectivity there is more piece of information you will need which is the name of your driver of wireless network card, just give the command:" }, { "code": null, "e": 27162, "s": 27134, "text": "wpa_supplicant -help | more" }, { "code": null, "e": 27234, "s": 27162, "text": "The command will list the section of drivers which will look like this:" }, { "code": null, "e": 27390, "s": 27234, "text": "drivers:\nnl80211 = Linux nl80211/cfg80211\nwext = Linux wireless extensions (generic)\nwired = Wired Ethernet driver\nnone = no driver (RADIUS server/WPS ER)\n" }, { "code": null, "e": 27486, "s": 27390, "text": "Now, in this case, my appropriate driver is nl80211, this will be used in further connectivity." }, { "code": null, "e": 27510, "s": 27486, "text": "Connect to the internet" }, { "code": null, "e": 27564, "s": 27510, "text": "The first step is to run the wpa_supplicant command :" }, { "code": null, "e": 27657, "s": 27564, "text": "sudo wpa_supplicant –B -D “driver” -i “interface” -c /etc/wpa_supplicant/wpa_supplicant.conf" }, { "code": null, "e": 27814, "s": 27657, "text": "where “driver” will be your driver(nl80211 in my case) without double quotes and “interface” will be your interface(wlan0 in my case) without double quotes." }, { "code": null, "e": 27839, "s": 27814, "text": "Finally run the command:" }, { "code": null, "e": 27853, "s": 27839, "text": "sudo dhclient" }, { "code": null, "e": 28017, "s": 27853, "text": "This is for the DCHP client –dhclient– which will establish networking routing on the local Network. Now still to check connectivity you can just ping any website." }, { "code": null, "e": 28028, "s": 28017, "text": "Linux-Unix" }, { "code": null, "e": 28044, "s": 28028, "text": "Write From Home" }, { "code": null, "e": 28142, "s": 28044, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28151, "s": 28142, "text": "Comments" }, { "code": null, "e": 28164, "s": 28151, "text": "Old Comments" }, { "code": null, "e": 28202, "s": 28164, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 28237, "s": 28202, "text": "ZIP command in Linux with examples" }, { "code": null, "e": 28272, "s": 28237, "text": "tar command in Linux with examples" }, { "code": null, "e": 28313, "s": 28272, "text": "SORT command in Linux/Unix with examples" }, { "code": null, "e": 28351, "s": 28313, "text": "UDP Server-Client implementation in C" }, { "code": null, "e": 28387, "s": 28351, "text": "Convert integer to string in Python" }, { "code": null, "e": 28423, "s": 28387, "text": "Convert string to integer in Python" }, { "code": null, "e": 28484, "s": 28423, "text": "How to set input type date in dd-mm-yyyy format using HTML ?" 
}, { "code": null, "e": 28500, "s": 28484, "text": "Python infinity" } ]
Convert given time into words - GeeksforGeeks
29 Apr, 2021
Given a time in the format hh:mm (12-hour format), 0 < hh < 12, 0 <= mm < 60, the task is to convert it into words as shown in the examples:
Input : h = 5, m = 0
Output : five o' clock

Input : h = 6, m = 24
Output : twenty four minutes past six
Corner cases are m = 0, m = 15, m = 30 and m = 45.
6:00 six o'clock
6:10 ten minutes past six
6:15 quarter past six
6:30 half past six
6:45 quarter to seven
6:47 thirteen minutes to seven
The idea is to use an if-else-if chain to determine the time in words. Based on the minutes, we can categorise the time in words into 8 cases: minutes equal to 0, 1, 15, 30, 45 or 59, minutes in the range below 30, and minutes above 30. Check the value of minutes and print accordingly. Below is the implementation of this approach:
C++
// C++ program to convert time into words
#include <bits/stdc++.h>
using namespace std;

// Print Time in words.
void printWords(int h, int m)
{
    char nums[][64] = {
        "zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
        "twenty", "twenty one", "twenty two", "twenty three",
        "twenty four", "twenty five", "twenty six", "twenty seven",
        "twenty eight", "twenty nine",
    };

    if (m == 0)
        printf("%s o' clock\n", nums[h]);
    else if (m == 1)
        printf("one minute past %s\n", nums[h]);
    else if (m == 59)
        printf("one minute to %s\n", nums[(h % 12) + 1]);
    else if (m == 15)
        printf("quarter past %s\n", nums[h]);
    else if (m == 30)
        printf("half past %s\n", nums[h]);
    else if (m == 45)
        printf("quarter to %s\n", nums[(h % 12) + 1]);
    else if (m <= 30)
        printf("%s minutes past %s\n", nums[m], nums[h]);
    else if (m > 30)
        printf("%s minutes to %s\n", nums[60 - m], nums[(h % 12) + 1]);
}

// Driver Program
int main()
{
    int h = 6;
    int m = 24;
    printWords(h, m);
    return 0;
}
Java
// Java program to convert time into words
class GFG {

    // Print Time in words.
    static void printWords(int h, int m)
    {
        String nums[] = {
            "zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "nine",
            "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
            "twenty", "twenty one", "twenty two", "twenty three",
            "twenty four", "twenty five", "twenty six", "twenty seven",
            "twenty eight", "twenty nine",
        };

        if (m == 0)
            System.out.println(nums[h] + " o' clock");
        else if (m == 1)
            System.out.println("one minute past " + nums[h]);
        else if (m == 59)
            System.out.println("one minute to " + nums[(h % 12) + 1]);
        else if (m == 15)
            System.out.println("quarter past " + nums[h]);
        else if (m == 30)
            System.out.println("half past " + nums[h]);
        else if (m == 45)
            System.out.println("quarter to " + nums[(h % 12) + 1]);
        else if (m <= 30)
            System.out.println(nums[m] + " minutes past " + nums[h]);
        else if (m > 30)
            System.out.println(nums[60 - m] + " minutes to " + nums[(h % 12) + 1]);
    }

    // Driver code
    public static void main(String[] args)
    {
        int h = 6;
        int m = 24;
        printWords(h, m);
    }
}

// This code is contributed by ihritik
Python3
# Python3 program to convert
# time into words

# Print Time in words.
def printWords(h, m):
    nums = ["zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "nine",
            "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
            "twenty", "twenty one", "twenty two", "twenty three",
            "twenty four", "twenty five", "twenty six", "twenty seven",
            "twenty eight", "twenty nine"]

    if m == 0:
        print(nums[h], "o' clock")
    elif m == 1:
        print("one minute past", nums[h])
    elif m == 59:
        print("one minute to", nums[(h % 12) + 1])
    elif m == 15:
        print("quarter past", nums[h])
    elif m == 30:
        print("half past", nums[h])
    elif m == 45:
        print("quarter to", nums[(h % 12) + 1])
    elif m <= 30:
        print(nums[m], "minutes past", nums[h])
    elif m > 30:
        print(nums[60 - m], "minutes to", nums[(h % 12) + 1])

# Driver Code
h = 6
m = 24

printWords(h, m)

# This code is contributed
# by Princi Singh
C#
// C# program to convert time into words
using System;

class GFG {

    // Print Time in words.
    static void printWords(int h, int m)
    {
        string[] nums = {
            "zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "nine",
            "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
            "twenty", "twenty one", "twenty two", "twenty three",
            "twenty four", "twenty five", "twenty six", "twenty seven",
            "twenty eight", "twenty nine",
        };

        if (m == 0)
            Console.WriteLine(nums[h] + " o' clock");
        else if (m == 1)
            Console.WriteLine("one minute past " + nums[h]);
        else if (m == 59)
            Console.WriteLine("one minute to " + nums[(h % 12) + 1]);
        else if (m == 15)
            Console.WriteLine("quarter past " + nums[h]);
        else if (m == 30)
            Console.WriteLine("half past " + nums[h]);
        else if (m == 45)
            Console.WriteLine("quarter to " + nums[(h % 12) + 1]);
        else if (m <= 30)
            Console.WriteLine(nums[m] + " minutes past " + nums[h]);
        else if (m > 30)
            Console.WriteLine(nums[60 - m] + " minutes to " + nums[(h % 12) + 1]);
    }

    // Driver code
    public static void Main()
    {
        int h = 6;
        int m = 24;
        printWords(h, m);
    }
}

// This code is contributed by ihritik
PHP
<?php
// PHP program to convert
// time into words

// Print Time in words.
function printWords($h, $m)
{
    $nums = array("zero", "one", "two", "three", "four",
                  "five", "six", "seven", "eight", "nine",
                  "ten", "eleven", "twelve", "thirteen", "fourteen",
                  "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
                  "twenty", "twenty one", "twenty two", "twenty three",
                  "twenty four", "twenty five", "twenty six", "twenty seven",
                  "twenty eight", "twenty nine");

    if ($m == 0)
        echo $nums[$h], " o' clock\n";
    else if ($m == 1)
        echo "one minute past ", $nums[$h], "\n";
    else if ($m == 59)
        echo "one minute to ", $nums[($h % 12) + 1], "\n";
    else if ($m == 15)
        echo "quarter past ", $nums[$h], "\n";
    else if ($m == 30)
        echo "half past ", $nums[$h], "\n";
    else if ($m == 45)
        echo "quarter to ", $nums[($h % 12) + 1], "\n";
    else if ($m <= 30)
        echo $nums[$m], " minutes past ", $nums[$h], "\n";
    else if ($m > 30)
        echo $nums[60 - $m], " minutes to ", $nums[($h % 12) + 1], "\n";
}

// Driver Code
$h = 6;
$m = 24;

printWords($h, $m);

// This code is contributed by aj_36
?>
Javascript
<script>
// Javascript program to convert time into words

// Print Time in words.
function printWords(h, m)
{
    let nums = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine",
                "ten", "eleven", "twelve", "thirteen", "fourteen",
                "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
                "twenty", "twenty one", "twenty two", "twenty three",
                "twenty four", "twenty five", "twenty six", "twenty seven",
                "twenty eight", "twenty nine"];

    if (m == 0)
        document.write(nums[h] + " o' clock" + "<br>");
    else if (m == 1)
        document.write("one minute past " + nums[h] + "<br>");
    else if (m == 59)
        document.write("one minute to " + nums[(h % 12) + 1] + "<br>");
    else if (m == 15)
        document.write("quarter past " + nums[h] + "<br>");
    else if (m == 30)
        document.write("half past " + nums[h] + "<br>");
    else if (m == 45)
        document.write("quarter to " + nums[(h % 12) + 1] + "<br>");
    else if (m <= 30)
        document.write(nums[m] + " minutes past " + nums[h] + "<br>");
    else if (m > 30)
        document.write(nums[60 - m] + " minutes to " + nums[(h % 12) + 1] + "<br>");
}

let h = 6;
let m = 24;

printWords(h, m);
</script>
Output :
twenty four minutes past six
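To make the corner cases easy to verify, the small harness below (an addition, not part of the original solution) drives the Python printWords function from above over each case listed at the start of the article.
# Exercise the Python printWords function above on the
# corner cases (m = 0, 1, 15, 30, 45, 59) plus two general cases
test_cases = [(6, 0), (6, 1), (6, 10), (6, 15),
              (6, 30), (6, 45), (6, 47), (6, 59)]

for h, m in test_cases:
    print(f"{h}:{m:02d} -> ", end="")
    printWords(h, m)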
This article is contributed by Anuj Chauhan.
jit_t ihritik princi singh mukesh07
date-time-program School Programming
[ { "code": null, "e": 25574, "s": 25546, "text": "\n29 Apr, 2021" }, { "code": null, "e": 25713, "s": 25574, "text": "Given a time in the format of hh:mm (12-hour format) 0 < hh < 12, 0 <= mm < 60. The task is to convert it into words as shown:Examples : " }, { "code": null, "e": 25818, "s": 25713, "text": "Input : h = 5, m = 0\nOutput : five o' clock\n\nInput : h = 6, m = 24\nOutput : twenty four minutes past six" }, { "code": null, "e": 25870, "s": 25818, "text": "Corner cases are m = 0, m = 15, m = 30 and m = 45. " }, { "code": null, "e": 26007, "s": 25870, "text": "6:00 six o'clock\n6:10 ten minutes past six\n6:15 quarter past six\n6:30 half past six\n6:45 quarter to seven\n6:47 thirteen minutes to seven" }, { "code": null, "e": 26381, "s": 26009, "text": "The idea is to use the if-else-if statement to determine the time in words. According to the above-given example, on the basis of minutes, we can categorize time in words into 8, which are minutes equal to 0, 15, 30, 45, 1, 59, and in a range less than 30 or greater than 30. Check the value of minutes and print accordingly.Below is the implementation of this approach: " }, { "code": null, "e": 26385, "s": 26381, "text": "C++" }, { "code": null, "e": 26390, "s": 26385, "text": "Java" }, { "code": null, "e": 26398, "s": 26390, "text": "Python3" }, { "code": null, "e": 26401, "s": 26398, "text": "C#" }, { "code": null, "e": 26405, "s": 26401, "text": "PHP" }, { "code": null, "e": 26416, "s": 26405, "text": "Javascript" }, { "code": "// C++ program to convert time into words#include <bits/stdc++.h>using namespace std; // Print Time in words.void printWords(int h, int m){ char nums[][64] = { \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\", \"twenty\", \"twenty one\", \"twenty two\", \"twenty three\", \"twenty four\", \"twenty five\", \"twenty six\", \"twenty seven\", \"twenty eight\", \"twenty nine\", }; if (m == 0) printf(\"%s o' clock\\n\", nums[h]); else if (m == 1) printf(\"one minute past %s\\n\", nums[h]); else if (m == 59) printf(\"one minute to %s\\n\", nums[(h % 12) + 1]); else if (m == 15) printf(\"quarter past %s\\n\", nums[h]); else if (m == 30) printf(\"half past %s\\n\", nums[h]); else if (m == 45) printf(\"quarter to %s\\n\", nums[(h % 12) + 1]); else if (m <= 30) printf(\"%s minutes past %s\\n\", nums[m], nums[h]); else if (m > 30) printf(\"%s minutes to %s\\n\", nums[60 - m], nums[(h % 12) + 1]);} // Driven Programint main(){ int h = 6; int m = 24; printWords(h, m); return 0;}", "e": 27813, "s": 26416, "text": null }, { "code": "// Java program to convert time into wordsclass GFG{ // Print Time in words. 
static void printWords(int h, int m) { String nums[] = { \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\", \"twenty\", \"twenty one\", \"twenty two\", \"twenty three\", \"twenty four\", \"twenty five\", \"twenty six\", \"twenty seven\", \"twenty eight\", \"twenty nine\", }; if (m == 0) System.out.println(nums[h] + \" o' clock \"); else if (m == 1) System.out.println(\"one minute past \" + nums[h]); else if (m == 59) System.out.println(\"one minute to \" + nums[(h % 12) + 1]); else if (m == 15) System.out.println(\"quarter past \" + nums[h]); else if (m == 30) System.out.println(\"half past \" + nums[h]); else if (m == 45) System.out.println(\"quarter to \" + nums[(h % 12) + 1]); else if (m <= 30) System.out.println( nums[m] + \" minutes past \" + nums[h]); else if (m > 30) System.out.println( nums[60 - m] + \" minutes to \" + nums[(h % 12) + 1]); } // Driven code public static void main(String []args) { int h = 6; int m = 24; printWords(h, m); }} // This code is contributed by ihritik", "e": 29655, "s": 27813, "text": null }, { "code": "# Python3 program to convert# time into words # Print Time in words.def printWords(h, m): nums = [\"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\", \"twenty\", \"twenty one\", \"twenty two\", \"twenty three\", \"twenty four\", \"twenty five\", \"twenty six\", \"twenty seven\", \"twenty eight\", \"twenty nine\"]; if (m == 0): print(nums[h], \"o' clock\"); elif (m == 1): print(\"one minute past\", nums[h]); elif (m == 59): print(\"one minute to\", nums[(h % 12) + 1]); elif (m == 15): print(\"quarter past\", nums[h]); elif (m == 30): print(\"half past\", nums[h]); elif (m == 45): print(\"quarter to\", (nums[(h % 12) + 1])); elif (m <= 30): print(nums[m],\"minutes past\", nums[h]); elif (m > 30): print(nums[60 - m], \"minutes to\", nums[(h % 12) + 1]); # Driver Codeh = 6;m = 24; printWords(h, m); # This code is contributed# by Princi Singh", "e": 30799, "s": 29655, "text": null }, { "code": "// C# program to convert time into wordsusing System; class GFG{ // Print Time in words. 
static void printWords(int h, int m) { string [] nums = { \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\", \"twenty\", \"twenty one\", \"twenty two\", \"twenty three\", \"twenty four\", \"twenty five\", \"twenty six\", \"twenty seven\", \"twenty eight\", \"twenty nine\", }; if (m == 0) Console.WriteLine(nums[h] + \" o' clock \"); else if (m == 1) Console.WriteLine(\"one minute past \" + nums[h]); else if (m == 59) Console.WriteLine(\"one minute to \" + nums[(h % 12) + 1]); else if (m == 15) Console.WriteLine(\"quarter past \" + nums[h]); else if (m == 30) Console.WriteLine(\"half past \" + nums[h]); else if (m == 45) Console.WriteLine(\"quarter to \" + nums[(h % 12) + 1]); else if (m <= 30) Console.WriteLine( nums[m] + \" minutes past \" + nums[h]); else if (m > 30) Console.WriteLine( nums[60 - m] + \" minutes to \" + nums[(h % 12) + 1]); } // Driven code public static void Main() { int h = 6; int m = 24; printWords(h, m); }} // This code is contributed by ihritik", "e": 32586, "s": 30799, "text": null }, { "code": "<?php// PHP program to convert// time into words // Print Time in words.function printWords($h, $m){ $nums = array(\"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\", \"twenty\", \"twenty one\", \"twenty two\", \"twenty three\", \"twenty four\", \"twenty five\", \"twenty six\", \"twenty seven\", \"twenty eight\", \"twenty nine\"); if ($m == 0) echo $nums[$h], \"o' clock\\n\" ; else if ($m == 1) echo \"one minute past \", $nums[$h], \"\\n\"; else if ($m == 59) echo \"one minute to \", $nums[($h % 12) + 1], \"\\n\"; else if ($m == 15) echo \"quarter past \", $nums[$h], \"\\n\"; else if ($m == 30) echo \"half past \", $nums[$h],\"\\n\"; else if ($m == 45) echo \"quarter to \", ($nums[($h % 12) + 1]), \"\\n\"; else if ($m <= 30) echo $nums[$m], \" minutes past \", $nums[$h],\"\\n\"; else if ($m > 30) echo $nums[60 - $m], \" minutes to \", $nums[($h % 12) + 1], \"\\n\";} // Driver Code$h = 6;$m = 24; printWords($h, $m); // This code is contributed by aj_36?>", "e": 33967, "s": 32586, "text": null }, { "code": "<script> // Javascript program to convert time into words // Print Time in words. 
function printWords(h, m) { let nums = [ \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\", \"twenty\", \"twenty one\", \"twenty two\", \"twenty three\", \"twenty four\", \"twenty five\", \"twenty six\", \"twenty seven\", \"twenty eight\", \"twenty nine\", ]; if (m == 0) document.write(nums[h] + \" o' clock \" + \"</br>\"); else if (m == 1) document.write(\"one minute past \" + nums[h] + \"</br>\"); else if (m == 59) document.write(\"one minute to \" + nums[(h % 12) + 1] + \"</br>\"); else if (m == 15) document.write(\"quarter past \" + nums[h] + \"</br>\"); else if (m == 30) document.write(\"half past \" + nums[h] + \"</br>\"); else if (m == 45) document.write(\"quarter to \" + nums[(h % 12) + 1] + \"</br>\"); else if (m <= 30) document.write( nums[m] + \" minutes past \" + nums[h] + \"</br>\"); else if (m > 30) document.write( nums[60 - m] + \" minutes to \" + nums[(h % 12) + 1] + \"</br>\"); } let h = 6; let m = 24; printWords(h, m); </script>", "e": 35585, "s": 33967, "text": null }, { "code": null, "e": 35596, "s": 35585, "text": "Output : " }, { "code": null, "e": 35625, "s": 35596, "text": "twenty four minutes past six" }, { "code": null, "e": 36046, "s": 35625, "text": "This article is contributed by Anuj Chauhan. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 36052, "s": 36046, "text": "jit_t" }, { "code": null, "e": 36060, "s": 36052, "text": "ihritik" }, { "code": null, "e": 36073, "s": 36060, "text": "princi singh" }, { "code": null, "e": 36082, "s": 36073, "text": "mukesh07" }, { "code": null, "e": 36100, "s": 36082, "text": "date-time-program" }, { "code": null, "e": 36119, "s": 36100, "text": "School Programming" }, { "code": null, "e": 36217, "s": 36119, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 36236, "s": 36217, "text": "Exceptions in Java" }, { "code": null, "e": 36257, "s": 36236, "text": "Constructors in Java" }, { "code": null, "e": 36284, "s": 36257, "text": "Ternary Operator in Python" }, { "code": null, "e": 36308, "s": 36284, "text": "Inline Functions in C++" }, { "code": null, "e": 36359, "s": 36308, "text": "Pure Virtual Functions and Abstract Classes in C++" }, { "code": null, "e": 36378, "s": 36359, "text": "Destructors in C++" }, { "code": null, "e": 36434, "s": 36378, "text": "Difference between Abstract Class and Interface in Java" }, { "code": null, "e": 36460, "s": 36434, "text": "Python Exception Handling" }, { "code": null, "e": 36486, "s": 36460, "text": "Exception Handling in C++" } ]
Binomial Coefficient in C++
Binomial coefficient, denoted as C(n, k) or nCk, is defined as the coefficient of x^k in the binomial expansion of (1 + x)^n.
The binomial coefficient also gives the number of ways in which k items are chosen from among n objects, i.e. the k-combinations of an n-element set. The order of selection of items is not considered.
Here, we are given two parameters n and k and we have to return the value of the binomial coefficient C(n, k).
Input : n = 8 and k = 3
Output : 56
There can be multiple solutions to this problem.
There is a method to calculate the value of C(n, k) using recursion. The standard recursive formula for the binomial coefficients is:
c(n, k) = c(n-1, k-1) + c(n-1, k)
c(n, 0) = c(n, n) = 1
The implementation of the recursive approach that uses the above formula:
#include <iostream>
using namespace std;

int binomialCoefficients(int n, int k) {
    if (k == 0 || k == n)
        return 1;
    return binomialCoefficients(n - 1, k - 1) + binomialCoefficients(n - 1, k);
}

int main() {
    int n = 8, k = 5;
    cout << "The value of C(" << n << ", " << k << ") is "
         << binomialCoefficients(n, k);
    return 0;
}
The value of C(8, 5) is 56
The recursive solution recomputes the same overlapping subproblems many times, so we can instead use a dynamic programming approach to avoid the repeated work:
#include <bits/stdc++.h>
using namespace std;

int binomialCoefficients(int n, int k) {
    int C[k + 1];
    memset(C, 0, sizeof(C));
    C[0] = 1;
    for (int i = 1; i <= n; i++) {
        // traverse j in reverse so C[j - 1] still holds
        // the value from the previous row
        for (int j = min(i, k); j > 0; j--)
            C[j] = C[j] + C[j - 1];
    }
    return C[k];
}

int main() {
    int n = 8, k = 5;
    cout << "The value of C(" << n << ", " << k << ") is "
         << binomialCoefficients(n, k);
    return 0;
}
The value of C(8, 5) is 56
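A third option, not shown above, computes C(n, k) directly from the multiplicative formula C(n, k) = ((n - k + 1) / 1) * ((n - k + 2) / 2) * ... * (n / k), in O(k) time and O(1) space. Here is a sketch in Python (the function name is our own, and math.comb needs Python 3.8+):
# O(k) multiplicative formula for C(n, k)
def binomial_coefficient(n, k):
    # Use the symmetry C(n, k) = C(n, n - k) to shorten the loop
    k = min(k, n - k)
    result = 1
    for i in range(1, k + 1):
        # The running product is exactly C(n - k + i, i),
        # so the integer division below is always exact
        result = result * (n - k + i) // i
    return result

print("The value of C(8, 5) is", binomial_coefficient(8, 5))   # 56

# Cross-check against Python's built-in (Python 3.8+)
import math
assert binomial_coefficient(8, 5) == math.comb(8, 5)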
[ { "code": null, "e": 1177, "s": 1062, "text": "Binomial coefficient denoted as c(n,k) or ncr is defined as coefficient of xk in the binomial expansion of (1+X)n." }, { "code": null, "e": 1380, "s": 1177, "text": "The Binomial coefficient also gives the value of the number of ways in which k items are chosen from among n objects i.e. k-combinations of n-element set. The order of selection of items not considered." }, { "code": null, "e": 1484, "s": 1380, "text": "Here, we are given two parameters n and k and we have to return the value of binomial coefficient nck ." }, { "code": null, "e": 1520, "s": 1484, "text": "Input : n = 8 and k = 3\nOutput : 56" }, { "code": null, "e": 1569, "s": 1520, "text": "There can be multiple solutions to this problem," }, { "code": null, "e": 1742, "s": 1569, "text": "There is a method to calculate the value of c(n,k) using a recursive call. The standard formula for finding the value of binomial coefficients that uses recursive call is −" }, { "code": null, "e": 1776, "s": 1742, "text": "c(n,k) = c(n-1 , k-1) + c(n-1, k)" }, { "code": null, "e": 1798, "s": 1776, "text": "c(n, 0) = c(n, n) = 1" }, { "code": null, "e": 1867, "s": 1798, "text": "The implementation of a recursive call that uses the above formula −" }, { "code": null, "e": 2191, "s": 1867, "text": "#include <iostream>\nusing namespace std;\nint binomialCoefficients(int n, int k) {\n if (k == 0 || k == n)\n return 1;\n return binomialCoefficients(n - 1, k - 1) + binomialCoefficients(n - 1, k);\n}\nint main() {\n int n=8 , k=5;\n cout<<\"The value of C(\"<<n<<\", \"<<k<<\") is \"<<binomialCoefficients(n, k);\n return 0;\n}" }, { "code": null, "e": 2218, "s": 2191, "text": "The value of C(8, 5) is 56" }, { "code": null, "e": 2341, "s": 2218, "text": "Another solution might be using overlapping subproblem. So, we will use dynamic programming algorithm to avoid subproblem." }, { "code": null, "e": 2736, "s": 2341, "text": "#include <bits/stdc++.h>>\nusing namespace std;\nint binomialCoefficients(int n, int k) {\n int C[k+1];\n memset(C, 0, sizeof(C));\n C[0] = 1;\n for (int i = 1; i <= n; i++) {\n for (int j = min(i, k); j > 0; j--)\n C[j] = C[j] + C[j-1];\n }\n return C[k];\n}\nint main() {\n int n=8, k=5;\n cout<<\"The value of C(\"<<n<<\", \"<<k<<\") is \"<<binomialCoefficients(n,k);\n return 0;\n}" }, { "code": null, "e": 2763, "s": 2736, "text": "The value of C(8, 5) is 56" } ]
UCL Data Science Society: Pandas. Workshop 6: What is Pandas, Pandas... | by Philip Wilkinson | Towards Data Science
This year, as Head of Science for the UCL Data Science Society, the society is presenting a series of 20 workshops covering topics such as introduction to Python, a Data Scientist's toolkit and Machine learning methods, throughout the academic year. For each of these the aim is to create a series of small blog posts that will outline the main points with links to the full workshop for anyone who wishes to follow along. All of these can be found in our GitHub repository, and will be updated throughout the year with new workshops and challenges.
The sixth workshop in the series is an introduction to the pandas python package, in which we introduce you to Pandas, Pandas Series, Pandas DataFrame, accessing data and Pandas operations. While some of the highlights will be shared here, the full workshop, including a problem sheet to test yourself, can be found here.
If you have missed any of our previous workshops you can find the last three at the following links:
towardsdatascience.com
towardsdatascience.com
towardsdatascience.com
Pandas is an open source Python package that is widely used for data science/analysis and machine learning tasks, building on top of the Numpy package that we introduced last week. It is one of the most popular data wrangling packages for data science workflows and integrates well with many of the other libraries used during this process. It is useful because it makes tasks like:
Data cleaning
Data filling
Data normalisation
Merges and joins
Data visualisation
Data inspection
so much simpler than they would otherwise be. This functionality helps with its aim of becoming:
...the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language.
as described in the documentation.
Pandas series are the fundamental structure that makes up the building blocks of pandas dataframes. While they are similar to Numpy arrays in their structure, the main difference is that we can specify a non-numerical index for a pandas series that is always linked, allowing us to access the location of the information based on that value. However, in most other respects, such as when applying numerical operations, series behave much like arrays or lists.
To generate a series we need to use the pd.Series(x) notation, where x is a list or an array which is to be converted into the series. We can use the index argument to set index = y, where y is a list of the indices we want to use for the respective series values. This means that we can use both numerical and string notation (e.g. min, median, max rather than 1, 2, 3) if we so desired. An example of this is:
# Generating Pandas series
series = pd.Series([10, 20, 30, 40])
print(series)

#out:
0    10
1    20
2    30
3    40
dtype: int64
or:
# Generating Pandas series
series = pd.Series([10, 20, 30, 40], index=["a", "b", "c", "d"])
print(series)

#out:
a    10
b    20
c    30
d    40
dtype: int64
These series can be combined to make up a Pandas DataFrame, which is the main data structure associated with Pandas. Essentially, this is a table of data stored as a variable, similarly to how you may store a dictionary of lists or multiple values in an array as a variable. Each column in a DataFrame is represented by a Pandas series, meaning that we could extract each column separately if we wished and the index would remain the same.
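To see the column-to-Series relationship concretely, here is a small sketch (an addition to the workshop text, with made-up values):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]},
                  index=["x", "y", "z"])

# Each column comes back as a Series that keeps the frame's index
col = df["b"]
print(type(col))                    # <class 'pandas.core.series.Series'>
print(col.index.equals(df.index))   # True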
We can create a DataFrame in two ways: either by converting data already stored in Python into a DataFrame, or by reading in an external file. In the first instance, we can create a dataframe from a variety of structures, such as from lists or a list of lists or from zipped lists. In our case however we will use a dictionary of lists as follows using the `pd.DataFrame()` function:
#create a dictionary of lists
data_dict = {"Countries": ["England", "Scotland", "Wales", "Northern Ireland"],
             "Capitals": ["London", "Edinburgh", "Cardiff", "Dublin"],
             "Population (millions)": [55.98, 5.454, 3.136, 1.786]}

#convert this to a dataframe
data_df = pd.DataFrame(data_dict)

#display the dataframe
data_df
Or we can extract the data from an existing data source, such as a csv (comma separated variables) file, using pd.read_csv as follows:
# Defines df as a data frame without adding an additional index column
df = pd.read_csv("StarColours.csv")

# Displays the data frame df
df
A csv is a common data structure to use in conjunction with pandas, where each column is separated by a comma, as the name suggests. Of course, other data structures can also be read in, such as text files or excel files, and various other parameters can be specified to allow these to be read in, such as the delimiter (how columns are separated), the encoding (the sequence of characters), and compression (how the file is compressed). This can be used in conjunction with parameters such as usecols, where you can specify the columns to be read in, or nrows, which specifies how many rows you want to be read in.
Just as with a list, an array or a dictionary, we can also take a look at specific values, or ranges of values, from within the data structure. In the case of a DataFrame we can look at slices just as we would with a list or an array, but we can use the column names or the indexes to access this information.
In the case of the star colours dataset, we can select a specific column in a way similar to what we would do for a dictionary value. The key here would be the column name as a string, as follows:
# Returns the colour column
df["Color"]

#alternatively you can use dot notation
#df.Color
#but only if there are no spaces in the title

#out:
0               Blue
1               Blue
2               Blue
3      Blue to White
4    White to Yellow
5      Orange to Red
6                Red
Name: Color, dtype: object
We can take this even further by selecting a specific item, or items, from the column, as we would in a dictionary of lists, by first accessing the column and then accessing the indices as follows:
# Selects seventh item in colour column
df["Color"][6]

#out:
'Red'
Or for a range:
# Selects 4th to 6th items in colour column
df["Color"][3:6]

#out:
3      Blue to White
4    White to Yellow
5      Orange to Red
Name: Color, dtype: object
We can further specify several columns by passing a list of column names as follows:
# Returns the specified columns
df[["Color", "Main Characteristics", "Examples"]][3:6]
Alternatively, we can access information from a DataFrame where a specific condition is met by using df.loc. This is where we can begin to use conditional statements to access information using the notation df.loc[df["column_name"] > x], which will return all rows with a value greater than x for the column labeled column_name. This can then be used in conjunction with other conditionals to either access or change information.
For example:
# Returns all rows with values greater than
# one for the average mass column
df.loc[df["Average Mass (The Sun = 1)"] > 1]
The benefit of using the DataFrame data structure is the variety of operations that can be performed on the data, including the ability to perform mathematical operations on columns and across them, the creation of new columns to add back to the dataset, and a host of inbuilt functionality as well.
For example, as we would with Numpy arrays, we can perform mathematical operations for each element across the column. For example we could add the average mass and average radius columns together:
# Adding the average mass and radius columns
(df["Average Mass (The Sun = 1)"] + df["Average Radius (The Sun = 1)"])

#out:
0    75.0
1    25.0
2     5.7
3     3.0
4     2.2
5     1.7
6     0.7
dtype: float64
Or we could calculate the average density of the stars based on average mass and average radius, and assign that back to a new column in the dataset:
# Defines a new average density column accordingly
df["Average Density"] = ((df["Average Mass (The Sun = 1)"] * 3) /
                         (4 * np.pi * (df["Average Radius (The Sun = 1)"] ** 3)))

# Displays the data frame
df
Other than this, we have multiple inbuilt functions which we can use to look at the data or change it. For example, we can use df.head() to check the first five rows of the dataset, or we can use df.tail() to access the bottom five rows of the dataset:
# Returns first five rows from the data frame
df.head()
We can also access all unique items from a column (i.e. no duplicates) as an array:
# Returns unique items from the colours column
df["Color"].unique()

#out:
array(['Blue', 'Blue to White', 'White to Yellow', 'Orange to Red', 'Red'], dtype=object)
Or we can return a number of statistical parameters for all columns with numerical values as:
# Returns summary statistics for the numerical columns
df.describe()
We can also use inbuilt Numpy functions to access other statistical values individually, such as using df.mean() to access the mean of the columns, df.median() to access the median of the columns, df.min() to access the minimum of the columns or df.max() to find the maximum of the columns. For example:
# Returns the mean of each numerical column
df.mean()

Average Mass (The Sun = 1)      12.157143
Average Radius (The Sun = 1)     4.028571
Average Density                  0.261248
dtype: float64
There is of course much more inbuilt functionality, including setting the index, converting the columns to arrays, lists and dictionaries, dropping a row or a column based on a condition, merging dataframes, dealing with NaN values and finally pushing the data to a csv file. While this functionality is not covered here, it can be found in the workshop file here. Or, if you think you know all there is about pandas DataFrames, then why not try out the problem sheet?
Dataset of stars comes from: https://www.enchantedlearning.com/subjects/astronomy/stars/startypes.shtml
The full workshop notebook, along with further examples and challenges, can be found HERE.
If you want any further information on our society feel free to follow us on our socials:
Facebook: https://www.facebook.com/ucldata
Instagram: https://www.instagram.com/ucl.datasci/
LinkedIn: https://www.linkedin.com/company/ucldata/
And if you want to keep up to date with stories from the UCL Data Science Society and other amazing authors, feel free to sign up to medium using my referral code below.
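As a small taste of the functionality listed above but not covered in the workshop, the sketch below (an addition, built on a made-up miniature table rather than the star dataset) shows dealing with NaN values, merging two dataframes and pushing the result to a csv file:
import numpy as np
import pandas as pd

stars = pd.DataFrame({"Color": ["Blue", "Red", "White"],
                      "Mass": [60.0, np.nan, 1.4]})

# Dealing with NaN values: drop the incomplete rows,
# or fill the gaps with a default value
print(stars.dropna())                # removes the Red row
print(stars.fillna({"Mass": 0.0}))   # fills the missing mass

# Merging two dataframes on a shared column
temps = pd.DataFrame({"Color": ["Blue", "Red"],
                      "Temp (K)": [30000, 3000]})
merged = stars.merge(temps, on="Color", how="left")
print(merged)

# Pushing the result out to a csv file
merged.to_csv("stars_merged.csv", index=False)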
[ { "code": null, "e": 720, "s": 172, "text": "This year, as Head of Science for the UCL Data Science Society, the society is presenting a series of 20 workshops covering topics such as introduction to Python, a Data Scientists toolkit and Machine learning methods, throughout the academic year. For each of these the aim is to create a series of small blogposts that will outline the main points with links to the full workshop for anyone who wishes to follow along. All of these can be found in our GitHub repository, and will be updated throughout the year with new workshops and challenges." }, { "code": null, "e": 1039, "s": 720, "text": "The sixth workshop in the series is an introduction to the pandas python package in which we introduce you to Pandas, Pandas Series, Pandas DataFrame, accessing data and Pandas operations. While some of the highlights will be shared here, the full workshop including a problem sheet to test yourself can be found here." }, { "code": null, "e": 1140, "s": 1039, "text": "If you have missed any of our previous workshops you can find the last three at the following links:" }, { "code": null, "e": 1163, "s": 1140, "text": "towardsdatascience.com" }, { "code": null, "e": 1186, "s": 1163, "text": "towardsdatascience.com" }, { "code": null, "e": 1209, "s": 1186, "text": "towardsdatascience.com" }, { "code": null, "e": 1592, "s": 1209, "text": "Pandas is an open source Python package that is widely used for data science/analysis and machine learning tasks, building on top of the Numpy package that we introduced last week. It is one of the most popular data wrangling packages for data science workflows and integrates well with many of the other libraries used during this process. It is useful because it makes tasks like:" }, { "code": null, "e": 1606, "s": 1592, "text": "Data cleaning" }, { "code": null, "e": 1619, "s": 1606, "text": "Data filling" }, { "code": null, "e": 1638, "s": 1619, "text": "Data normalisation" }, { "code": null, "e": 1655, "s": 1638, "text": "Merges and joins" }, { "code": null, "e": 1674, "s": 1655, "text": "Data visualisation" }, { "code": null, "e": 1690, "s": 1674, "text": "Data inspection" }, { "code": null, "e": 1787, "s": 1690, "text": "so much simpler than they would otherwise be. This functionality helps with its aim of becoming:" }, { "code": null, "e": 2043, "s": 1787, "text": "...the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language." }, { "code": null, "e": 2078, "s": 2043, "text": "as described in the documentation." }, { "code": null, "e": 2541, "s": 2078, "text": "Pandas series are the fundamental structure that make up the building blocks of pandas dataframes. While they are similar to Numpy arrays in their structure, the main difference is that we can specify a non-numerical index for a pandas series that is always linked, allowing us to access the location of the information based on that value. However, in most other respects, such as when applying numerical operations, series can behave much like arrays or lists." }, { "code": null, "e": 2959, "s": 2541, "text": "To generate a series we need to use the pd.series(x) notation where x is a list or an array which is to be converted into the series. For this, we can use the index argument to set index = y where y is a list of the indices we want to use for the respective series values. 
This means that we can use both numerical and string notation (i.e. min, median, max rather than 1,2,3) if we so desired. An example of this is:" }, { "code": null, "e": 3077, "s": 2959, "text": "# Generating Pandas seriesseries = pd.Series([10,20,30,40])print(series)#out:0 101 202 303 40dtype: int64" }, { "code": null, "e": 3081, "s": 3077, "text": "or:" }, { "code": null, "e": 3224, "s": 3081, "text": "# Generating Pandas seriesseries = pd.Series([10,20,30,40], index=[\"a\",\"b\",\"c\",\"d\"])print(series)#out:a 10b 20c 30d 40dtype: int64" }, { "code": null, "e": 3669, "s": 3224, "text": "These series can be combined to make up a Pandas DataFrame, which is the main data structure that is associated with Pandas. Essentially, his is a table of data stored as a variable similarly to how you may store a dictionary of lists or multiple values in an array as a variable. Each column in a DataFrame is represented by a Pandas series meaning that we could extract each column seperately if we wished and the index would remain the same." }, { "code": null, "e": 4051, "s": 3669, "text": "We can create a DataFrame in two ways, either by converting data already stored in Python into a Dataframe or by reading in an extrnal file. In the first instance, we can create a dataframe from a variety of structures, such as from lists or a list of lists or from zipped lists. In our case however we will use a dictionary of lists as follows using the `pd.DataFrame()` function:" }, { "code": null, "e": 4386, "s": 4051, "text": "#create a dictionary of listsdata_dict = {\"Countries\": [\"England\", \"Scotland\", \"Wales\", \"Northern Ireland\"], \"Capitals\": [\"London\", \"Edinburgh\", \"Cardiff\", \"Dublin\"], \"Population (millions)\": [55.98, 5.454, 3.136, 1.786]}#convert this to a dataframedata_df = pd.DataFrame(data_dict)#display the dataframedata_df" }, { "code": null, "e": 4518, "s": 4386, "text": "Or we can extract the data from an existing datasource such as a csv (comma separated variables) file using pd.read_csv as follows:" }, { "code": null, "e": 4655, "s": 4518, "text": "# Defines df as a data frame without adding an additional index column df = pd.read_csv(\"StarColours.csv\")# Displays the data frame dfdf" }, { "code": null, "e": 5284, "s": 4655, "text": "Where a csv is a common data structure to use in conjunction with pandas, where each column is separated by a column as the name suggested. Of course, other data structures can also be used to be read in such as text files or excel files and various other parameters can be specified to allow these to be read in such as the delimiter (how columns are separated), the encoding (the sequence of characters), and compression (how the file is compressed). This can be used in conjunction with parameters such as use_cols where you can specify the columns to be read in or nrows which specifies how many rows you want to be read in." }, { "code": null, "e": 5589, "s": 5284, "text": "Just as with a list, an array or dictionary we can also take a look at specific values, or ranges of values, from with the data structure. In the case of a DataFrame we can look at slices just as we would with a list or an array, but we can use the column names or the indexes to access this information." }, { "code": null, "e": 5782, "s": 5589, "text": "In the case of the star colours dataset, we can select a specific column in a way similar to what we would for a dictionary value. 
The key here would be the column name as a string as follows:" }, { "code": null, "e": 6085, "s": 5782, "text": "# Returns the colour columndf[\"Color\"]#alterntaively you can use dot notation#df.Color#but only if there are no spaces in the title#out:0 Blue1 Blue2 Blue3 Blue to White4 White to Yellow5 Orange to Red6 RedName: Color, dtype: object" }, { "code": null, "e": 6284, "s": 6085, "text": "We can take this even further by selecting a specific item, or items from the column, as we would in a dictionary of lists, by firstly accessing the column and then accessing the indices as follows:" }, { "code": null, "e": 6348, "s": 6284, "text": "# Selects seventh item in colour columndf[\"Color\"][6]#out:'Red'" }, { "code": null, "e": 6364, "s": 6348, "text": "Or for a range:" }, { "code": null, "e": 6515, "s": 6364, "text": "# Selects 4th to 6th items in colour columndf[\"Color\"][3:6]#out:3 Blue to White4 White to Yellow5 Orange to RedName: Color, dtype: object" }, { "code": null, "e": 6600, "s": 6515, "text": "We can further specify several columns by passing a list of column names as follows:" }, { "code": null, "e": 6686, "s": 6600, "text": "# Returns the specified columnsdf[[\"Color\", \"Main Characteristics\", \"Examples\"]][3:6]" }, { "code": null, "e": 7130, "s": 6686, "text": "Alternatively we can access information from a DataFrame where a specific condition is met by using df.loc . This is where we can begin to use conditional statements to access information using the notation df.loc[df[\"columname\"] > x] which will return all rows with a value greater than x for the column labeled columname . This can then be used in a conjunction with other conditionals as to either access or change information. For example:" }, { "code": null, "e": 7251, "s": 7130, "text": "# Returns all rows with values greater than #one for the average mass columndf.loc[df[\"Average Mass (The Sun = 1)\"] > 1]" }, { "code": null, "e": 7549, "s": 7251, "text": "The benefit of using the DataFrame data structure is the variety of operations that can be performed on the data including the ability to perform mathematical operations on columns and across them, the creation of new columns to add back to the dataset and a host of inbuilt functionality as well." }, { "code": null, "e": 7748, "s": 7549, "text": "For example, as we would with Numpy arrays, we can perform mathematical operations for each element across the column. For example we could add the mass average and the mass radius columns together:" }, { "code": null, "e": 7946, "s": 7748, "text": "# Adding the average mass and radius columns(df[\"Average Mass (The Sun = 1)\"] + df[\"Average Radius (The Sun = 1)\"])#out:0 75.01 25.02 5.73 3.04 2.25 1.76 0.7dtype: float64" }, { "code": null, "e": 8096, "s": 7946, "text": "Or we could calculate the average density of the stars based on average mass and average radius, and assign that back to a new column in the dataset:" }, { "code": null, "e": 8419, "s": 8096, "text": "# Defines a new average density column accordinglydf[\"Average Density\"] = ((df[\"Average Mass (The Sun = 1)\"] *3) /(4* np.pi* (df[\"Average Radius (The Sun = 1)\"] **3)))# Displays the data framedf" }, { "code": null, "e": 8678, "s": 8419, "text": "Other than this, we have have multiple in built functions which we can use to look at the data or change it. 
For example, we can use df.head() to check the first five rows of the dataset, or we can use df.tail() to access the bottom five rows of the dataset:" }, { "code": null, "e": 8733, "s": 8678, "text": "# Returns first five rows from the data framedf.head()" }, { "code": null, "e": 8817, "s": 8733, "text": "We can also access all unique items from a column (i.e. no duplicates) as an array:" }, { "code": null, "e": 8983, "s": 8817, "text": "# Returns unique items from the colours columndf[\"Color\"].unique()#out:array(['Blue', 'Blue to White', 'White to Yellow', 'Orange to Red', 'Red'], dtype=object)" }, { "code": null, "e": 9077, "s": 8983, "text": "Or we can return a number of statistical paramaters for all columns with numerical values as:" }, { "code": null, "e": 9134, "s": 9077, "text": "# Returns unique items from the colours rowdf.describe()" }, { "code": null, "e": 9443, "s": 9134, "text": "Where we can also use inbuilt Numpy functions to access other statistical values individually such as using df.mean() to access the mean of the columns, df.median() to access the median of the columns, df.min() to access the minimum of the columns or df.max() to find the maximum of the columns. For example:" }, { "code": null, "e": 9633, "s": 9443, "text": "# Returns the mean of each numerical columndf.mean()Average Mass (The Sun = 1) 12.157143Average Radius (The Sun = 1) 4.028571Average Density 0.261248dtype: float64" }, { "code": null, "e": 9936, "s": 9633, "text": "There is of course many more inbuilt functionalities built into the structure, including setting the index, converting the columns to arrays, lists and dictionaries, dropping a row or a column based on a condition, merging dataframes, dealing with NaN values and finally pushing the data to a csv file." }, { "code": null, "e": 10125, "s": 9936, "text": "While this functionality is not covered here, it can be found in the workshop file here. Or, if you think you know all there is about pandas DataFrames, then why not try out problem sheet?" }, { "code": null, "e": 10229, "s": 10125, "text": "Dataset of stars comes from: https://www.enchantedlearning.com/subjects/astronomy/stars/startypes.shtml" }, { "code": null, "e": 10410, "s": 10229, "text": "The full workshop notebook, along with further examples and challenges, can be found HERE. If you want any further information on our society feel free to follow us on our socials:" }, { "code": null, "e": 10453, "s": 10410, "text": "Facebook: https://www.facebook.com/ucldata" }, { "code": null, "e": 10503, "s": 10453, "text": "Instagram: https://www.instagram.com/ucl.datasci/" }, { "code": null, "e": 10555, "s": 10503, "text": "LinkedIn: https://www.linkedin.com/company/ucldata/" } ]
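Addendum to the pandas workshop parsed above: one point that benefits from a concrete illustration is combining df.loc with multiple conditionals, which the workshop mentions but does not show. A minimal sketch follows — the rows are made up for illustration, though the column names follow the star dataset used in the workshop:

import pandas as pd

# Hypothetical rows mirroring the star dataset from the workshop above.
df = pd.DataFrame({
    "Color": ["Blue", "Red", "White to Yellow"],
    "Average Mass (The Sun = 1)": [18.0, 0.3, 1.1],
})

# Combine conditions with & (and) or | (or); wrap each condition in parentheses.
print(df.loc[(df["Average Mass (The Sun = 1)"] > 1) & (df["Color"] != "Blue")])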
SparkSession vs SparkContext vs SQLContext | by Giorgos Myrianthous | Towards Data Science
In the big data era, Apache Spark is probably one of the most popular technologies as it offers a unified engine for processing enormous amounts of data in a reasonable amount of time.

In this article, I am going to cover the various entry points for Spark Applications and how these have evolved over the releases made. Before doing so, it might be useful to go through some basic concepts and terms so that we can then jump more easily to the entry points, namely SparkSession, SparkContext and SQLContext.

A Spark Application consists of a Driver Program and a group of Executors on the cluster. The Driver is a process that executes the main program of your Spark application and creates the SparkContext that coordinates the execution of jobs (more on this later). The executors are processes running on the worker nodes of the cluster which are responsible for executing the tasks the driver process has assigned to them.

The cluster manager (such as Mesos or YARN) is responsible for the allocation of physical resources to Spark Applications.

Every Spark Application needs an entry point that allows it to communicate with data sources and perform certain operations such as reading and writing data. In Spark 1.x, three entry points were introduced: SparkContext, SQLContext and HiveContext. Since Spark 2.x, a new entry point called SparkSession has been introduced that essentially combined all functionalities available in the three aforementioned contexts. Note that all contexts are still available even in the newest Spark releases, mostly for backward compatibility purposes.

In the next sections I am going to discuss the purpose of the above entry points and how each differs from the others.

As mentioned before, the earliest releases of Spark made available these three entry points, each of which has a different purpose.

The SparkContext is used by the Driver Process of the Spark Application in order to establish a communication with the cluster and the resource managers in order to coordinate and execute jobs. SparkContext also enables access to the other two contexts, namely SQLContext and HiveContext (more on these entry points later on).

In order to create a SparkContext, you will first need to create a Spark Configuration (SparkConf) as shown below:

// Scala
import org.apache.spark.{SparkContext, SparkConf}

val sparkConf = new SparkConf()
  .setAppName("app")
  .setMaster("yarn")
val sc = new SparkContext(sparkConf)

# PySpark
from pyspark import SparkContext, SparkConf

conf = SparkConf() \
    .setAppName('app') \
    .setMaster('yarn')
sc = SparkContext(conf=conf)

Note that if you are using the spark-shell, SparkContext is already available through the variable called sc.

SQLContext is the entry point to SparkSQL which is a Spark module for structured data processing. Once SQLContext is initialised, the user can then use it in order to perform various "sql-like" operations over Datasets and Dataframes.
In order to create a SQLContext, you first need to instantiate a SparkContext as shown below:

// Scala
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.SQLContext

val sparkConf = new SparkConf()
  .setAppName("app")
  .setMaster("yarn")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)

# PySpark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext

conf = SparkConf() \
    .setAppName('app') \
    .setMaster('yarn')
sc = SparkContext(conf=conf)
sql_context = SQLContext(sc)

If your Spark Application needs to communicate with Hive and you are using Spark < 2.0, then you will probably need a HiveContext. For Spark 1.5+, HiveContext also offers support for window functions.

// Scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sparkConf = new SparkConf()
  .setAppName("app")
  .setMaster("yarn")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
hiveContext.sql("select * from tableName limit 0")

# PySpark
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext

conf = SparkConf() \
    .setAppName('app') \
    .setMaster('yarn')
sc = SparkContext(conf=conf)
hive_context = HiveContext(sc)
hive_context.sql("select * from tableName limit 0")

Since Spark 2.x, two additions made HiveContext redundant:

a) SparkSession was introduced that also offers Hive support

b) Native window functions were released and essentially replaced the Hive UDAFs with native Spark SQL UDAFs (a short sketch is included at the end of this article)

Spark 2.0 introduced a new entry point called SparkSession that essentially replaced both SQLContext and HiveContext. Additionally, it gives developers immediate access to SparkContext. In order to create a SparkSession with Hive support, all you have to do is

// Scala
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession
  .builder()
  .appName("myApp")
  .enableHiveSupport()
  .getOrCreate()

// Access the spark context from the spark session
val sparkContext = sparkSession.sparkContext

# PySpark
from pyspark.sql import SparkSession

spark_session = SparkSession \
    .builder \
    .enableHiveSupport() \
    .getOrCreate()

# Two ways you can access the spark context from the spark session
# (_sc is an internal attribute; sparkContext is the public one)
spark_context = spark_session._sc
spark_context = spark_session.sparkContext

In this article we went through the older entry points (SparkContext, SQLContext and HiveContext) that were made available in early releases of Spark.

We have also seen how the newest entry point, namely SparkSession, has made the instantiation of the three older contexts redundant. If you are using Spark 2.x+, then you shouldn't really worry about HiveContext, SparkContext and SQLContext. All you have to do is to create a SparkSession that offers support to Hive and sql-like operations. Additionally, in case you need to access SparkContext for any reason, you can still do it through SparkSession as we have seen in the examples of the previous section. Another important thing to note is that Spark 2.x comes with native window functions that initially were introduced in HiveContext.

PS: If you are not using Spark 2.x yet, I strongly encourage you to start doing so.
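As referenced above, here is a minimal, hedged sketch of a native Spark SQL window function on Spark 2.x that needs no HiveContext. The column names and sample rows are made up for illustration:

# PySpark 2.x - native window functions, no HiveContext required
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 3)], ["grp", "val"])

# Rank rows within each group - the kind of task that once needed Hive UDAFs
w = Window.partitionBy("grp").orderBy("val")
df.withColumn("rank", F.rank().over(w)).show()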
[ { "code": null, "e": 231, "s": 47, "text": "In the big data era, Apache Spark is probably one of the most popular technologies as it offers a unified engine for processing enormous amount of data in a reasonable amount of time." }, { "code": null, "e": 553, "s": 231, "text": "In this article, I am going to cover the various entry points for Spark Applications and how these have evolved over the releases made. Before doing so, it might be useful to go through some basic concepts and terms so that we can then jump more easily to the entry points namely SparkSession, SparkContext or SQLContext." }, { "code": null, "e": 972, "s": 553, "text": "A Spark Application consists of a Driver Program and a group of Executors on the cluster. The Driver is a process that executes the main program of your Spark application and creates the SparkContext that coordinates the execution of jobs (more on this later). The executors are processes running on the worker nodes of the cluster which are responsible for executing the tasks the driver process has assigned to them." }, { "code": null, "e": 1095, "s": 972, "text": "The cluster manager (such as Mesos or YARN) is responsible for the allocation of physical resources to Spark Applications." }, { "code": null, "e": 1632, "s": 1095, "text": "Every Spark Application needs an entry point that allows it to communicate with data sources and perform certain operations such as reading and writing data. In Spark 1.x, three entry points were introduced: SparkContext, SQLContext and HiveContext. Since Spark 2.x, a new entry point called SparkSession has been introduced that essentially combined all functionalities available in the three aforementioned contexts. Note that all contexts are still available even in newest Spark releases, mostly for backward compatibility purposes." }, { "code": null, "e": 1754, "s": 1632, "text": "In the next sections I am going to discuss the purpose of the above entry points and how each differentiates from others." }, { "code": null, "e": 1885, "s": 1754, "text": "As mentioned before, the earliest releases of Spark made available these three entry points each of which has a different purpose." }, { "code": null, "e": 2216, "s": 1885, "text": "The SparkContext is used by the Driver Process of the Spark Application in order to establish a communication with the cluster and the resource managers in order to coordinate and execute jobs. SparkContext also enables the access to the other two contexts, namely SQLContext and HiveContext (more on these entry points later on)." }, { "code": null, "e": 2331, "s": 2216, "text": "In order to create a SparkContext, you will first need to create a Spark Configuration (SparkConf) as shown below:" }, { "code": null, "e": 2504, "s": 2331, "text": "// Scalaimport org.apache.spark.{SparkContext, SparkConf}val sparkConf = new SparkConf() \\ .setAppName(\"app\") \\ .setMaster(\"yarn\")val sc = new SparkContext(sparkConf)" }, { "code": null, "e": 2651, "s": 2504, "text": "# PySparkfrom pyspark import SparkContext, SparkConfconf = SparkConf() \\ .setAppName('app') \\ .setMaster(master)sc = SparkContext(conf=conf)" }, { "code": null, "e": 2761, "s": 2651, "text": "Note that if you are using the spark-shell, SparkContext is already available through the variable called sc." }, { "code": null, "e": 2996, "s": 2761, "text": "SQLContext is the entry point to SparkSQL which is a Spark module for structured data processing. 
Once SQLContext is initialised, the user can then use it in order to perform various “sql-like” operations over Datasets and Dataframes." }, { "code": null, "e": 3090, "s": 2996, "text": "In order to create a SQLContext, you first need to instantiate a SparkContext as shown below:" }, { "code": null, "e": 3336, "s": 3090, "text": "// Scalaimport org.apache.spark.{SparkContext, SparkConf}import org.apache.spark.sql.SQLContextval sparkConf = new SparkConf() \\ .setAppName(\"app\") \\ .setMaster(\"yarn\")val sc = new SparkContext(sparkConf)val sqlContext = new SQLContext(sc)" }, { "code": null, "e": 3545, "s": 3336, "text": "# PySparkfrom pyspark import SparkContext, SparkConffrom pyspark.sql import SQLContextconf = SparkConf() \\ .setAppName('app') \\ .setMaster(master)sc = SparkContext(conf=conf)sql_context = SQLContext(sc)" }, { "code": null, "e": 3749, "s": 3545, "text": "If your Spark Application needs to communicate with Hive and you are using Spark < 2.0 then you will probably need a HiveContext if . For Spark 1.5+, HiveContext also offers support for window functions." }, { "code": null, "e": 4053, "s": 3749, "text": "// Scalaimport org.apache.spark.{SparkConf, SparkContext}import org.apache.spark.sql.hive.HiveContextval sparkConf = new SparkConf() \\ .setAppName(\"app\") \\ .setMaster(\"yarn\")val sc = new SparkContext(sparkConf)val hiveContext = new HiveContext(sc)hiveContext.sql(\"select * from tableName limit 0\")" }, { "code": null, "e": 4278, "s": 4053, "text": "# PySparkfrom pyspark import SparkContext, HiveContextconf = SparkConf() \\ .setAppName('app') \\ .setMaster(master)sc = SparkContext(conf)hive_context = HiveContext(sc)hive_context.sql(\"select * from tableName limit 0\")" }, { "code": null, "e": 4338, "s": 4278, "text": "Since Spark 2.x+, tow additions made HiveContext redundant:" }, { "code": null, "e": 4399, "s": 4338, "text": "a) SparkSession was introduced that also offers Hive support" }, { "code": null, "e": 4508, "s": 4399, "text": "b) Native window functions were released and essentially replaced the Hive UDAFs with native Spark SQL UDAFs" }, { "code": null, "e": 4772, "s": 4508, "text": "Spark 2.0 introduced a new entry point called SparkSession that essentially replaced both SQLContext and HiveContext. Additionally, it gives to developers immediate access to SparkContext. In order to create a SparkSession with Hive support, all you have to do is" }, { "code": null, "e": 5077, "s": 4772, "text": "// Scalaimport org.apache.spark.sql.SparkSessionval sparkSession = SparkSession \\ .builder() \\ .appName(\"myApp\") \\ .enableHiveSupport() \\ .getOrCreate()// Two ways you can access spark context from spark sessionval spark_context = sparkSession._scval spark_context = sparkSession.sparkContext" }, { "code": null, "e": 5344, "s": 5077, "text": "# PySparkfrom pyspark.sql import SparkSessionspark_session = SparkSession \\ .builder \\ .enableHiveSupport() \\ .getOrCreate()# Two ways you can access spark context from spark sessionspark_context = spark_session._scspark_context = spark_session.sparkContext" }, { "code": null, "e": 5495, "s": 5344, "text": "In this article we went through the older entry points (SparkContext, SQLContext and HiveContext) that were made available in early releases of Spark." }, { "code": null, "e": 6135, "s": 5495, "text": "We have also seen how the newest entry point namely SparkSession has made the instantiation of the other three contexts redundant. 
If you are using Spark 2.x+, then you shouldn’t really worry about HiveContext, SparkContext and SQLContext. All you have to do is to create a SparkSession that offers support to Hive and sql-like operations. Additionally, in case you need to access SparkContext for any reason, you can still do it through SparkSession as we have seen in the examples of the previous session. Another important thing to note is that Spark 2.x comes with native window functions that initially were introduced in HiveContext." }, { "code": null, "e": 6216, "s": 6135, "text": "PS: If you are not using Spark 2.x yet, I strongly encourage you start doing so." } ]
Tutorial: Stop Running Jupyter Notebooks from your Command Line | by Ashton Sidhu | Towards Data Science
Jupyter Notebook provides a great platform to produce human-readable documents containing code, equations, analysis, and their descriptions. Some even consider it a powerful development when combining it with NBDev. For such an integral tool, the out-of-the-box start-up experience is not the best. Each use requires starting the Jupyter web application from the command line and entering your token or password. The entire web application relies on that terminal window being open. Some might "daemonize" the process and then use nohup to detach it from their terminal, but that's not the most elegant and maintainable solution.

Lucky for us, Jupyter has already come up with a solution to this problem by coming out with an extension of Jupyter Notebooks that runs as a sustainable web application and has built-in user authentication. To add a cherry on top, it can be managed and sustained through Docker, allowing for isolated development environments.

By the end of this post we will leverage the power of JupyterHub to access a Jupyter Notebook instance that can be reached without a terminal, from multiple devices within your network, and with a more user-friendly authentication method.

A basic knowledge of Docker and the command line would be beneficial in setting this up.

I recommend doing this on the most powerful device you have and one that is turned on for most of the day, preferably all day. One of the benefits of this setup is that you will be able to use Jupyter Notebook from any device on your network, but have all the computation happen on the device we configure.

JupyterHub brings the power of notebooks to groups of users. The idea behind JupyterHub was to scale out the use of Jupyter Notebooks to enterprises, classrooms, and large groups of users. Jupyter Notebook, however, is supposed to run as a local instance, on a single node, by a single developer. Unfortunately, there was no middle ground to have the usability and scalability of JupyterHub and the simplicity of running a local Jupyter Notebook. That is, until now.

JupyterHub has pre-built Docker images that we can utilize to spawn a single notebook on a whim, with little to no overhead in technical complexity. We are going to use the combination of Docker and JupyterHub to access Jupyter Notebooks at any time, from anywhere, at the same URL.

The architecture of our JupyterHub server will consist of 2 services: JupyterHub and JupyterLab. JupyterHub will be the entry point and will spawn JupyterLab instances for any user. Each of these services will exist as a Docker container on the host.

To build our at-home JupyterHub server we will use the pre-built Docker images of JupyterHub & JupyterLab.

The JupyterHub Docker image is simple.

FROM jupyterhub/jupyterhub:1.2

# Copy the JupyterHub configuration in the container
COPY jupyterhub_config.py .

# Download script to automatically stop idle single-user servers
COPY cull_idle_servers.py .

# Install dependencies (for advanced authentication and spawning)
RUN pip install dockerspawner

We use the pre-built JupyterHub Docker Image and add our own configuration file to stop idle servers, cull_idle_servers.py. Lastly, we install additional packages to spawn JupyterLab instances via Docker.

To bring everything together, let's create a docker-compose.yml file to define our deployments and configuration.

version: '3'

services:
  # Configuration for Hub+Proxy
  jupyterhub:
    build: .                          # Build the container from this folder.
    container_name: jupyterhub_hub    # The service will use this container name.
    volumes:
      # Give access to Docker socket.
      - /var/run/docker.sock:/var/run/docker.sock
      - jupyterhub_data:/srv/jupyterlab
    environment:
      # Env variables passed to the Hub process.
      DOCKER_JUPYTER_IMAGE: jupyter/tensorflow-notebook
      DOCKER_NETWORK_NAME: ${COMPOSE_PROJECT_NAME}_default
      HUB_IP: jupyterhub_hub
    ports:
      - 8000:8000
    restart: unless-stopped

  # Configuration for the single-user servers
  jupyterlab:
    image: jupyter/tensorflow-notebook
    command: echo

volumes:
  jupyterhub_data:

The key environment variables to note are DOCKER_JUPYTER_IMAGE and DOCKER_NETWORK_NAME. JupyterHub will create Jupyter Notebooks with the images defined in the environment variable. For more information on selecting Jupyter images you can visit the following Jupyter documentation.

DOCKER_NETWORK_NAME is the name of the Docker network used by the services. This network gets an automatic name from Docker Compose, but the Hub needs to know this name to connect the Jupyter Notebook servers to it. To control the network name we use a little hack: we pass an environment variable COMPOSE_PROJECT_NAME to Docker Compose, and the network name is obtained by appending _default to it.

Create a file called .env in the same directory as the docker-compose.yml file and add the following contents:

COMPOSE_PROJECT_NAME=jupyter_hub

Since this is our home setup, we want to be able to stop idle instances to preserve memory on our machine. JupyterHub has services that can run alongside it, one of them being jupyterhub-idle-culler. This service stops any instances that are idle for a prolonged duration.

To add this service, create a new file called cull_idle_servers.py and copy the contents of the jupyterhub-idle-culler project into it.

Ensure `cull_idle_servers.py` is in the same folder as the Dockerfile.

To find out more about JupyterHub services, check out their official documentation on them.

To finish off, we need to define configuration options such as volume mounts, Docker images, services, authentication, etc. for our JupyterHub instance.

Below is a simple jupyterhub_config.py configuration file I use.

import os
import sys

c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = os.environ['DOCKER_JUPYTER_IMAGE']
c.DockerSpawner.network_name = os.environ['DOCKER_NETWORK_NAME']
c.JupyterHub.hub_connect_ip = os.environ['HUB_IP']
c.JupyterHub.hub_ip = "0.0.0.0"  # Makes it accessible from anywhere on your network
c.JupyterHub.admin_access = True
c.JupyterHub.services = [
    {
        'name': 'cull_idle',
        'admin': True,
        'command': [sys.executable, 'cull_idle_servers.py', '--timeout=42000']
    },
]
c.Spawner.default_url = '/lab'

notebook_dir = os.environ.get('DOCKER_NOTEBOOK_DIR') or '/home/jovyan/work'
c.DockerSpawner.notebook_dir = notebook_dir
c.DockerSpawner.volumes = {
    '/home/sidhu': '/home/jovyan/work'
}

Take note of the following configuration options:

'command': [sys.executable, 'cull_idle_servers.py', '--timeout=42000']: Timeout is the number of seconds until an idle Jupyter instance is shut down.

c.Spawner.default_url = '/lab': Uses JupyterLab instead of Jupyter Notebook. Comment out this line to use Jupyter Notebook.

'/home/sidhu': '/home/jovyan/work': I mounted my home directory to the JupyterLab home directory to have access to any projects and notebooks I have on my Desktop. This also allows us to achieve persistence: in case we create new notebooks, they are saved to our local machine and will not get deleted if our Jupyter Notebook Docker container is deleted.
Remove this line if you do not wish to mount your home directory, and do not forget to change sidhu to your user name.

To start the server, simply run docker-compose up -d, navigate to localhost:8000 in your browser and you should be able to see the JupyterHub landing page.

To access it on other devices on your network such as a laptop, an iPad, etc., identify the IP of the host machine by running ifconfig on Unix machines & ipconfig on Windows.

From your other device, navigate to the IP you found on port 8000: http://IP:8000 and you should see the JupyterHub landing page!

That leaves us with the last task of authenticating to the server. Since we did not set up an LDAP server or OAuth, JupyterHub will use PAM (Pluggable Authentication Module) authentication to authenticate users. This means JupyterHub uses the user names and passwords of the machine it is running on — in our case, the JupyterHub container — to authenticate.

To make use of this, we will have to create a user on the JupyterHub Docker container. There are other ways of doing this, such as having a script placed on the container and executed at container start-up (a short sketch is included at the end of this post), but we will do it manually as an exercise. If you tear down or rebuild the container you will have to recreate users.

I do not recommend hard-coding user credentials into any script or Dockerfile.

1) Find the JupyterHub container ID: docker ps -a

2) "SSH" into the container: docker exec -it $YOUR_CONTAINER_ID bash

3) Create a user and follow the terminal prompts to create a password: useradd $YOUR_USERNAME

4) Sign in with the credentials and you're all set!

You now have a ready-to-go Jupyter Notebook server that can be accessed from any device, in the palm of your hands! Happy Coding!

I welcome any and all feedback about any of my posts and tutorials. You can message me on twitter or e-mail me at sidhuashton@gmail.com.
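As promised above, here is a minimal sketch of the scripted alternative for creating users at container start-up. Everything in it is illustrative: the NB_USER and NB_PASS variable names are made up for this sketch, and you would still want to avoid hard-coding real credentials anywhere.

#!/bin/bash
# Hypothetical start-up script baked into the JupyterHub image.
# NB_USER / NB_PASS are assumed to be injected as environment variables.
if ! id "$NB_USER" &> /dev/null; then
    useradd -m "$NB_USER"
    echo "$NB_USER:$NB_PASS" | chpasswd
fi
# Hand off to the usual JupyterHub entrypoint.
exec jupyterhub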
[ { "code": null, "e": 791, "s": 172, "text": "Jupyter Notebook provides a great platform to produce human-readable documents containing code, equations, analysis, and their descriptions. Some even consider it a powerful development when combining it with NBDev. For such an integral tool, the out of the box start up is not the best. Each use requires starting the Jupyter web application from the command line and entering your token or password. The entire web application relies on that terminal window being open. Some might “daemonize” the process and then use nohup to detach it from their terminal, but that’s not the most elegant and maintainable solution." }, { "code": null, "e": 1118, "s": 791, "text": "Lucky for us, Jupyter has already come up with a solution to this problem by coming out with an extension of Jupyter Notebooks that runs as a sustainable web application and has built-in user authentication. To add a cherry on top, it can be managed and sustained through Docker allowing for isolated development environments." }, { "code": null, "e": 1354, "s": 1118, "text": "By the end of this post we will leverage the power of JupyterHub to access a Jupyter Notebook instance which can be accessed without a terminal, from multiple devices within your network, and a more user friendly authentication method." }, { "code": null, "e": 1443, "s": 1354, "text": "A basic knowledge of Docker and the command line would be beneficial in setting this up." }, { "code": null, "e": 1750, "s": 1443, "text": "I recommend doing this on the most powerful device you have and one that is turned on for most of the day, preferably all day. One of the benefits of this setup is that you will be able to use Jupyter Notebook from any device on your network, but have all the computation happen on the device we configure." }, { "code": null, "e": 2217, "s": 1750, "text": "JupyterHub brings the power of notebooks to groups of users. The idea behind JupyterHub was to scale out the use of Jupyter Notebooks to enterprises, classrooms, and large groups of users. Jupyter Notebook, however, is supposed to run as a local instance, on a single node, by a single developer. Unfortunately, there was no middle ground to have the usability and scalability of JupyterHub and the simplicity of running a local Jupyter Notebook. That is, until now." }, { "code": null, "e": 2496, "s": 2217, "text": "JupyterHub has pre-built Docker images that we can utilize to spawn a single notebook on a whim, with little to no overhead in technical complexity. We are going to use the combination of Docker and JupyterHub to access Jupyter Notebooks from anytime, anywhere, at the same URL." }, { "code": null, "e": 2747, "s": 2496, "text": "The architecture of our JupyterHub server will consist of 2 services: JupyterHub and JupyterLab. JupyterHub will be the entry point and will spawn JupyterLab instances for any user. Each of these services will exist as a Docker container on the host." }, { "code": null, "e": 2854, "s": 2747, "text": "To build our at-home JupyterHub server we will use the pre-built Docker images of JupyterHub & JupyterLab." }, { "code": null, "e": 2893, "s": 2854, "text": "The JupyterHub Docker image is simple." 
}, { "code": null, "e": 3188, "s": 2893, "text": "FROM jupyterhub/jupyterhub:1.2# Copy the JupyterHub configuration in the containerCOPY jupyterhub_config.py .# Download script to automatically stop idle single-user serversCOPY cull_idle_servers.py .# Install dependencies (for advanced authentication and spawning)RUN pip install dockerspawner" }, { "code": null, "e": 3393, "s": 3188, "text": "We use the pre-built JupyterHub Docker Image and add our own configuration file to stop idle servers, cull_idle_servers.py. Lastly, we install additional packages to spawn JupyterLab instances via Docker." }, { "code": null, "e": 3507, "s": 3393, "text": "To bring everything together, let’s create a docker-compose.yml file to define our deployments and configuration." }, { "code": null, "e": 4289, "s": 3507, "text": "version: '3'services: # Configuration for Hub+Proxy jupyterhub: build: . # Build the container from this folder. container_name: jupyterhub_hub # The service will use this container name. volumes: # Give access to Docker socket. - /var/run/docker.sock:/var/run/docker.sock - jupyterhub_data:/srv/jupyterlab environment: # Env variables passed to the Hub process. DOCKER_JUPYTER_IMAGE: jupyter/tensorflow-notebook DOCKER_NETWORK_NAME: ${COMPOSE_PROJECT_NAME}_default HUB_IP: jupyterhub_hub ports: - 8000:8000 restart: unless-stopped # Configuration for the single-user servers jupyterlab: image: jupyter/tensorflow-notebook command: echovolumes: jupyterhub_data:" }, { "code": null, "e": 4570, "s": 4289, "text": "The key environment variables to note are DOCKER_JUPYTER_IMAGE and DOCKER_NETWORK_NAME. JupyterHub will create Jupyter Notebooks with the images defined in the environment variable.For more information on selecting Jupyter images you can visit the following Jupyter documentation." }, { "code": null, "e": 4970, "s": 4570, "text": "DOCKER_NETWORK_NAME is the name of the Docker network used by the services. This network gets an automatic name from Docker Compose, but the Hub needs to know this name to connect the Jupyter Notebook servers to it. To control the network name we use a little hack: we pass an environment variable COMPOSE_PROJECT_NAME to Docker Compose, and the network name is obtained by appending _default to it." }, { "code": null, "e": 5081, "s": 4970, "text": "Create a file called .env in the same directory as the docker-compose.yml file and add the following contents:" }, { "code": null, "e": 5114, "s": 5081, "text": "COMPOSE_PROJECT_NAME=jupyter_hub" }, { "code": null, "e": 5391, "s": 5114, "text": "Since this is our home setup, we want to be able to stop idle instances to preserve memory on our machine. JupyterHub has services that can run along side it and one of them being jupyterhub-idle-culler. This service stops any instances that are idle for a prolonged duration." }, { "code": null, "e": 5523, "s": 5391, "text": "To add this servive, create a new file called cull_idle_servers.py and copy the contents of jupyterhub-idle-culler project into it." }, { "code": null, "e": 5594, "s": 5523, "text": "Ensure `cull_idle_servers.py` is in the same folder as the Dockerfile." }, { "code": null, "e": 5686, "s": 5594, "text": "To find out more about JupyterHub services, check out their official documentation on them." }, { "code": null, "e": 5837, "s": 5686, "text": "To finish off, we need to define configuration options such, volume mounts, Docker images, services, authentication, etc. for our JupyterHub instance." 
}, { "code": null, "e": 5902, "s": 5837, "text": "Below is a simple jupyterhub_config.py configuration file I use." }, { "code": null, "e": 6646, "s": 5902, "text": "import osimport sysc.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'c.DockerSpawner.image = os.environ['DOCKER_JUPYTER_IMAGE']c.DockerSpawner.network_name = os.environ['DOCKER_NETWORK_NAME']c.JupyterHub.hub_connect_ip = os.environ['HUB_IP']c.JupyterHub.hub_ip = \"0.0.0.0\" # Makes it accessible from anywhere on your networkc.JupyterHub.admin_access = Truec.JupyterHub.services = [ { 'name': 'cull_idle', 'admin': True, 'command': [sys.executable, 'cull_idle_servers.py', '--timeout=42000'] },]c.Spawner.default_url = '/lab'notebook_dir = os.environ.get('DOCKER_NOTEBOOK_DIR') or '/home/jovyan/work'c.DockerSpawner.notebook_dir = notebook_dirc.DockerSpawner.volumes = { '/home/sidhu': '/home/jovyan/work'}" }, { "code": null, "e": 6696, "s": 6646, "text": "Take note of the following configuration options:" }, { "code": null, "e": 6847, "s": 6696, "text": "'command': [sys.executable, 'cull_idle_servers.py', '--timeout=42000'] : Timeout is the number of seconds until an idle Jupyter instance is shut down." }, { "code": null, "e": 6971, "s": 6847, "text": "c.Spawner.default_url = '/lab': Uses Jupyterlab instead of Jupyter Notebook. Comment out this line to use Jupyter Notebook." }, { "code": null, "e": 7329, "s": 6971, "text": "'/home/sidhu': '/home/jovyan/work': I mounted my home directory to the JupyterLab home directory to have access to any projects and notebooks I have on my Desktop. This also allows us to achieve persistence in the case we create new notebooks, they are saved to our local machine and will not get deleted if our Jupyter Notebook Docker container is deleted." }, { "code": null, "e": 7447, "s": 7329, "text": "Remove this line if you do not wish to mount your home directory and do not forget to change sidhu to your user name." }, { "code": null, "e": 7603, "s": 7447, "text": "To start the server, simply run docker-compose up -d, navigate to localhost:8000 in your browser and you should be able to see the JupyterHub landing page." }, { "code": null, "e": 7777, "s": 7603, "text": "To access it on other devices on your network such asva laptop, an iPad, etc, identify the IP of the host machine by running ifconfig on Unix machines & ipconfig on Windows." }, { "code": null, "e": 7907, "s": 7777, "text": "From your other device, navigate to the IP you found on port 8000: http://IP:8000 and you should see the JupyterHub landing page!" }, { "code": null, "e": 8210, "s": 7907, "text": "That leaves us with the last task of authenticating to the server. Since we did not set up a LDAP server or OAuth, JupyterHub will use PAM (Pluggable Authentication Module) authentication to authenticate users. This means JupyterHub uses the user name and passwords of the host machine to authenticate." }, { "code": null, "e": 8533, "s": 8210, "text": "To make use of this, we will have to create a user on the JupyterHub Docker container. There are other ways of doing this such as having a script placed on the container and executed at container start up but we will do it manually as an exercise. If you tear down or rebuild the container you will have to recreate users." }, { "code": null, "e": 8612, "s": 8533, "text": "I do not recommend hard coding user credentials into any script or Dockerfile." 
}, { "code": null, "e": 8662, "s": 8612, "text": "1) Find the JupyterLab container ID: docker ps -a" }, { "code": null, "e": 8731, "s": 8662, "text": "2) “SSH” into the container: docker exec -it $YOUR_CONTAINER_ID bash" }, { "code": null, "e": 8825, "s": 8731, "text": "3) Create a user and follow the terminal prompts to create a password: useradd $YOUR_USERNAME" }, { "code": null, "e": 8877, "s": 8825, "text": "4) Sign in with the credentials and you’re all set!" }, { "code": null, "e": 9007, "s": 8877, "text": "You now have a ready to go Jupyter Notebook server that can be accessed from any device, in the palm of your hands! Happy Coding!" } ]
Durbin Watson Test - GeeksforGeeks
09 Mar, 2021

Durbin Watson Test: A test developed by statisticians James Durbin and Geoffrey Stuart Watson is used to detect autocorrelation in residuals from a regression analysis. It is popularly known as the Durbin-Watson d statistic, which is defined as

d = Σ (ut − ut−1)² / Σ ut²

where the numerator sum runs over t = 2, ..., n and the denominator sum over t = 1, ..., n.

Let us first look at some terms to have a clear understanding-

Regression Analysis — Regression analysis is a set of statistical methods used for the estimation of relationships between a dependent variable (Y) and one or more independent variables (X). This method helps determine which factors influence the results the most and should definitely be involved in the experiment, and which can be ignored.

Residuals — It is the difference between the observed value and the predicted value for a particular observation. Here the residuals are represented by u.

Autocorrelation — Autocorrelation represents the degree of similarity between a given time series and a lagged version of itself over successive time intervals. Autocorrelation measures the relationship between a variable's current value and its past values. For example — the air temperature values are calculated for all days of a month, and it is observed that the value on the 1st day is more similar to the value on the 2nd day than to the value on the 30th day. So the data is said to be autocorrelated, as the values which were observed closer in time are more similar than the values which were observed farther apart.

Assumptions of Durbin-Watson d Test

The errors are normally distributed with a mean value of 0.

The errors are stationary.

Null and Alternate Hypothesis of Durbin-Watson d Test

Null Hypothesis: First order autocorrelation does not exist.

Alternate Hypothesis: First order autocorrelation exists.

The above hypothesis is formulated to check for autocorrelation, which can either be positive or negative. We can also check separately for the presence of positive autocorrelation and negative autocorrelation. The hypothesis will be formulated accordingly.

Test Statistic for Durbin-Watson d Test

d = Σ (ut − ut−1)² / Σ ut²

ut = the residual value for the tth observation.
u = Yactual - Ycalculated
n = number of observations in the experiment.
d = the ratio of the sum of squared differences in successive
    residuals to the Residual Sum of Squares (RSS).

Analyzing the Durbin-Watson d Statistic

The value of d always lies between 0 and 4. If d is close to 2 it means there is no autocorrelation, and we accept the null hypothesis. We find out the critical values dL and dU for the given data. dL is the Lower critical value and dU is the Upper critical value. Using these values the presence of autocorrelation is checked according to the decision rules mentioned below –

Testing for positive autocorrelation - 
d < dL = positive autocorrelation is present
d > dU = No positive autocorrelation
dL < d < dU = Test is inconclusive
 
Testing for negative autocorrelation - 
4-d < dL = negative autocorrelation is present
4-d > dU = No negative autocorrelation
dL < 4-d < dU = Test is inconclusive

On the basis of these rules, we either accept or reject the null hypothesis.

Steps to Perform Durbin-Watson d Test

Let us take an example to understand how to perform this test.

Example: Using the import and GNP data of the U.K., test the autocorrelation of the data by applying the Durbin-Watson d-statistic. Use a 5% level of significance.

Step 1: Run the regression analysis and obtain the residuals.

The regression line is given by –

Ycalculated = a + bX, where
b = Σ (X − X̄)(Y − Ȳ) / Σ (X − X̄)² and a = Ȳ − bX̄

n = total number of observations.
Ā = mean value of A. Here A can be X or Y.
After calculating the equation for the regression line, get the corresponding Ycalculated values by substituting the corresponding X values. Then get the values for the residuals –

Residual(u) = Yactual - Ycalculated for each observation

Step 2: Compute the value of d.

Now put in the required values and find the value of d. For the given example the value of d will be 1.89.

Step 3: Find out the critical values dL and dU.

For the given sample size (n=15) and the number of independent variables k (in the given example it is 1), use the significance table to find the values.

The value of dL is 1.077 and dU is 1.361.

Step 4: Follow the decision rules mentioned above to conclude the results.

The rules which hold true are –

d > dU - No positive autocorrelation
4-d = 2.1 > dU - No negative autocorrelation.

Step 5: Conclude the results

Since there is no autocorrelation, either positive or negative, we accept the null hypothesis.
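The arithmetic behind the d statistic is easy to check in code. Below is a minimal Python sketch; the residuals are made up for illustration, and for the article's example you would instead plug in the residuals obtained from the U.K. import/GNP regression:

import numpy as np

# Hypothetical residuals from a fitted regression line (illustrative only).
u = np.array([1.2, -0.8, 0.5, 0.3, -1.1, 0.9, -0.4])

# Durbin-Watson d: squared successive differences over the residual sum of squares.
d = np.sum(np.diff(u) ** 2) / np.sum(u ** 2)
print(d)  # values near 2 indicate no first-order autocorrelation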
[ { "code": null, "e": 23953, "s": 23925, "text": "\n09 Mar, 2021" }, { "code": null, "e": 24207, "s": 23953, "text": "Durbin Watson Test: A test developed by statisticians professor James Durbin and Geoffrey Stuart Watson is used to detect autocorrelation in residuals from the Regression analysis. It is popularly known as Durbin-Watson d statistic, which is defined as " }, { "code": null, "e": 24270, "s": 24207, "text": "Let us first look at some terms to have a clear understanding-" }, { "code": null, "e": 24621, "s": 24270, "text": "Regression Analysis — Regression analysis is a set of statistical methods used for the estimation of relationships between a dependent variable( Y ) and one or more independent variables( x ). This method helps determine which factors influence the results the most and should definitely be involved in the experiment, and those which can be ignored." }, { "code": null, "e": 24787, "s": 24621, "text": "Residuals — It is the difference between the calculated/observed value and the predicted value for a particular observation. Here the residuals are represented by u." }, { "code": null, "e": 25410, "s": 24787, "text": "Autocorrelation — Autocorrelation represents the degree of similarity between a given time series and a lagged version of itself over successive time intervals. Autocorrelation measures the relationship between a variable’s current value and its past values. For example — The air temperature values are calculated for all days of a month, and it is observed that the value on the 1st day is more similar to the value on the 2nd day than the value on the 30th day. So the data is said to be autocorrelated as the values which were observed closer in time are more similar than the values which were observed farther apart." }, { "code": null, "e": 25446, "s": 25410, "text": "Assumptions of Durbin-Watson d Test" }, { "code": null, "e": 25506, "s": 25446, "text": "The errors are normally distributed with a mean value of 0." }, { "code": null, "e": 25533, "s": 25506, "text": "The errors are stationary." }, { "code": null, "e": 25587, "s": 25533, "text": "Null and Alternate Hypothesis of Durbin-Watson d Test" }, { "code": null, "e": 25648, "s": 25587, "text": "Null Hypothesis: First order autocorrelation does not exist." }, { "code": null, "e": 25706, "s": 25648, "text": "Alternate Hypothesis: First order autocorrelation exists." }, { "code": null, "e": 25952, "s": 25706, "text": "The above hypothesis is formulated to check for autocorrelation which can either be positive or negative. We can also check for the presence of positive autocorrelation and negative autocorrelation. The hypothesis will be formulated accordingly." }, { "code": null, "e": 25992, "s": 25952, "text": "Test Statistic for Durbin-Watson d Test" }, { "code": null, "e": 26220, "s": 25992, "text": "ut = the residual value for the tth observation.\nu = Yactual - Ycalculated\nnumber of observations in the experiment.\nd = the ratio of the sum of squared differences in successive\n residuals to the Residual \nSum of Squares(RSS)." }, { "code": null, "e": 26260, "s": 26220, "text": "Analyzing the Durbin-Watson d Statistic" }, { "code": null, "e": 26637, "s": 26260, "text": "The value of d always lies between 0 and 4. If d is close to 2 it means there is no autocorrelation, and we accept the null hypothesis. We find out the critical values dL and dU for the given data. dL is the Lower critical value and dU is the Upper critical value. 
Using these values the presence of autocorrelation is checked according to the decision rules mentioned below –" }, { "code": null, "e": 26959, "s": 26637, "text": "Testing for positive autocorrelation - \nd < dL = positive autocorrelation is present\nd > dU = No positive autocorrelation\ndL < d < dU = Test is inconclusive\n \nTesting for positive autocorrelation - \n4-d < dL = negative autocorrelation is present\n4-d > dU = No negative autocorrelation\ndL < 4-d < dU = Test is inconclusive" }, { "code": null, "e": 27036, "s": 26959, "text": "On the basis of these rules, we either accept or reject the null hypothesis." }, { "code": null, "e": 27074, "s": 27036, "text": "Steps to Perform Durbin-Watson d Test" }, { "code": null, "e": 27137, "s": 27074, "text": "Let us take an example to understand how to perform this test." }, { "code": null, "e": 27290, "s": 27137, "text": "Example: Using the import and GNP data of U.K. test the autocorrelation of the data by applying Durbin-Watson d-statistic. Use 5% level of significance." }, { "code": null, "e": 27352, "s": 27290, "text": "Step 1: Run the regression analysis and obtain the residuals." }, { "code": null, "e": 27387, "s": 27352, "text": "The regression line is given by – " }, { "code": null, "e": 27465, "s": 27387, "text": "n = total number of observations.\nĀ = mean value of A. Here A can be X or Y." }, { "code": null, "e": 27637, "s": 27465, "text": "After calculating the equation for the regression line gets the corresponding Ycalculated values by putting the corresponding X values. Then get the values for residuals –" }, { "code": null, "e": 27695, "s": 27637, "text": "Residual(u) = Yactual - Ycalculated for each observation" }, { "code": null, "e": 27727, "s": 27695, "text": "Step 2: Compute the value of d." }, { "code": null, "e": 27780, "s": 27727, "text": "Now put the required values and find the value of d." }, { "code": null, "e": 27832, "s": 27780, "text": "For the given example the value for d will be 1.89." }, { "code": null, "e": 27880, "s": 27832, "text": "Step 3: Find out the critical values dL and dU." }, { "code": null, "e": 28031, "s": 27880, "text": "For the given sample size(n=15) and the number of independent variables k(in the given example it is 1) use the significance table to find the values." }, { "code": null, "e": 28073, "s": 28031, "text": "The value of dL is 1.077 and dU is 1.361." }, { "code": null, "e": 28148, "s": 28073, "text": "Step 4: Follow the decision rules mentioned above to conclude the results." }, { "code": null, "e": 28180, "s": 28148, "text": "The rules which hold true are –" }, { "code": null, "e": 28265, "s": 28180, "text": "d > dU - No positive autocorrelation\n4-d = 2.1 > dU - No negative autocorrelation." }, { "code": null, "e": 28294, "s": 28265, "text": "Step 5: Conclude the results" }, { "code": null, "e": 28388, "s": 28294, "text": "Since there is no autocorrelation either positive or negative we accept the null hypothesis.." }, { "code": null, "e": 28402, "s": 28388, "text": "ML-Statistics" }, { "code": null, "e": 28419, "s": 28402, "text": "Machine Learning" }, { "code": null, "e": 28436, "s": 28419, "text": "Machine Learning" }, { "code": null, "e": 28534, "s": 28436, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 28543, "s": 28534, "text": "Comments" }, { "code": null, "e": 28556, "s": 28543, "text": "Old Comments" }, { "code": null, "e": 28595, "s": 28556, "text": "ML | Stochastic Gradient Descent (SGD)" }, { "code": null, "e": 28628, "s": 28595, "text": "Support Vector Machine Algorithm" }, { "code": null, "e": 28664, "s": 28628, "text": "CNN | Introduction to Pooling Layer" }, { "code": null, "e": 28720, "s": 28664, "text": "Difference between Informed and Uninformed Search in AI" }, { "code": null, "e": 28755, "s": 28720, "text": "Singular Value Decomposition (SVD)" }, { "code": null, "e": 28793, "s": 28755, "text": "ML | Logistic Regression using Python" }, { "code": null, "e": 28834, "s": 28793, "text": "Principal Component Analysis with Python" }, { "code": null, "e": 28868, "s": 28834, "text": "ML | Linear Discriminant Analysis" }, { "code": null, "e": 28904, "s": 28868, "text": "Difference between ANN, CNN and RNN" } ]
How to subtract days from a date in JavaScript?
To subtract days from a JavaScript Date object, use the setDate() method. Within it, get the current day of the month with getDate() and subtract the number of days. The JavaScript setDate() method sets the day of the month for a specified date according to local time.

You can try to run the following code to subtract 10 days from a given date.

<html>
   <head>
      <title>JavaScript setDate Method</title>
   </head>
   <body>
      <script>
         var dt = new Date("December 30, 2017 11:20:25");
         dt.setDate( dt.getDate() - 10 );
         document.write( dt );
      </script>
   </body>
</html>

Wed Dec 20 2017 11:20:25 GMT+0530 (India Standard Time)
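The same two lines work for the actual current date as well — a small variant that is not part of the original example:

// Subtract 10 days from the current date
var dt = new Date();              // now
dt.setDate( dt.getDate() - 10 );  // same setDate/getDate pattern as above
console.log( dt );                // prints the date 10 days ago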
[ { "code": null, "e": 1291, "s": 1062, "text": "To subtract days to a JavaScript Date object, use the setDate() method. Under that, get the current days and subtract days. JavaScript date setDate() method sets the day of the month for a specified date according to local time." }, { "code": null, "e": 1372, "s": 1291, "text": "You can try to run the following code to subtract 10 days from the current date." }, { "code": null, "e": 1383, "s": 1372, "text": " Live Demo" }, { "code": null, "e": 1649, "s": 1383, "text": "<html>\n <head>\n <title>JavaScript setDate Method</title>\n </head>\n <body>\n <script>\n var dt = new Date(\"December 30, 2017 11:20:25\");\n dt.setDate( dt.getDate() - 10 );\n document.write( dt );\n </script>\n </body>\n</html>" }, { "code": null, "e": 1705, "s": 1649, "text": "Wed Dec 20 2017 11:20:25 GMT+0530 (India Standard Time)" } ]
Chef - Resources
A Chef resource represents a piece of the operating system at its desired state. It is a statement of configuration policy that describes the state to which one wants to take a node's current configuration using resource providers. It helps in knowing the current status of the target machine using the Ohai mechanism of Chef. It also helps in defining the steps required to get the target machine to that state. The resources are grouped in recipes, which describe the working configuration. In the case of Chef, Chef::Platform maps the providers and platform versions of each node. At the beginning of every Chef-Client run, the Chef server collects the details of any machine's current state. Later, the Chef server uses those values to identify the correct provider. type 'name' do attribute 'value' action :type_of_action end In the above syntax, 'type' is the resource type and 'name' is the name that we are going to use. In the 'do' and 'end' block, we have the attribute of that resource and the action that we need to take for that particular resource. Every resource that we use in the recipe has its own set of actions, which is defined inside the 'do' and 'end' block. type 'name' do attribute 'value' action :type_of_action end All resources share a common set of functionality, actions, properties, conditional execution, notification, and relevant path of action. Use the apt_package resource to manage packages for the Debian and Ubuntu platforms. Use the bash resource to execute scripts using the Bash interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the batch resource to execute a batch script using the cmd.exe interpreter. The batch resource creates and executes a temporary file (similar to how the script resource behaves), rather than running the command inline. This resource inherits actions (:run and :nothing) and properties (creates, cwd, environment, group, path, timeout, and user) from the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the bff_package resource to manage packages for the AIX platform using the installp utility. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources. Use the chef_gem resource to install a gem only for the instance of Ruby that is dedicated to the Chef-Client. When a gem is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources. The chef_gem resource works with all of the same properties and options as the gem_package resource, but does not accept the gem_binary property because it always uses the CurrentGemEnvironment under which the Chef-Client is running. In addition to performing actions similar to the gem_package resource, the chef_gem resource behaves as described above. Use the cookbook_file resource to transfer files from a sub-directory of COOKBOOK_NAME/files/ to a specified path located on a host that is running the Chef-Client.
The file is selected according to file specificity, which allows different source files to be used based on the hostname, host platform (operating system, distro, or as appropriate), or platform version. Files that are located in the COOKBOOK_NAME/files/default subdirectory may be used on any platform. Use the cron resource to manage cron entries for time-based job scheduling. Properties for a schedule will default to * if not provided. The cron resource requires access to a crontab program, typically cron. Use the csh resource to execute scripts using the csh interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the deploy resource to manage and control deployments. This is a popular resource, but is also complex, having the most properties, multiple providers, the added complexity of callbacks, plus four attributes that support layout modifications from within a recipe. Use the directory resource to manage a directory, which is a hierarchy of folders that comprises all of the information stored on a computer. The root directory is the top-level, under which the rest of the directory is organized. The directory resource uses the name property to specify the path to a location in a directory. Typically, permission to access that location in the directory is required. Use the dpkg_package resource to manage packages for the dpkg platform. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources. Use the easy_install_package resource to manage packages for the Python platform. Use the env resource to manage environment keys in Microsoft Windows. After an environment key is set, Microsoft Windows must be restarted before the environment key is available to the Task Scheduler. Use the erl_call resource to connect to a node located within a distributed Erlang system. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the execute resource to execute a single command. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the file resource to manage the files directly on a node. Use the freebsd_package resource to manage packages for the FreeBSD platform. Use the gem_package resource to manage gem packages that are only included in recipes. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources. Use the git resource to manage source control resources that exist in a git repository. git version 1.6.5 (or higher) is required to use all of the functionality in the git resource. Use the group resource to manage a local group. Use the homebrew_package resource to manage packages for the Mac OS X platform. Use the http_request resource to send an HTTP request (GET, PUT, POST, DELETE, HEAD, or OPTIONS) with an arbitrary message. This resource is often useful when custom callbacks are necessary. Use the ifconfig resource to manage interfaces. 
Use the ips_package resource to manage packages (using Image Packaging System (IPS)) on the Solaris 11 platform. Use the ksh resource to execute scripts using the Korn shell (ksh) interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the link resource to create symbolic or hard links. Use the log resource to create log entries. The log resource behaves like any other resource: built into the resource collection during the compile phase, and then run during the execution phase. (To create a log entry that is not built into the resource collection, use Chef::Log instead of the log resource) Use the macports_package resource to manage packages for the Mac OS X platform. Use the mdadm resource to manage RAID devices in a Linux environment using the mdadm utility. The mdadm provider will create and assemble an array, but it will not create the config file that is used to persist the array upon reboot. If the config file is required, it must be done by specifying a template with the correct array layout, and then by using the mount provider to create a file systems table (fstab) entry. Use the mount resource to manage a mounted file system. Use the ohai resource to reload the Ohai configuration on a node. This allows recipes that change system attributes (like a recipe that adds a user) to refer to those attributes later on during the chef-client run. Use the package resource to manage packages. When the package is installed from a local file (such as with RubyGems, dpkg, or RPM Package Manager), the file must be added to the node using the remote_file or cookbook_file resources. Use the pacman_package resource to manage packages (using pacman) on the Arch Linux platform. Use the powershell_script resource to execute a script using the Windows PowerShell interpreter, much like how the script and script-based resources—bash, csh, perl, python, and ruby—are used. The powershell_script is specific to the Microsoft Windows platform and the Windows PowerShell interpreter. Use the python resource to execute scripts using the Python interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the reboot resource to reboot a node, a necessary step with some installations on certain platforms. This resource is supported for use on the Microsoft Windows, Mac OS X, and Linux platforms. Use the registry_key resource to create and delete registry keys in Microsoft Windows. Use the remote_directory resource to incrementally transfer a directory from a cookbook to a node. The directory that is copied from the cookbook should be located under COOKBOOK_NAME/files/default/REMOTE_DIRECTORY. The remote_directory resource will obey file specificity. Use the remote_file resource to transfer a file from a remote location using file specificity. This resource is similar to the file resource. Use the route resource to manage the system routing table in a Linux environment. Use the rpm_package resource to manage packages for the RPM Package Manager platform. 
Use the ruby resource to execute scripts using the Ruby interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the ruby_block resource to execute Ruby code during a Chef-Client run. Ruby code in the ruby_block resource is evaluated with other resources during convergence, whereas Ruby code outside of a ruby_block resource is evaluated before other resources, as the recipe is compiled. Use the script resource to execute scripts using a specified interpreter, such as Bash, csh, Perl, Python, or Ruby. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. Use the service resource to manage a service. Use the smartos_package resource to manage packages for the SmartOS platform. The solaris_package resource is used to manage packages for the Solaris platform. Use the subversion resource to manage source control resources that exist in a Subversion repository. Use the template resource to manage the contents of a file using an Embedded Ruby (ERB) template by transferring files from a sub-directory of COOKBOOK_NAME/templates/ to a specified path located on a host that is running the Chef-Client. This resource includes actions and properties from the file resource. Template files managed by the template resource follow the same file specificity rules as the remote_file and file resources. Use the user resource to add users, update existing users, remove users, and to lock/unlock user passwords. Use the windows_package resource to manage Microsoft Installer Package (MSI) packages for the Microsoft Windows platform. Use the windows_service resource to manage a service on the Microsoft Windows platform. Use the yum_package resource to install, upgrade, and remove packages with Yum for the Red Hat and CentOS platforms. The yum_package resource is able to resolve provides data for packages much like Yum can do when it is run from the command line. This allows a variety of options for installing packages, like minimum versions, virtual provides, and library names.
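To make the generic type 'name' do ... end syntax from the start of this chapter concrete, here is a hedged sketch of how the package, template, and service resources described above are commonly combined in a recipe; the 'httpd' package, the paths, and the template name are illustrative assumptions, not examples from this page:

# Install the package, render its config from COOKBOOK_NAME/templates/, and
# restart the service whenever the rendered config changes.
package 'httpd' do
  action :install
end

template '/etc/httpd/conf/httpd.conf' do
  source 'httpd.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  notifies :restart, 'service[httpd]', :delayed
end

service 'httpd' do
  action [:enable, :start]
end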
[ { "code": null, "e": 2898, "s": 2380, "text": "Chef resource represents a piece of the operating system at its desired state. It is a statement of configuration policy that describes the desired state of a node to which one wants to take the current configuration to using resource providers. It helps in knowing the current status of the target machine using the Ohai mechanism of Chef. It also helps in defining the steps required to perform to get the target machine to that state. The resources are grouped in recipes which describes the working configuration." }, { "code": null, "e": 3163, "s": 2898, "text": "In case of Chef, chef::Platform maps the providers and platform versions of each node. At the beginning of every Chef-client run, Chef server collects the details of any machines current state. Later, Chef server uses those values to identify the correct provider." }, { "code": null, "e": 3232, "s": 3163, "text": "type 'name' do \n attribute 'value' \n action :type_of_action \nend" }, { "code": null, "e": 3464, "s": 3232, "text": "In the above syntax, ‘type’ is the resource type and ‘name’ is the name that we are going to use. In the ‘do’ and ‘end’ block, we have the attribute of that resource and the action that we need to take for that particular resource." }, { "code": null, "e": 3583, "s": 3464, "text": "Every resource that we use in the recipe has its own set of actions, which is defined inside the ‘do’ and ‘end’ block." }, { "code": null, "e": 3653, "s": 3583, "text": "type 'name' do \n attribute 'value' \n action :type_of_action \nend " }, { "code": null, "e": 3791, "s": 3653, "text": "All resources share a common set of functionality, actions, properties, conditional execution, notification, and relevant path of action." }, { "code": null, "e": 3876, "s": 3791, "text": "Use the apt_package resource to manage packages for the Debian and Ubuntu platforms." }, { "code": null, "e": 4268, "s": 3876, "text": "Use the bash resource to execute scripts using the Bash interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 4491, "s": 4268, "text": "Use the batch resource to execute a batch script using the cmd.exe interpreter. The batch resource creates and executes a temporary file (similar to how the script resource behaves), rather than running the command inline." }, { "code": null, "e": 4862, "s": 4491, "text": "This resource inherits actions (:run and :nothing) and properties (creates, cwd, environment, group, path, timeout, and user) from the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 5085, "s": 4862, "text": "Use the bff_package resource to manage packages for the AIX platform using the installp utility. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources." }, { "code": null, "e": 5318, "s": 5085, "text": "Use the chef_gem resource to install a gem only for the instance of Ruby that is dedicated to the Chef-Client. 
When a gem is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources." }, { "code": null, "e": 5661, "s": 5318, "text": "The chef_gem resource works with all of the same properties and options as the gem_package resource, but does not accept the gem_binary property because it always uses the CurrentGemEnvironment under which the Chef-Client is running. In addition to performing actions similar to the gem_package resource, the chef_gem resource does the above." }, { "code": null, "e": 5825, "s": 5661, "text": "Use the cookbook_file resource to transfer files from a sub-directory of COOKBOOK_NAME/files/ to a specified path located on a host that is running the ChefClient." }, { "code": null, "e": 6129, "s": 5825, "text": "The file is selected according to file specificity, which allows different source files to be used based on the hostname, host platform (operating system, distro, or as appropriate), or platform version. Files that are located in the COOKBOOK_NAME/files/default subdirectory may be used on any platform." }, { "code": null, "e": 6338, "s": 6129, "text": "Use the cron resource to manage cron entries for time-based job scheduling. Properties for a schedule will default to * if not provided. The cron resource requires access to a crontab program, typically cron." }, { "code": null, "e": 6510, "s": 6338, "text": "Use the csh resource to execute scripts using the csh interpreter. This resource may also use any of the actions and properties that are available to the execute resource." }, { "code": null, "e": 6728, "s": 6510, "text": "Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 6996, "s": 6728, "text": "Use the deploy resource to manage and control deployments. This is a popular resource, but is also complex, having the most properties, multiple providers, the added complexity of callbacks, plus four attributes that support layout modifications from within a recipe." }, { "code": null, "e": 7227, "s": 6996, "text": "Use the directory resource to manage a directory, which is a hierarchy of folders that comprises all of the information stored on a computer. The root directory is the top-level, under which the rest of the directory is organized." }, { "code": null, "e": 7399, "s": 7227, "text": "The directory resource uses the name property to specify the path to a location in a directory. Typically, permission to access that location in the directory is required." }, { "code": null, "e": 7597, "s": 7399, "text": "Use the dpkg_package resource to manage packages for the dpkg platform. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources." }, { "code": null, "e": 7679, "s": 7597, "text": "Use the easy_install_package resource to manage packages for the Python platform." }, { "code": null, "e": 7881, "s": 7679, "text": "Use the env resource to manage environment keys in Microsoft Windows. After an environment key is set, Microsoft Windows must be restarted before the environment key is available to the Task Scheduler." }, { "code": null, "e": 8190, "s": 7881, "text": "Use the erl_call resource to connect to a node located within a distributed Erlang system. 
Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 8462, "s": 8190, "text": "Use the execute resource to execute a single command. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 8524, "s": 8462, "text": "Use the file resource to manage the files directly on a node." }, { "code": null, "e": 8602, "s": 8524, "text": "Use the freebsd_package resource to manage packages for the FreeBSD platform." }, { "code": null, "e": 8815, "s": 8602, "text": "Use the gem_package resource to manage gem packages that are only included in recipes. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources." }, { "code": null, "e": 8998, "s": 8815, "text": "Use the git resource to manage source control resources that exist in a git repository. git version 1.6.5 (or higher) is required to use all of the functionality in the git resource." }, { "code": null, "e": 9046, "s": 8998, "text": "Use the group resource to manage a local group." }, { "code": null, "e": 9126, "s": 9046, "text": "Use the homebrew_package resource to manage packages for the Mac OS X platform." }, { "code": null, "e": 9317, "s": 9126, "text": "Use the http_request resource to send an HTTP request (GET, PUT, POST, DELETE, HEAD, or OPTIONS) with an arbitrary message. This resource is often useful when custom callbacks are necessary." }, { "code": null, "e": 9365, "s": 9317, "text": "Use the ifconfig resource to manage interfaces." }, { "code": null, "e": 9478, "s": 9365, "text": "Use the ips_package resource to manage packages (using Image Packaging System (IPS)) on the Solaris 11 platform." }, { "code": null, "e": 9663, "s": 9478, "text": "Use the ksh resource to execute scripts using the Korn shell (ksh) interpreter. This resource may also use any of the actions and properties that are available to the execute resource." }, { "code": null, "e": 9881, "s": 9663, "text": "Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 9937, "s": 9881, "text": "Use the link resource to create symbolic or hard links." }, { "code": null, "e": 10247, "s": 9937, "text": "Use the log resource to create log entries. The log resource behaves like any other resource: built into the resource collection during the compile phase, and then run during the execution phase. (To create a log entry that is not built into the resource collection, use Chef::Log instead of the log resource)" }, { "code": null, "e": 10327, "s": 10247, "text": "Use the macports_package resource to manage packages for the Mac OS X platform." }, { "code": null, "e": 10561, "s": 10327, "text": "Use the mdadm resource to manage RAID devices in a Linux environment using the mdadm utility. The mdadm provider will create and assemble an array, but it will not create the config file that is used to persist the array upon reboot." 
}, { "code": null, "e": 10748, "s": 10561, "text": "If the config file is required, it must be done by specifying a template with the correct array layout, and then by using the mount provider to create a file systems table (fstab) entry." }, { "code": null, "e": 10804, "s": 10748, "text": "Use the mount resource to manage a mounted file system." }, { "code": null, "e": 11019, "s": 10804, "text": "Use the ohai resource to reload the Ohai configuration on a node. This allows recipes that change system attributes (like a recipe that adds a user) to refer to those attributes later on during the chef-client run." }, { "code": null, "e": 11252, "s": 11019, "text": "Use the package resource to manage packages. When the package is installed from a local file (such as with RubyGems, dpkg, or RPM Package Manager), the file must be added to the node using the remote_file or cookbook_file resources." }, { "code": null, "e": 11346, "s": 11252, "text": "Use the pacman_package resource to manage packages (using pacman) on the Arch Linux platform." }, { "code": null, "e": 11647, "s": 11346, "text": "Use the powershell_script resource to execute a script using the Windows PowerShell interpreter, much like how the script and script-based resources—bash, csh, perl, python, and ruby—are used. The powershell_script is specific to the Microsoft Windows platform and the Windows PowerShell interpreter." }, { "code": null, "e": 11825, "s": 11647, "text": "Use the python resource to execute scripts using the Python interpreter. This resource may also use any of the actions and properties that are available to the execute resource." }, { "code": null, "e": 12043, "s": 11825, "text": "Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 12240, "s": 12043, "text": "Use the reboot resource to reboot a node, a necessary step with some installations on certain platforms. This resource is supported for use on the Microsoft Windows, Mac OS X, and Linux platforms." }, { "code": null, "e": 12327, "s": 12240, "text": "Use the registry_key resource to create and delete registry keys in Microsoft Windows." }, { "code": null, "e": 12543, "s": 12327, "text": "Use the remote_directory resource to incrementally transfer a directory from a cookbook to a node. The directory that is copied from the cookbook should be located under COOKBOOK_NAME/files/default/REMOTE_DIRECTORY." }, { "code": null, "e": 12601, "s": 12543, "text": "The remote_directory resource will obey file specificity." }, { "code": null, "e": 12743, "s": 12601, "text": "Use the remote_file resource to transfer a file from a remote location using file specificity. This resource is similar to the file resource." }, { "code": null, "e": 12825, "s": 12743, "text": "Use the route resource to manage the system routing table in a Linux environment." }, { "code": null, "e": 12911, "s": 12825, "text": "Use the rpm_package resource to manage packages for the RPM Package Manager platform." }, { "code": null, "e": 13085, "s": 12911, "text": "Use the ruby resource to execute scripts using the Ruby interpreter. This resource may also use any of the actions and properties that are available to the execute resource." 
}, { "code": null, "e": 13303, "s": 13085, "text": "Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 13584, "s": 13303, "text": "Use the ruby_block resource to execute Ruby code during a Chef-Client run. Ruby code in the ruby_block resource is evaluated with other resources during convergence, whereas Ruby code outside of a ruby_block resource is evaluated before other resources, as the recipe is compiled." }, { "code": null, "e": 13805, "s": 13584, "text": "Use the script resource to execute scripts using a specified interpreter, such as Bash, csh, Perl, Python, or Ruby. This resource may also use any of the actions and properties that are available to the execute resource." }, { "code": null, "e": 14023, "s": 13805, "text": "Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence." }, { "code": null, "e": 14069, "s": 14023, "text": "Use the service resource to manage a service." }, { "code": null, "e": 14147, "s": 14069, "text": "Use the smartos_package resource to manage packages for the SmartOS platform." }, { "code": null, "e": 14229, "s": 14147, "text": "The solaris_package resource is used to manage packages for the Solaris platform." }, { "code": null, "e": 14331, "s": 14229, "text": "Use the subversion resource to manage source control resources that exist in a Subversion repository." }, { "code": null, "e": 14766, "s": 14331, "text": "Use the template resource to manage the contents of a file using an Embedded Ruby (ERB) template by transferring files from a sub-directory of COOKBOOK_NAME/templates/ to a specified path located on a host that is running the Chef-Client. This resource includes actions and properties from the file resource. Template files managed by the template resource follow the same file specificity rules as the remote_file and file resources." }, { "code": null, "e": 14874, "s": 14766, "text": "Use the user resource to add users, update existing users, remove users, and to lock/unlock user passwords." }, { "code": null, "e": 14996, "s": 14874, "text": "Use the windows_package resource to manage Microsoft Installer Package (MSI) packages for the Microsoft Windows platform." }, { "code": null, "e": 15084, "s": 14996, "text": "Use the windows_service resource to manage a service on the Microsoft Windows platform." }, { "code": null, "e": 15449, "s": 15084, "text": "Use the yum_package resource to install, upgrade, and remove packages with Yum for the Red Hat and CentOS platforms. The yum_package resource is able to resolve provides data for packages much like Yum can do when it is run from the command line. This allows a variety of options for installing packages, like minimum versions, virtual provides, and library names." }, { "code": null, "e": 15456, "s": 15449, "text": " Print" }, { "code": null, "e": 15467, "s": 15456, "text": " Add Notes" } ]
Recursive factorial method in Java
The factorial of a non-negative integer is the product of all positive integers less than or equal to it, with 0! defined as 1. The factorial can be computed using a recursive method. A program that demonstrates this is given as follows: public class Demo { public static long fact(long n) { if (n <= 1) return 1; else return n * fact(n - 1); } public static void main(String args[]) { System.out.println("The factorial of 6 is: " + fact(6)); System.out.println("The factorial of 0 is: " + fact(0)); } } The factorial of 6 is: 720 The factorial of 0 is: 1 Now let us understand the above program. The method fact() calculates the factorial of a number n. If n is less than or equal to 1, it returns 1. Otherwise, it recursively calls itself and returns n * fact(n - 1). A code snippet which demonstrates this is as follows: public static long fact(long n) { if (n <= 1) return 1; else return n * fact(n - 1); } In main(), the method fact() is called with different values. A code snippet which demonstrates this is as follows: public static void main(String args[]) { System.out.println("The factorial of 6 is: " + fact(6)); System.out.println("The factorial of 0 is: " + fact(0)); }
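One limitation worth noting (not covered by the original program): the long return type overflows for n > 20, so fact(21) silently returns a wrong value. A hedged variant of the same recursion using java.math.BigInteger avoids the overflow:

import java.math.BigInteger;

public class BigFactorial {
   // Same recursion as above, but exact for arbitrarily large n.
   public static BigInteger fact(BigInteger n) {
      if (n.compareTo(BigInteger.ONE) <= 0)
         return BigInteger.ONE;
      return n.multiply(fact(n.subtract(BigInteger.ONE)));
   }
   public static void main(String[] args) {
      System.out.println("The factorial of 25 is: " + fact(BigInteger.valueOf(25)));
   }
}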
[ { "code": null, "e": 1243, "s": 1062, "text": "The factorial of any non-negative integer is basically the product of all the integers that are smaller than or equal to it. The factorial can be obtained using a recursive method." }, { "code": null, "e": 1297, "s": 1243, "text": "A program that demonstrates this is given as follows:" }, { "code": null, "e": 1308, "s": 1297, "text": " Live Demo" }, { "code": null, "e": 1628, "s": 1308, "text": "public class Demo {\n public static long fact(long n) {\n if (n <= 1)\n return 1;\n else\n return n * fact(n - 1);\n }\n public static void main(String args[]) {\n System.out.println(\"The factorial of 6 is: \" + fact(6));\n System.out.println(\"The factorial of 0 is: \" + fact(0));\n }\n}" }, { "code": null, "e": 1680, "s": 1628, "text": "The factorial of 6 is: 720\nThe factorial of 0 is: 1" }, { "code": null, "e": 1721, "s": 1680, "text": "Now let us understand the above program." }, { "code": null, "e": 1947, "s": 1721, "text": "The method fact() calculates the factorial of a number n. If n is less than or equal to 1, it returns 1. Otherwise it recursively calls itself and returns n * fact(n - 1). A code snippet which demonstrates this is as follows:" }, { "code": null, "e": 2052, "s": 1947, "text": "public static long fact(long n) {\n if (n <= 1)\n return 1;\n else\n return n * fact(n - 1);\n}" }, { "code": null, "e": 2168, "s": 2052, "text": "In main(), the method fact() is called with different values. A code snippet which demonstrates this is as follows:" }, { "code": null, "e": 2331, "s": 2168, "text": "public static void main(String args[]) {\n System.out.println(\"The factorial of 6 is: \" + fact(6));\n System.out.println(\"The factorial of 0 is: \" + fact(0));\n}" } ]
Python Plotting Basics. Simple Charts with Matplotlib, Seaborn... | by Laura Fedoruk | Towards Data Science
This tutorial will cover the basics of how to use three Python plotting libraries — Matplotlib, Seaborn, and Plotly. After reviewing this tutorial you should be able to use these three libraries to: Plot basic bar charts and pie charts Set up and customize plot characteristics such as titles, axes, and labels Set general graphing styles/characteristics for your plots such as custom font and color choices Understand the differences in use and style between static Matplotlib and interactive Plotly graphics The data I'm using for these graphics is based on a handful of stories and survey results from the Elephant in the Valley, a survey of 200+ women in tech. Matplotlib, Seaborn, and Plotly Differences I've heard Matplotlib referred to as the 'grandfather' of Python plotting packages. It really has everything you'll likely need to plot your data, and there are lots of examples available on the web of how to use it. I've found that its drawback is that its default style isn't always visually appealing, and it can be complex to learn how to make the adjustments you'd like. Sometimes what seems like it should be simple requires quite a few lines of code. Seaborn is complementary to Matplotlib and, as can be seen from the examples below, it's built on top of Matplotlib functionality. It has more aesthetically pleasing default style options, especially for charts that visualize statistical data, and it makes creating compelling graphics that would be complex with Matplotlib easy. Plotly is an online visualization library with a Python API integration. After you've set up your account, when you create charts they are automatically linked in your files (and public, depending on your account/file settings). It is relatively easy to use and provides interactive graphing capabilities that can be easily embedded into websites. It also has good default style characteristics. Setting up Our Libraries and Data Frame I'm using Pandas to organize the data for these plots, and first set up the parameters for my Jupyter Notebook via the following imports. Note that %matplotlib inline simply allows you to run your notebook and have the plot automatically generate in your output, and you will only have to set up your Plotly default credentials once. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import matplotlib.font_manager %matplotlib inline import plotly.plotly as py import plotly.graph_objs as go plotly.tools.set_credentials_file(username='***', api_key='***') I went ahead and set up a data frame using pandas. The information we're graphing is as seen below: Matplotlib Barchart Example The following code produces the bar chart seen below using Matplotlib. You can see that we first set up our figure as a subplot with a specified figure size. We then set what we want to be our default text and color parameters for plotting with Matplotlib using the rcParams function, which handles all default styles/values. Note that when you use rcParams as in the example below, it acts as a global parameter and you are changing the default style for every subsequent use of Matplotlib. In this example I am using a custom color palette, which is a list of colors, but it would also be possible (and necessary for grouped bar charts) to use a single color value for each set of data you wanted to use for your bars. Also note that in addition to using hex color codes, you can use the names of colors supported by the library.
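The original article shows the data frame mentioned above only as an image, so the survey values are not recoverable here. For the plotting code that follows to run, here is a hedged placeholder reconstruction based solely on the column names the later code uses ('Question', 'Q Code', 'Percentage of Respondents'); the rows and numbers are invented stand-ins, not the survey's real results:

import pandas as pd

# Placeholder rows -- substitute the actual Elephant in the Valley figures.
df = pd.DataFrame({
    'Question': ['Question text 1', 'Question text 2', 'Question text 3',
                 'Question text 4', 'Question text 5', 'Question text 6'],
    'Q Code': ['Q1', 'Q2', 'Q3', 'Q4', 'Q5', 'Q6'],
    'Percentage of Respondents': [60, 47, 84, 66, 75, 90],
})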
We set our overall title, axis labels, axis limits, and even rotate our x-axis tick labels using the rotation parameter. fig, ax = plt.subplots(figsize = (12,6))plt.rcParams['font.sans-serif'] = 'Arial'plt.rcParams['font.family'] = 'sans-serif'plt.rcParams['text.color'] = '#909090'plt.rcParams['axes.labelcolor']= '#909090'plt.rcParams['xtick.color'] = '#909090'plt.rcParams['ytick.color'] = '#909090'plt.rcParams['font.size']=12color_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']ind = np.arange(len(df['Question']))bars1 = ax.bar(ind, df['Percentage of Respondents'], color = color_palette_list, label='Percentage of yes responses to question')ax.set_title("Elephant in the Valley Survey Results")ax.set_ylabel("Percentage of Yes Responses to Question")ax.set_ylim((0,100))ax.set_xticks(range(0,len(ind)))ax.set_xticklabels(list(df['Q Code']), rotation=70)ax.set_xlabel("Question Key") This produces the following output: Matplotlib Pie Chart Example The following code produces the pie chart seen below. Like our bar chart example, we first set up our figure as a subplot, then reset our default Matplotlib style parameters via rcParams. In this case we are also defining our data within the code below vs. taking from our data frame. We are choosing to explode the pie chart sections, hence setting up a variable we are calling explode, and we are setting the color choices to being the first two entries in our color palette list previously defined above. Setting the axes to be ‘equal’ ensures that we will have a circular pie chart. Autopct formats our values as strings with a set number of decimal points. We are also specifying the start angle of the pie chart in order to get the format we want, as well as using pctdistance and labeldistance to place our text. After we set the title, we are also choosing to use a legend for this chart, and specifying that the legend should not have a frame/visible bounding box, and we are specifically setting the legend location by ‘anchoring’ it using the specified bbox_to_anchor parameter. Useful tip — if you want your legend to live outside your figure, first specify the location parameter to be a particular corner such as ‘upper left’ and then specify the location that you would like to pin the ‘upper left’ corner of your legend to using bbox_to_anchor. fig, ax = plt.subplots()plt.rcParams['font.sans-serif'] = 'Arial'plt.rcParams['font.family'] = 'sans-serif'plt.rcParams['text.color'] = '#909090'plt.rcParams['axes.labelcolor']= '#909090'plt.rcParams['xtick.color'] = '#909090'plt.rcParams['ytick.color'] = '#909090'plt.rcParams['font.size']=12labels = ['Bay Area / Silicon Valley', 'Non Bay Area / Silicon Valley']percentages = [91, 9]explode=(0.1,0)ax.pie(percentages, explode=explode, labels=labels, colors=color_palette_list[0:2], autopct='%1.0f%%', shadow=False, startangle=0, pctdistance=1.2,labeldistance=1.4)ax.axis('equal')ax.set_title("Elephant in the Valley Survey Respondent Make-up")ax.legend(frameon=False, bbox_to_anchor=(1.5,0.8)) This produces the following output: Seaborn Bar Chart Example As can be seen from the following code, Seaborn is really just a wrapper around Matplotlib. In this particular example where we are overriding the default rcParams and using such a simple chart type, it doesn’t make any difference whether you’re using a Matplotlib or Seaborn plot, but for quick graphics where you’re not changing default styles, or more complex plot types, I’ve found Seaborn is often good choice. 
fig, ax = plt.subplots(figsize = (12,6))plt.rcParams['font.sans-serif'] = 'Arial'plt.rcParams['font.family'] = 'sans-serif'plt.rcParams['text.color'] = '#909090'plt.rcParams['axes.labelcolor']= '#909090'plt.rcParams['xtick.color'] = '#909090'plt.rcParams['ytick.color'] = '#909090'plt.rcParams['font.size']=12ind = np.arange(len(df['Question']))color_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']sns.barplot(x=df['Q Code'], y = df['Percentage of Respondents'], data = df, palette=color_palette_list, label="Percentage of yes responses to question", ax=ax, ci=None)ax.set_title("Elephant in the Valley Survey Results")ax.set_ylabel("Percentage of Yes Responses to Question")ax.set_ylim(0,100)ax.set_xlabel("Question Key")ax.set_xticks(range(0,len(ind)))ax.set_xticklabels(list(df['Q Code']), rotation=45) Here the only difference is we’re using sns.barplot and the output can be the same: Plotly Bar Chart Example The following code sets up our bar chart using Plotly. We’re importing our libraries, and using the same color palette. Then we are setting up our bar chart parameters, followed by our overall layout parameters such as our title and then we’re using dictionaries to set up how we want parameters such as our axes and fonts. Within these dictionaries we are able to specify sub parameters such as x-axis tick label rotation and y-axis range. We are then creating our figure, feeding it our data and layout, and outputting our file to our Plotly account so that we can embed it as an interactive web graphic. import plotly.plotly as pyimport plotly.graph_objs as gocolor_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']trace = go.Bar( x=df['Q Code'], y=df['Percentage of Respondents'], marker=dict( color=color_palette_list))data = [trace]layout = go.Layout( title='Elephant in the Valley Survey Results', font=dict(color='#909090'), xaxis=dict( title='Question Key', titlefont=dict( family='Arial, sans-serif', size=12, color='#909090' ), showticklabels=True, tickangle=-45, tickfont=dict( family='Arial, sans-serif', size=12, color='#909090' ),), yaxis=dict( range=[0,100], title="Percentage of Yes Responses to Question", titlefont=dict( family='Arial, sans-serif', size=12, color='#909090' ), showticklabels=True, tickangle=0, tickfont=dict( family='Arial, sans-serif', size=12, color='#909090' ) ))fig = go.Figure(data=data, layout=layout)py.iplot(fig, filename='barplot-elephant-in-the-valley') This produces the following output: Plotly Pie Chart Example By now you’ve likely caught on to how we are formatting and calling the parameters within Matplotlib and Plotly to build our visualizations. Let’s take a look at one last chart — an example of how we can create a similar pie chart to the one above using Plotly. The following code sets up and outputs our chart. We are specifying our start angle through the rotation parameter, and noting what information should be available when we hover over each component of our pie chart using the hoverover parameter. 
labels = ['Bay Area / Silicon Valley', 'Non Bay Area / Silicon Valley']percentages = [91, 9]trace = go.Pie(labels=labels, hoverinfo='label+percent', values=percentages, textposition='outside', marker=dict(colors=color_palette_list[0:2]), rotation=90)layout = go.Layout( title="Elephant in the Valley Survey Respondent Make-up", font=dict(family='Arial', size=12, color='#909090'), legend=dict(x=0.9, y=0.5) )data = [trace]fig = go.Figure(data=data, layout=layout)py.iplot(fig, filename='basic_pie_chart_elephant_in_the_valley') This produces the following output: And that’s it, we’re all done creating and customizing our bar and pie charts. Hopefully this was helpful to you in learning how to use these libraries in a way that allows you to create bespoke graphical solutions for your data. As a final note, I’d like to mention that I think it’s important to be cautious about using pie charts. While they are considered a ‘basic’ chart type, they often don’t increase the understanding of underlying data, so use them sparingly and only where you know that they provide value in comprehension. Happy plotting! Resources: Getting Started with Plotly for Python Matplotlib rcParams documentation
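Two brief practical additions, neither from the original tutorial. First, since the rcParams assignments above change Matplotlib's defaults globally, plt.rc_context can scope styling to a single figure; second, savefig exports the static Matplotlib/Seaborn figures (the filename and dpi below are arbitrary illustrative choices):

import matplotlib.pyplot as plt

# Style overrides apply only inside the block; defaults are restored afterwards.
with plt.rc_context({'font.size': 12, 'text.color': '#909090'}):
    fig, ax = plt.subplots()
    ax.bar(range(3), [10, 20, 30])
    # Export while the scoped style is active; name and dpi are examples only.
    fig.savefig('survey_results.png', dpi=300, bbox_inches='tight')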
[ { "code": null, "e": 371, "s": 172, "text": "This tutorial will cover the basics of how to use three Python plotting libraries — Matplotlib, Seaborn, and Plotly. After reviewing this tutorial you should be able to use these three libraries to:" }, { "code": null, "e": 408, "s": 371, "text": "Plot basic bar charts and pie charts" }, { "code": null, "e": 483, "s": 408, "text": "Set up and customize plot characteristics such as titles, axes, and labels" }, { "code": null, "e": 580, "s": 483, "text": "Set general graphing styles/characteristics for your plots such as custom font and color choices" }, { "code": null, "e": 682, "s": 580, "text": "Understand the differences in use and style between static Matplotlib and interactive Plotly graphics" }, { "code": null, "e": 838, "s": 682, "text": "The data I’m using for these graphics is based on a handful of stories and survey results from the Elephant in the Valley , a survey of 200+ women in tech." }, { "code": null, "e": 882, "s": 838, "text": "Matplotlib, Seaborn, and Plotly Differences" }, { "code": null, "e": 1344, "s": 882, "text": "I’ve heard Matplotlib referred to as the ‘grandfather’ of python plotting packages. It really has everything you’ll likely need to plot your data, and there are lots of examples available on the web of how to use it. I’ve found that it’s drawback is in that its default style isn’t always visually appealing, and it can be complex to learn how to make the adjustments you’d like. Sometimes what seems like it should be simple requires quite a few lines of code." }, { "code": null, "e": 1685, "s": 1344, "text": "Seaborn is complementary to Matplotlib and as can be seen from the examples below, it’s built ontop of Matplotlib functionality. It has more aesthetically pleasing default style options and for specific charts — especially for visualizing statistical data, and it makes creating compelling graphics that may be complex with Matplotlib easy." }, { "code": null, "e": 2080, "s": 1685, "text": "Plotly is an online visualization library with a Python API integration. After you’ve set up your account, when you create charts they are automatically linked in your files (and public depending on your account/file settings). It is relatively easy to use and provides interactive graphing capabilities that can be easily embedded into websites. It also has good default style characteristics." }, { "code": null, "e": 2120, "s": 2080, "text": "Setting up Our Libraries and Data Frame" }, { "code": null, "e": 2457, "s": 2120, "text": "I’m using Pandas to organize the data for these plots, and first set up the parameters for my Jupyter Notebook via the following imports. Note that the %matplotlib inline simply allows you to run your notebook and have the plot automatically generate in your output, and you will only have to setup your Plotly default credentials once." }, { "code": null, "e": 2716, "s": 2457, "text": "import pandas as pdimport numpy as npimport matplotlib.pyplot as pltimport seaborn as snsimport matplotlib.font_manager%matplotlib inlineimport plotly.plotly as pyimport plotly.graph_objs as goplotly.tools.set_credentials_file(username='***', api_key='***')" }, { "code": null, "e": 2816, "s": 2716, "text": "I went ahead and set up a data frame using pandas. The information we’re graphing is as seen below:" }, { "code": null, "e": 2844, "s": 2816, "text": "Matplotlib Barchart Example" }, { "code": null, "e": 3335, "s": 2844, "text": "The following code produces the bar chart seen below using Matplotlib. 
You can see that we first set up our figure as a subplot with a specified figure size. We then set what we want to be our default text and color parameters for plotting with Matplotlib using the rcParams function which handles all default styles/values. Note that when you use rcParams as in the example below, it acts as a global parameter and you are changing the default style for every time you then use Matplotlib." }, { "code": null, "e": 3674, "s": 3335, "text": "In this example I am using a custom color palette which is a list of colors, but it would also be possible (and necessary for grouped bar charts) to use a single color value for each set of data you wanted to use for your bars. Also note that in addition to using hex color codes, you can use the names of colors supported by the library." }, { "code": null, "e": 3795, "s": 3674, "text": "We set our overall title, axis labels, axis limits, and even rotate our x-axis tick labels using the rotation parameter." }, { "code": null, "e": 4638, "s": 3795, "text": "fig, ax = plt.subplots(figsize = (12,6))plt.rcParams['font.sans-serif'] = 'Arial'plt.rcParams['font.family'] = 'sans-serif'plt.rcParams['text.color'] = '#909090'plt.rcParams['axes.labelcolor']= '#909090'plt.rcParams['xtick.color'] = '#909090'plt.rcParams['ytick.color'] = '#909090'plt.rcParams['font.size']=12color_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']ind = np.arange(len(df['Question']))bars1 = ax.bar(ind, df['Percentage of Respondents'], color = color_palette_list, label='Percentage of yes responses to question')ax.set_title(\"Elephant in the Valley Survey Results\")ax.set_ylabel(\"Percentage of Yes Responses to Question\")ax.set_ylim((0,100))ax.set_xticks(range(0,len(ind)))ax.set_xticklabels(list(df['Q Code']), rotation=70)ax.set_xlabel(\"Question Key\")" }, { "code": null, "e": 4674, "s": 4638, "text": "This produces the following output:" }, { "code": null, "e": 4703, "s": 4674, "text": "Matplotlib Pie Chart Example" }, { "code": null, "e": 5523, "s": 4703, "text": "The following code produces the pie chart seen below. Like our bar chart example, we first set up our figure as a subplot, then reset our default Matplotlib style parameters via rcParams. In this case we are also defining our data within the code below vs. taking from our data frame. We are choosing to explode the pie chart sections, hence setting up a variable we are calling explode, and we are setting the color choices to being the first two entries in our color palette list previously defined above. Setting the axes to be ‘equal’ ensures that we will have a circular pie chart. Autopct formats our values as strings with a set number of decimal points. We are also specifying the start angle of the pie chart in order to get the format we want, as well as using pctdistance and labeldistance to place our text." }, { "code": null, "e": 6064, "s": 5523, "text": "After we set the title, we are also choosing to use a legend for this chart, and specifying that the legend should not have a frame/visible bounding box, and we are specifically setting the legend location by ‘anchoring’ it using the specified bbox_to_anchor parameter. Useful tip — if you want your legend to live outside your figure, first specify the location parameter to be a particular corner such as ‘upper left’ and then specify the location that you would like to pin the ‘upper left’ corner of your legend to using bbox_to_anchor." 
}, { "code": null, "e": 6793, "s": 6064, "text": "fig, ax = plt.subplots()plt.rcParams['font.sans-serif'] = 'Arial'plt.rcParams['font.family'] = 'sans-serif'plt.rcParams['text.color'] = '#909090'plt.rcParams['axes.labelcolor']= '#909090'plt.rcParams['xtick.color'] = '#909090'plt.rcParams['ytick.color'] = '#909090'plt.rcParams['font.size']=12labels = ['Bay Area / Silicon Valley', 'Non Bay Area / Silicon Valley']percentages = [91, 9]explode=(0.1,0)ax.pie(percentages, explode=explode, labels=labels, colors=color_palette_list[0:2], autopct='%1.0f%%', shadow=False, startangle=0, pctdistance=1.2,labeldistance=1.4)ax.axis('equal')ax.set_title(\"Elephant in the Valley Survey Respondent Make-up\")ax.legend(frameon=False, bbox_to_anchor=(1.5,0.8))" }, { "code": null, "e": 6829, "s": 6793, "text": "This produces the following output:" }, { "code": null, "e": 6855, "s": 6829, "text": "Seaborn Bar Chart Example" }, { "code": null, "e": 7271, "s": 6855, "text": "As can be seen from the following code, Seaborn is really just a wrapper around Matplotlib. In this particular example where we are overriding the default rcParams and using such a simple chart type, it doesn’t make any difference whether you’re using a Matplotlib or Seaborn plot, but for quick graphics where you’re not changing default styles, or more complex plot types, I’ve found Seaborn is often good choice." }, { "code": null, "e": 8170, "s": 7271, "text": "fig, ax = plt.subplots(figsize = (12,6))plt.rcParams['font.sans-serif'] = 'Arial'plt.rcParams['font.family'] = 'sans-serif'plt.rcParams['text.color'] = '#909090'plt.rcParams['axes.labelcolor']= '#909090'plt.rcParams['xtick.color'] = '#909090'plt.rcParams['ytick.color'] = '#909090'plt.rcParams['font.size']=12ind = np.arange(len(df['Question']))color_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']sns.barplot(x=df['Q Code'], y = df['Percentage of Respondents'], data = df, palette=color_palette_list, label=\"Percentage of yes responses to question\", ax=ax, ci=None)ax.set_title(\"Elephant in the Valley Survey Results\")ax.set_ylabel(\"Percentage of Yes Responses to Question\")ax.set_ylim(0,100)ax.set_xlabel(\"Question Key\")ax.set_xticks(range(0,len(ind)))ax.set_xticklabels(list(df['Q Code']), rotation=45)" }, { "code": null, "e": 8254, "s": 8170, "text": "Here the only difference is we’re using sns.barplot and the output can be the same:" }, { "code": null, "e": 8279, "s": 8254, "text": "Plotly Bar Chart Example" }, { "code": null, "e": 8886, "s": 8279, "text": "The following code sets up our bar chart using Plotly. We’re importing our libraries, and using the same color palette. Then we are setting up our bar chart parameters, followed by our overall layout parameters such as our title and then we’re using dictionaries to set up how we want parameters such as our axes and fonts. Within these dictionaries we are able to specify sub parameters such as x-axis tick label rotation and y-axis range. We are then creating our figure, feeding it our data and layout, and outputting our file to our Plotly account so that we can embed it as an interactive web graphic." 
}, { "code": null, "e": 10131, "s": 8886, "text": "import plotly.plotly as pyimport plotly.graph_objs as gocolor_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']trace = go.Bar( x=df['Q Code'], y=df['Percentage of Respondents'], marker=dict( color=color_palette_list))data = [trace]layout = go.Layout( title='Elephant in the Valley Survey Results', font=dict(color='#909090'), xaxis=dict( title='Question Key', titlefont=dict( family='Arial, sans-serif', size=12, color='#909090' ), showticklabels=True, tickangle=-45, tickfont=dict( family='Arial, sans-serif', size=12, color='#909090' ),), yaxis=dict( range=[0,100], title=\"Percentage of Yes Responses to Question\", titlefont=dict( family='Arial, sans-serif', size=12, color='#909090' ), showticklabels=True, tickangle=0, tickfont=dict( family='Arial, sans-serif', size=12, color='#909090' ) ))fig = go.Figure(data=data, layout=layout)py.iplot(fig, filename='barplot-elephant-in-the-valley')" }, { "code": null, "e": 10167, "s": 10131, "text": "This produces the following output:" }, { "code": null, "e": 10192, "s": 10167, "text": "Plotly Pie Chart Example" }, { "code": null, "e": 10454, "s": 10192, "text": "By now you’ve likely caught on to how we are formatting and calling the parameters within Matplotlib and Plotly to build our visualizations. Let’s take a look at one last chart — an example of how we can create a similar pie chart to the one above using Plotly." }, { "code": null, "e": 10700, "s": 10454, "text": "The following code sets up and outputs our chart. We are specifying our start angle through the rotation parameter, and noting what information should be available when we hover over each component of our pie chart using the hoverover parameter." }, { "code": null, "e": 11386, "s": 10700, "text": "labels = ['Bay Area / Silicon Valley', 'Non Bay Area / Silicon Valley']percentages = [91, 9]trace = go.Pie(labels=labels, hoverinfo='label+percent', values=percentages, textposition='outside', marker=dict(colors=color_palette_list[0:2]), rotation=90)layout = go.Layout( title=\"Elephant in the Valley Survey Respondent Make-up\", font=dict(family='Arial', size=12, color='#909090'), legend=dict(x=0.9, y=0.5) )data = [trace]fig = go.Figure(data=data, layout=layout)py.iplot(fig, filename='basic_pie_chart_elephant_in_the_valley')" }, { "code": null, "e": 11422, "s": 11386, "text": "This produces the following output:" }, { "code": null, "e": 11652, "s": 11422, "text": "And that’s it, we’re all done creating and customizing our bar and pie charts. Hopefully this was helpful to you in learning how to use these libraries in a way that allows you to create bespoke graphical solutions for your data." }, { "code": null, "e": 11956, "s": 11652, "text": "As a final note, I’d like to mention that I think it’s important to be cautious about using pie charts. While they are considered a ‘basic’ chart type, they often don’t increase the understanding of underlying data, so use them sparingly and only where you know that they provide value in comprehension." }, { "code": null, "e": 11972, "s": 11956, "text": "Happy plotting!" }, { "code": null, "e": 11983, "s": 11972, "text": "Resources:" }, { "code": null, "e": 12022, "s": 11983, "text": "Getting Started with Plotly for Python" } ]
Boolean.ToString() Method in C#
The Boolean.ToString() method in C# converts the value of this instance to its equivalent string representation (either "True" or "False").

Following is the syntax −

public override string ToString ();

Let us now see an example to implement the Boolean.ToString() method −

using System;
public class Demo {
   public static void Main(){
      bool b = true;
      string res = b.ToString();
      Console.WriteLine("Return Value = "+res);
   }
}

This will produce the following output −

Return Value = True

Let us now see another example to implement the Boolean.ToString() method −

using System;
public class Demo {
   public static void Main(){
      bool b = false;
      string res = b.ToString();
      Console.WriteLine("Return Value = "+res);
   }
}

This will produce the following output −

Return Value = False
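As a small aside (not part of the original examples): the strings returned here are the Boolean.TrueString and Boolean.FalseString constants, so the result round-trips cleanly through Boolean.Parse −

using System;
public class Demo {
   public static void Main(){
      bool b = true;
      string s = b.ToString();
      // "True" is exactly the Boolean.TrueString constant
      Console.WriteLine(s == Boolean.TrueString);
      // ...so parsing it back recovers the original value
      Console.WriteLine(Boolean.Parse(s));
   }
}

This will produce the following output −

True
True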
[ { "code": null, "e": 1202, "s": 1062, "text": "The Boolean.ToString() method in C# converts the value of this instance to its equivalent string representation (either \"True\" or \"False\")." }, { "code": null, "e": 1228, "s": 1202, "text": "Following is the syntax −" }, { "code": null, "e": 1264, "s": 1228, "text": "public override string ToString ();" }, { "code": null, "e": 1335, "s": 1264, "text": "Let us now see an example to implement the Boolean.ToString() method −" }, { "code": null, "e": 1508, "s": 1335, "text": "using System;\npublic class Demo {\n public static void Main(){\n bool b = true;\n string res = b.ToString();\n Console.WriteLine(\"Return Value = \"+res);\n }\n}" }, { "code": null, "e": 1549, "s": 1508, "text": "This will produce the following output −" }, { "code": null, "e": 1569, "s": 1549, "text": "Return Value = True" }, { "code": null, "e": 1645, "s": 1569, "text": "Let us now see another example to implement the Boolean.ToString() method −" }, { "code": null, "e": 1819, "s": 1645, "text": "using System;\npublic class Demo {\n public static void Main(){\n bool b = false;\n string res = b.ToString();\n Console.WriteLine(\"Return Value = \"+res);\n }\n}" }, { "code": null, "e": 1860, "s": 1819, "text": "This will produce the following output −" }, { "code": null, "e": 1881, "s": 1860, "text": "Return Value = False" } ]
AWS Lambda – Function in C#
This chapter explains in detail how to work with an AWS Lambda function in C#. Here, we are going to use Visual Studio to write and deploy the code to AWS Lambda. For any information and help regarding the installation of Visual Studio and adding the AWS toolkit to Visual Studio, please refer to the Introduction chapter in this tutorial. Once you are done with the installation of Visual Studio, please follow the steps given below. Refer to the respective screenshots for a better understanding −

Open your Visual Studio and follow the steps to create a new project. Click on File -> New -> Project.

Now, the following screen is displayed wherein you select AWS Lambda for Visual C#. Select AWS Lambda Project (.NET Core).

You can change the name if required; we will keep the default name here. Click OK to continue.

The next step will ask you to select a Blueprint. Select Empty function for this example and click Finish. It will create a new project structure as shown below −

Now, select Function.cs, which is the main file where the handler with event and context is created for AWS Lambda.

The display of the file Functions.cs is as follows −

You can use the command given below to serialize the input and output parameters to the AWS Lambda function.

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

The handler is displayed as follows −

public string FunctionHandler(string input, ILambdaContext context) {
   return input?.ToUpper();
}

Various components of the above code are explained below −

FunctionHandler − This is the starting point of the C# AWS Lambda function.

String input − The parameter to the handler; the string input has all the event data such as an S3 object, API gateway details etc.

ILambdaContext context − ILambdaContext is an interface which has context details. It has details like the Lambda function name, memory details, timeout details etc.

The Lambda handler can be written in a sync or an async way. If written in a sync way as shown above, you can have a return type. If async, the handler should return Task or Task<T> instead.

Now, let us deploy the AWS Lambda C# function and test the same. Right click the project and click Publish to AWS Lambda as shown below −

Fill up the Function Name and click on Next. The next screen displayed is the Advanced Function Details as shown −

Enter the Role Name, Memory and Timeout details. Note that here we have selected the existing role created, and used a memory of 128MB and a timeout of 10 seconds. Once done, click Upload to publish to the AWS Lambda console.
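As a sketch of the async form (illustrative, not taken from this chapter; System.Threading.Tasks is already imported by the project template):

public async Task<string> FunctionHandler(string input, ILambdaContext context) {
   // Stands in for an awaited call, e.g. to another AWS service
   await Task.Delay(10);
   return input?.ToUpper();
}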
TYPE − This is the name of the handler. It is basically the namespace.classname.

METHOD − This is the name of the function handler.

The code for the handler signature is as shown below −

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace AWSLambda3 {
   public class Function {

      /// <summary>
      /// A simple function that takes a string and does a ToUpper
      /// </summary>
      /// <param name="input"></param>
      /// <param name="context"></param>
      /// <returns></returns>
      public string FunctionHandler(string input, ILambdaContext context) {
         return input?.ToUpper();
      }
   }
}

Note that here the assembly is AWSLambda3, the Type is namespace.classname, which is AWSLambda3.Function, and the Method is FunctionHandler. Thus, the handler signature is AWSLambda3::AWSLambda3.Function::FunctionHandler

The Context Object gives useful information about the runtime in the AWS environment. The properties available in the context object are as shown in the following table −

MemoryLimitInMB − This will give details of the memory configured for the AWS Lambda function
FunctionName − Name of the AWS Lambda function
FunctionVersion − Version of the AWS Lambda function
InvokedFunctionArn − ARN used to invoke this function
AwsRequestId − AWS request id for the AWS function created
LogStreamName − CloudWatch log stream name
LogGroupName − CloudWatch group name
ClientContext − Information about the client application and device when used with the AWS mobile SDK
Identity − Information about the Amazon Cognito identity when used with the AWS mobile SDK
RemainingTime − Remaining execution time till the function will be terminated
Logger − The logger associated with the context

In this section, let us test some of the above properties in AWS Lambda in C#. Observe the sample code given below −

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace AWSLambda6 {
   public class Function {

      /// <summary>
      /// </summary>
      /// <param name="input"></param>
      /// <param name="context"></param>
      /// <returns></returns>
      public void FunctionHandler(ILambdaContext context) {
         LambdaLogger.Log("Function name: " + context.FunctionName+"\n");
         context.Logger.Log("RemainingTime: " + context.RemainingTime+"\n");
         LambdaLogger.Log("LogGroupName: " + context.LogGroupName+"\n");
      }
   }
}

The related output that you can observe when you invoke the above code in C# is as shown below −

The related output that you can observe when you invoke the above code in the AWS Console is as shown below −

For logging, you can use two functions −

context.Logger.Log
LambdaLogger.Log

Observe the following example shown here −

public void FunctionHandler(ILambdaContext context) {
   LambdaLogger.Log("Function name: " + context.FunctionName+"\n");
   context.Logger.Log("RemainingTime: " + context.RemainingTime+"\n");
   LambdaLogger.Log("LogGroupName: " + context.LogGroupName+"\n");
}

The corresponding output of the code given above is shown here −

You can get the logs from CloudWatch as shown below −

This section discusses error handling in C#.
For error handling, the Exception class has to be extended as shown in the example below −

namespace example {
   public class AccountAlreadyExistsException : Exception {
      public AccountAlreadyExistsException(String message) :
         base(message) {
      }
   }
}
namespace example {
   public class Handler {
      public static void CreateAccount() {
         throw new AccountAlreadyExistsException("Error in AWS Lambda!");
      }
   }
}

The corresponding output for the code given above is as given below −

{
   "errorType": "LambdaException",
   "errorMessage": "Error in AWS Lambda!"
}
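If the error should instead be handled inside the function rather than surfacing as an invocation error, a minimal sketch reusing the classes above (plus Amazon.Lambda.Core for ILambdaContext) is shown here; SafeHandler is an illustrative name, not part of the original example −

namespace example {
   public class SafeHandler {
      public string FunctionHandler(string input, ILambdaContext context) {
         try {
            Handler.CreateAccount();
            return "account created";
         }
         catch (AccountAlreadyExistsException ex) {
            // Log and degrade gracefully instead of failing the invocation
            context.Logger.Log("Handled: " + ex.Message + "\n");
            return "account already exists";
         }
      }
   }
}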
[ { "code": null, "e": 2899, "s": 2406, "text": "This chapter will explain you how to work with AWS Lambda function in C# in detail. Here, we are going to use visual studio to write and deploy the code to AWS Lambda. For any information and help regarding installation of Visual studio and adding AWS toolkit to Visual Studio, please refer to the Introduction chapter in this tutorial. Once you are done with installation of Visual Studio, please follow the steps given below. Refer to the respective screenshots for a better understanding −" }, { "code": null, "e": 3000, "s": 2899, "text": "Open your Visual Studio and follow the steps to create new project. Click on File -> New -> Project." }, { "code": null, "e": 3123, "s": 3000, "text": "Now, the following screen is displayed wherein you select AWS Lambda for Visual C#. Select AWS Lambda Project (.NET Core)." }, { "code": null, "e": 3215, "s": 3123, "text": "You can change the name if required, will keep here the default name. Click OK to continue." }, { "code": null, "e": 3265, "s": 3215, "text": "The next step will ask you to select a Blueprint." }, { "code": null, "e": 3379, "s": 3265, "text": "Select Empty function for this example and click Finish. It will create a new project structure as shown below −" }, { "code": null, "e": 3494, "s": 3379, "text": "Now, select Function.cs which is the main file where the handler with event and context is created for AWS Lambda." }, { "code": null, "e": 3547, "s": 3494, "text": "The display of the file Functions.cs is as follows −" }, { "code": null, "e": 3652, "s": 3547, "text": "You can use the command given below to serialize the input and output parameters to AWS Lambda function." }, { "code": null, "e": 3740, "s": 3652, "text": "[assembly: \nLambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]\n" }, { "code": null, "e": 3778, "s": 3740, "text": "The handler is displayed as follows −" }, { "code": null, "e": 3878, "s": 3778, "text": "public string FunctionHandler(string input, ILambdaContext context) {\n return input?.ToUpper();\n}" }, { "code": null, "e": 3937, "s": 3878, "text": "Various components of the above code are explained below −" }, { "code": null, "e": 4012, "s": 3937, "text": "FunctionHandler −This is the starting point of the C# AWS Lambda function." }, { "code": null, "e": 4137, "s": 4012, "text": "String input − The parameters to the handler string input has all the event data such as S3 object, API gateway details etc." }, { "code": null, "e": 4298, "s": 4137, "text": "ILambdaContext context − ILamdaContext is an interface which has context details. It has details like lambda function name, memory details, timeout details etc." }, { "code": null, "e": 4471, "s": 4298, "text": "The Lambda handler can be invoked in sync and async way. If invoked in a sync way as shown above you can have the return type. If async than the return type has to be void." }, { "code": null, "e": 4600, "s": 4471, "text": "Now, let us deploy the AWS Lambda C# and test the same. Right click the project and click Publish to AWS Lambda as shown below −" }, { "code": null, "e": 4715, "s": 4600, "text": "Fill up the Function Name and click on Next. The next screen displayed is the Advanced Function Details as shown −" }, { "code": null, "e": 4929, "s": 4715, "text": "Enter the Role Name, Memory and Timeout. detailsNote that here we have selected the existing role created and used memory as 128MB and timeout as 10seconds. Once done click Upload to publish to AWS Lambda console." 
}, { "code": null, "e": 5132, "s": 4929, "text": "You can see the following screen once AWS Lambda function is uploaded. Click Invoke to execute the AWS Lambda function created. At present, it shows error as it needs some input as per the code written." }, { "code": null, "e": 5366, "s": 5132, "text": "Now, let us enter some sample input and Invoke it again. Note that here we have entered some text in the input box and the same on clicking invoke is displayed in uppercase in the response section. The log output is displayed below −" }, { "code": null, "e": 5488, "s": 5366, "text": "Now, let us also check AWS console to see if the function is created as we have deployed the function from Visual Studio." }, { "code": null, "e": 5632, "s": 5488, "text": "The Lambda function created above is aws lambda using csharp and the same is displayed in AWS console as shown in the screenshots given below −" }, { "code": null, "e": 5722, "s": 5632, "text": "Handler is start point for AWS to execute. The name of the handler should be defined as −" }, { "code": null, "e": 5746, "s": 5722, "text": "ASSEMBLY::TYPE::METHOD\n" }, { "code": null, "e": 5800, "s": 5746, "text": "The details of the signature are explained as below −" }, { "code": null, "e": 5952, "s": 5800, "text": "ASSEMBLY − This is the name of the .NET assembly for the application created. It is basically the name of the folder from where the project is created." }, { "code": null, "e": 6033, "s": 5952, "text": "TYPE − This is the name of the handler. It is basically the namespace.classname." }, { "code": null, "e": 6084, "s": 6033, "text": "METHOD − This is the name of the function handler." }, { "code": null, "e": 6135, "s": 6084, "text": "The code for handler signature is as shown below −" }, { "code": null, "e": 6840, "s": 6135, "text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Amazon.Lambda.Core;\n\n// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.\n[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]\n\nnamespace AWSLambda3 {\n public class Function {\n\n /// <summary>\n /// A simple function that takes a string and does a ToUpper\n /// </summary>\n /// <param name=\"input\"></param>\n /// <param name=\"context\"></param>\n /// <returns></returns>\n public string FunctionHandler(string input, ILambdaContext context) {\n return input?.ToUpper();\n }\n }\n}" }, { "code": null, "e": 7049, "s": 6840, "text": "Note that here the assembly is AWSLamda3, Type is namespace.classname which is AWSLambda3.Function and Method is FunctionHandler. Thus, the handler signature is AWSLamda3::AWSLambda3.Function::FunctionHandler" }, { "code": null, "e": 7212, "s": 7049, "text": "Context Object gives useful information about the runtime in AWS environment. 
The properties available in the context object are as shown in the following table −" }, { "code": null, "e": 7228, "s": 7212, "text": "MemoryLimitInMB" }, { "code": null, "e": 7300, "s": 7228, "text": "This will give details of the memory configured for AWS Lambda function" }, { "code": null, "e": 7313, "s": 7300, "text": "FunctionName" }, { "code": null, "e": 7341, "s": 7313, "text": "Name of AWS Lambda function" }, { "code": null, "e": 7357, "s": 7341, "text": "FunctionVersion" }, { "code": null, "e": 7388, "s": 7357, "text": "Version of AWS Lambda function" }, { "code": null, "e": 7407, "s": 7388, "text": "InvokedFunctionArn" }, { "code": null, "e": 7441, "s": 7407, "text": "ARN used to invoke this function." }, { "code": null, "e": 7454, "s": 7441, "text": "AwsRequestId" }, { "code": null, "e": 7498, "s": 7454, "text": "AWS request id for the AWS function created" }, { "code": null, "e": 7512, "s": 7498, "text": "LogStreamName" }, { "code": null, "e": 7539, "s": 7512, "text": "Cloudwatch log stream name" }, { "code": null, "e": 7552, "s": 7539, "text": "LogGroupName" }, { "code": null, "e": 7574, "s": 7552, "text": "Cloudwatch group name" }, { "code": null, "e": 7588, "s": 7574, "text": "ClientContext" }, { "code": null, "e": 7671, "s": 7588, "text": "Information about the client application and device when used with AWS mobile SDK " }, { "code": null, "e": 7680, "s": 7671, "text": "Identity" }, { "code": null, "e": 7758, "s": 7680, "text": "Information about the amazon cogbnito identity when used with AWS mobile SDK " }, { "code": null, "e": 7772, "s": 7758, "text": "RemainingTime" }, { "code": null, "e": 7835, "s": 7772, "text": "Remaining execution time till the function will be terminated " }, { "code": null, "e": 7842, "s": 7835, "text": "Logger" }, { "code": null, "e": 7881, "s": 7842, "text": "The logger associated with the context" }, { "code": null, "e": 7998, "s": 7881, "text": "In this section, let us test some of the above properties in AWS Lambda in C#. 
Observe the sample code given below −" }, { "code": null, "e": 8821, "s": 7998, "text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Amazon.Lambda.Core;\n// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.\n[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]\n\nnamespace AWSLambda6 {\n public class Function {\n\n /// <summary>\n /// </summary>\n /// <param name=\"input\"></param>\n /// <param name=\"context\"></param>\n /// <returns></returns>\n public void FunctionHandler(ILambdaContext context) {\n LambdaLogger.Log(\"Function name: \" + context.FunctionName+\"\\n\");\n context.Logger.Log(\"RemainingTime: \" + context.RemainingTime+\"\\n\");\n LambdaLogger.Log(\"LogGroupName: \" + context.LogGroupName+\"\\n\"); \n }\n }\n}" }, { "code": null, "e": 8918, "s": 8821, "text": "The related output that you can observe when you invoke the above code in C# is as shown below −" }, { "code": null, "e": 9024, "s": 8918, "text": "The related output that you can observe when you invoke the above code in AWS Console is as shown below −" }, { "code": null, "e": 9065, "s": 9024, "text": "For logging, you can use two functions −" }, { "code": null, "e": 9084, "s": 9065, "text": "context.Logger.Log" }, { "code": null, "e": 9103, "s": 9084, "text": "context.Logger.Log" }, { "code": null, "e": 9120, "s": 9103, "text": "LambdaLogger.Log" }, { "code": null, "e": 9137, "s": 9120, "text": "LambdaLogger.Log" }, { "code": null, "e": 9180, "s": 9137, "text": "Observe the following example shown here −" }, { "code": null, "e": 9454, "s": 9180, "text": "public void FunctionHandler(ILambdaContext context) {\n LambdaLogger.Log(\"Function name: \" + context.FunctionName+\"\\n\");\n context.Logger.Log(\"RemainingTime: \" + context.RemainingTime+\"\\n\");\n LambdaLogger.Log(\"LogGroupName: \" + context.LogGroupName+\"\\n\"); \n}" }, { "code": null, "e": 9519, "s": 9454, "text": "The corresponding output fo the code given above is shown here −" }, { "code": null, "e": 9573, "s": 9519, "text": "You can get the logs from CloudWatch as shown below −" }, { "code": null, "e": 9716, "s": 9573, "text": "This section discusses about error handling in C#. For error handling,Exception class has to be extended as shown in the example shown below −" }, { "code": null, "e": 10084, "s": 9716, "text": "namespace example { \n public class AccountAlreadyExistsException : Exception {\n public AccountAlreadyExistsException(String message) :\n base(message) {\n }\n }\n} \nnamespace example {\n public class Handler {\n public static void CreateAccount() {\n throw new AccountAlreadyExistsException(\"Error in AWS Lambda!\");\n }\n }\n}" }, { "code": null, "e": 10154, "s": 10084, "text": "The corresponding output for the code given above is as given below −" }, { "code": null, "e": 10236, "s": 10154, "text": "{\n \"errorType\": \"LambdaException\",\n \"errorMessage\": \"Error in AWS Lambda!\"\n}\n" }, { "code": null, "e": 10271, "s": 10236, "text": "\n 35 Lectures \n 7.5 hours \n" }, { "code": null, "e": 10295, "s": 10271, "text": " Mr. 
Pradeep Kshetrapal" }, { "code": null, "e": 10330, "s": 10295, "text": "\n 30 Lectures \n 3.5 hours \n" }, { "code": null, "e": 10350, "s": 10330, "text": " Priyanka Choudhary" }, { "code": null, "e": 10385, "s": 10350, "text": "\n 44 Lectures \n 7.5 hours \n" }, { "code": null, "e": 10413, "s": 10385, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 10446, "s": 10413, "text": "\n 51 Lectures \n 6 hours \n" }, { "code": null, "e": 10462, "s": 10446, "text": " Manuj Aggarwal" }, { "code": null, "e": 10495, "s": 10462, "text": "\n 41 Lectures \n 5 hours \n" }, { "code": null, "e": 10507, "s": 10495, "text": " AR Shankar" }, { "code": null, "e": 10540, "s": 10507, "text": "\n 14 Lectures \n 1 hours \n" }, { "code": null, "e": 10553, "s": 10540, "text": " Zach Miller" }, { "code": null, "e": 10560, "s": 10553, "text": " Print" }, { "code": null, "e": 10571, "s": 10560, "text": " Add Notes" } ]
Choosing a hyperparameter tuning library — ray[tune] or aisaratuners?
Creating a model is easy. What's hard is building a model with optimal hyperparameters! You can create a neural network with a random number of hidden layers and it will probably give you results better than random. But to get optimal results, you need to find the best hyperparameters that optimize the results. This process of finding the best hyperparameters is known as hyperparameter tuning.

This is basically a time-consuming and computationally expensive process, as we have to search a pretty wide space in order to find these. And also, we as humans can't try each and every one of the combinations, and thus can't claim a set of hyperparameters as the best ones. Enter hyperparameter tuning libraries. These libraries search the parameter space and calculate the metrics for each combination. They let you know the optimized hyperparameters for your model, and the best thing is you have to make only minimal changes to the code.

Two such libraries, that I will be comparing in this article, are ray[tune] and aisaratuners. I will be testing them on the iris dataset to see which library performs better.

Spoiler Alert: If you just want to select one library from these, select aisaratuners. It outperforms ray[tune] by a large margin both in terms of accuracy and time.

First, we will start with creating a model with a random number of hidden layers to show how the model performs. Let's create a function for creating a model.

After that, let's create a function that loads the data, creates the model and also does the training. It will return the trained model and the history object containing details about the training metrics.

This model's accuracy will be quite low. In fact, it was able to achieve an accuracy of only 34.21%, which is slightly above random guessing.

Tune is a Python library for experiment execution and hyperparameter tuning at any scale. Its core features are distributed hyperparameter tuning and automatic logging to TensorBoard. It lets you choose from a variety of algorithms for searching the hyperparameter space. You can see all of them here.

Let's start with installing the library:

pip install -q -U ray[tune]

For ray[tune] to work, we first have to create a callback that can report back to Tune at each epoch, so it can make better decisions on how to choose the best hyperparameters.

After that, we have to create a function that accepts the object of the hyperparameter space and goes through it to find the best parameters.

This function is called by the library, and a hyperparameter space along with the number of samples (number of trials) is passed to it. The larger the number of trials, the better the chances of finding the best hyperparameters.

The number of trials was set to 8, and the model was able to achieve an accuracy of 92.11%. Let's see if aisaratuners can do it better.

The aisaratuners library is a hyperparameter tuning library for machine learning models that aims to give you the best optimized model for your dataset. It uses AiSara's proprietary state-of-the-art algorithm. Aisaratuners uses Latin Hypercube Sampling to do the initial sampling of hyperparameters and then uses its hyperparameter tuning API, which utilizes SOTA pattern recognition, so that the sampling space can be reduced in order to pick the best parameters for your model. It can be better explained by the following image from the Aisara Medium post.

Now let's get hands-on with aisaratuners. We will start by installing and importing it.
!pip install aisaratuners
from aisaratuners.aisara_keras_tuner import Hp, HpOptimization

We will start by first defining the hyperparameters and their space. I will set the same space as I did for ray[tune] for better comparison.

These hyperparameters are used in a function that tries values from this space to figure out the best possible values.

Finally, we have to run the optimizer.

So, aisaratuners was able to achieve an accuracy of 97.37% in half as many trials (4 vs 8) as ray[tune]. This just shows that aisaratuners converges much faster due to its proprietary state-of-the-art algorithm for finding the best hyperparameters, and it also results in much better accuracy.

The best thing I liked about aisaratuners is its simplicity, as we had to make only minimal changes which made sense intuitively as well, and also that it gives the capability to visualize the process. Below are some visualizations of the optimization procedure.

Lastly, I wanted to figure out if ray[tune] was able to achieve a higher accuracy if we tested with more trials. I doubled the number of trials from 8 to 16. This resulted in more time taken to find the best hyperparameters, but it was able to achieve an accuracy of 97.37% in that case.

We can conclude the following from the experiments:

aisaratuners was able to outperform ray[tune], both with respect to number of trials (4 versus 8) and test accuracy (97% versus 92%).

ray[tune] was able to achieve equal accuracy compared to aisaratuners, but with four times as many trials (16 versus 4).

There were fewer changes in code with aisaratuners than in the case of ray[tune], i.e., it was much easier to set up aisaratuners.

aisaratuners gives access to an API to plot the optimization process to give a better understanding to the user.

I personally did face some issues with ray[tune] when using it with Convolutional Layers.

Feel free to reach out if you have any questions regarding the article. You can check the full code on my Google Colab here.

If you feel that the above content was useful for you, do share it.
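For readers who want a concrete starting point, here is a rough, self-contained sketch of the ray[tune] pieces described earlier. It assumes ray[tune]'s classic tune.run/tune.report API and sklearn's iris loader; it is an approximation, not the exact notebook code:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from tensorflow import keras
from ray import tune

def train_iris(config):
    # Load and split the iris data
    data = load_iris()
    x_train, x_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.25, random_state=0)
    # Build a small network whose width and learning rate come from the search space
    model = keras.Sequential([
        keras.layers.Dense(config["hidden"], activation="relu", input_shape=(4,)),
        keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(config["lr"]),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=config["epochs"], verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    # Report the metric so Tune can compare trials
    tune.report(accuracy=acc)

analysis = tune.run(
    train_iris,
    num_samples=8,
    config={"hidden": tune.randint(4, 64),
            "lr": tune.loguniform(1e-4, 1e-1),
            "epochs": tune.choice([50, 100, 200])})
print(analysis.get_best_config(metric="accuracy", mode="max"))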
[ { "code": null, "e": 570, "s": 172, "text": "Creating a model is easy. What’s hard is building a model with optimal hyperparameters!. You can create a neural network with a random number of hidden layers and it will probably give you results better than random. But to get optimal results, you need to get the best hyperparameters that optimize the results. This process of finding the best hyperparameters is known as Hyperparameters tuning." }, { "code": null, "e": 1102, "s": 570, "text": "This is basically a time-consuming and a computationally expensive process as we have to search a pretty wide space in order to find these. And also we as humans, can’t try each and every one of the combinations and thus can’t claim a set of hyper parameters as the best ones. Enter hyper parameters tuning libraries. These libraries search the parameters space and calculate the metrics for each one. It lets you know the optimized hyper parameters for your model, and the best thing is you have to do minimal changes in the code." }, { "code": null, "e": 1277, "s": 1102, "text": "Two such libraries, that I will be comparing in this article, are ray[tune] and aisaratuners. I will be testing them on the iris dataset to see which library performs better." }, { "code": null, "e": 1442, "s": 1277, "text": "Spoiler Alert: If you just want to select one library from these, select aisaratuners. It outperforms ray[tune] by a large margin both in terms of accuracy and time" }, { "code": null, "e": 1599, "s": 1442, "text": "First, we will start with creating a model with random number of hidden layers to show how the model performs. Let’s create a function for creating a model." }, { "code": null, "e": 1804, "s": 1599, "text": "After that, let’s create a function that loads the data, creates the model and also does the training. It will return the trained model and the history object containing details about the training metrics" }, { "code": null, "e": 1948, "s": 1804, "text": "This model accuracy will be quite low. In fact, it was able to achieve an accuracy of only 34.21% which is slightly above than random guessing." }, { "code": null, "e": 2251, "s": 1948, "text": "Tune is a Python library for experiment execution and hyperparameter tuning at any scale. It’s core features are distributed hyperparameter tuning and automatic logging to Tensorboard. It lets you choose from a variety of algorithms for searching the hyperparameters space. You can see all of them here" }, { "code": null, "e": 2291, "s": 2251, "text": "Let’s start with installing the library" }, { "code": null, "e": 2319, "s": 2291, "text": "pip install -q -U ray[tune]" }, { "code": null, "e": 2495, "s": 2319, "text": "For ray[tune] to work, we first have to create a callback that can report at each epoch back to tune, so it can make better decision on how to choose the best hyperparameters." }, { "code": null, "e": 2635, "s": 2495, "text": "After that, we have to create a function that accepts the object of hyperparameter space and goes through that to find the best parameters." }, { "code": null, "e": 2861, "s": 2635, "text": "This function is called by the library and a hyperparameter space along with the number of samples (number of trials) are passed to it. The larger the number of trials, better the chances of finding the best hyper parameters." }, { "code": null, "e": 2997, "s": 2861, "text": "The number of trials were set to 8 and the model was able to achieve an accuracy of 92.11%. Let’s see if aisaratuners can do it better." 
}, { "code": null, "e": 3545, "s": 2997, "text": "Aisaratuners library is a hyperparameter tuning library for Machine learning models that aims to give you the best optimized model for your dataset. It uses AiSara propreitary state of the art algorithm. Aisaratuners uses Latin Hypercube Sampling to do the initial sampling of hyperparameters and then uses its HyperParameter tuning API which utlilizes SOTA pattern recognition so that the sampling space can be reduced in order to pick the best parameters for your model. It can be better explained by the following image from Aisara medium post." }, { "code": null, "e": 3633, "s": 3545, "text": "Now let’s get hands-on with aisaratuners. We will start by installing and importing it." }, { "code": null, "e": 3721, "s": 3633, "text": "!pip install aisaratunersfrom aisaratuners.aisara_keras_tuner import Hp, HpOptimization" }, { "code": null, "e": 3862, "s": 3721, "text": "We will start by first defining the hyperparameters and their space. I will set the same space as I did for ray[tune] for better comparison." }, { "code": null, "e": 3981, "s": 3862, "text": "These hyperparameters are used in a function that tries values from this space to figure out the best possible values." }, { "code": null, "e": 4020, "s": 3981, "text": "Finally, we have to run the optimizer." }, { "code": null, "e": 4125, "s": 4020, "text": "So, Aisaratuners was able to achieve an accuracy of 97.37% in half as many trials (4 vs 8) as ray[tune]." }, { "code": null, "e": 4309, "s": 4125, "text": "This just shows that aisaratuners converges much faster due to its proprietary state-of-the-art algorithm for finding the best hyperparameters and also results in much better accuracy" }, { "code": null, "e": 4573, "s": 4309, "text": "The best thing I liked about aisaratuners is its simplicity as we had to do minimum changes which made sense intuitively as well and also that it gives the capability to visualize the process as well. Below are some visualizations about the optimization procedure" }, { "code": null, "e": 4861, "s": 4573, "text": "Lastly, I wanted to figure out if ray[tune] was able to achieve a higher accuracy if we tested with more trials. I doubled the number of trials from 8 to 16. This resulted in more time taken to find the best hyper parameters but it was able to achieve an accuracy of 97.37% in that case." }, { "code": null, "e": 4913, "s": 4861, "text": "We can conclude the following from the experiments:" }, { "code": null, "e": 5509, "s": 4913, "text": "aisaratuners library was able to outperform ray[tune], both with respect to number of trials (4 versus 8) and test accuracy (97% versus 92%).ray[tune] was able to achieve equal accuracy as compared to aisaratuners but with four times as many trials (4 versus 16).There were less changes in code in aisaratuners as there were incase of ray[tune] i.e, it was much more easy to setup aisaratuners.aisaratuners gives access to an API to plot the optimization process to give a better understanding to the user.I personally did face some issues with ray[tune] when using it with Convolutional Layers." }, { "code": null, "e": 5651, "s": 5509, "text": "aisaratuners library was able to outperform ray[tune], both with respect to number of trials (4 versus 8) and test accuracy (97% versus 92%)." }, { "code": null, "e": 5774, "s": 5651, "text": "ray[tune] was able to achieve equal accuracy as compared to aisaratuners but with four times as many trials (4 versus 16)." 
}, { "code": null, "e": 5906, "s": 5774, "text": "There were less changes in code in aisaratuners as there were incase of ray[tune] i.e, it was much more easy to setup aisaratuners." }, { "code": null, "e": 6019, "s": 5906, "text": "aisaratuners gives access to an API to plot the optimization process to give a better understanding to the user." }, { "code": null, "e": 6109, "s": 6019, "text": "I personally did face some issues with ray[tune] when using it with Convolutional Layers." }, { "code": null, "e": 6233, "s": 6109, "text": "Feel free to reach out if you have any questions regarding the article. You can check the full code on my Google Colab here" } ]
Creating Art with Conv Neural Nets
In this post, I am using a convolutional neural network to make some neat art!

Up until now, art has always been a work of imagination left best with creatives. Artists have had a unique way of expressing themselves and the times they lived in through a unique lens, specific to the way they viewed the world around them. Be it Da Vinci and his wonder-inspiring work or Van Gogh and his twisted look at the world, art has always inspired millions throughout the generations.

Technology has always inspired artists to push the boundaries and explore the possibilities beyond what has already been done. The first film camera was not invented as a technology to aid art, but merely a tool to capture reality. Clearly, artists saw it differently, giving birth to the entire film and animation industry. This is true for every major tech we have created; artists have always found a way to use the novel tool creatively.

With the recent advances in machine learning, we can generate incredible art pieces within minutes that may have taken an expert artist years to complete just about a century ago. Machine learning creates the possibility of prototyping an art piece at least 100x faster while having the medium collaborate with the artist. The beauty herein lies in the fact that this new wave of technological advancement will enhance the way art is created and looked at by upgrading the tools at hand.

Here, I will use Python to take any image and turn it into the style of any artist of my choosing. Google released a similar product known as "Deep Dream" in 2015, and the internet took to it with throbbing enthusiasm. They essentially trained a convolutional neural net that classifies images and then used an optimization technique to enhance the patterns in the input image, as opposed to its own weights, based on what the network had learned. Soon after this, the website "Deepart" came out that allowed users to have any image converted to a painting style of their choice within a few clicks!

To understand how this "magic" called the style transfer process works, we will write our own script in Keras with a TensorFlow backend. I will use a base image (a photo of my favorite animal) and a style reference image. My script will use Vincent Van Gogh's "Starry Night" as the reference and apply it to the base image. Here we go by first importing the necessary dependencies:

from __future__ import print_function
import time
from PIL import Image
import numpy as np
from keras import backend
from keras.models import Model
from keras.applications.vgg16 import VGG16
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave

So we will feed these images into the neural net by first converting them into the de-facto format for all neural nets: tensors. The variable function from the Keras backend (TensorFlow) is equivalent to tf.Variable. The parameter to this will be the image converted to an array, and then we do the same thing for the style image. We then create a combination image that can later store our end results, by using a placeholder to initialize it with a given width and height.
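For concreteness, the two backend calls mentioned above map almost directly onto raw TensorFlow (a quick sketch, not from the original script):

import numpy as np
from keras import backend

# backend.variable wraps a graph variable holding the given array,
# roughly what tf.Variable gives you directly
kvar = backend.variable(np.zeros((1, 512, 512, 3)), dtype='float32')

# backend.placeholder reserves a graph input of the given shape,
# which is how the combination image gets initialized below
ph = backend.placeholder((1, 512, 512, 3))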
Here is the content image:

height = 512
width = 512
content_image_path = 'images/elephant.jpg'
content_image = Image.open(content_image_path)
content_image = content_image.resize((height, width))
content_image

Here I load up the style image:

style_image_path = '/Users/vivek/Desktop/VanGogh.jpg'
style_image = Image.open(style_image_path)
style_image = style_image.resize((height, width))
style_image

Next, we convert both these images so that they are in a suitable form for numerical processing. We add another dimension (beyond the usual height, width and 3 colour channels) so we can later concatenate the representations of the two images into a common data structure:

content_array = np.asarray(content_image, dtype='float32')
content_array = np.expand_dims(content_array, axis=0)
print(content_array.shape)
style_array = np.asarray(style_image, dtype='float32')
style_array = np.expand_dims(style_array, axis=0)
print(style_array.shape)

We will be using the VGG network moving forward. Keras has wrapped this model really well for us to use easily, as we will see moving forward. VGG16 is a 16-layer convolutional net, created by the Visual Geometry Group at Oxford, that won the ImageNet competition in 2014. The idea here is that a CNN pre-trained for image classification on thousands of different images already knows how to encode information in a given image. It has learned features at each layer that can detect certain generalized features. These are the features we will be using to perform style transfer. We do not need the classification block at the top of this net, because its fully connected layers and softmax function help classify the images by squashing the dimensionality of the feature map and outputting a probability. We are not classifying, just transferring.

This is essentially an optimization where we have some loss function that measures the error value that we will be attempting to minimize. Our loss function, in this case, can be decomposed into two parts:

1) Content Loss
We initialize the total loss to zero and add each of these to it. First is the content loss. An image always has a content component and a style component. We know that the features a CNN learns are arranged in order of progressively more abstract compositions. Since the higher-level features are more abstract, such as detecting faces, we can associate them with content. When we run our output image and our reference image through the network, we obtain a set of feature representations for both from a hidden layer of our choice. We then measure the Euclidean distance between them to calculate our loss.

2) Style Loss
This is also a function of our network's hidden layer outputs but is slightly more complex. We still pass both images through the net to observe their activations, but instead of comparing the raw activations directly for content, we add an extra step to measure the correlation between the activations. We take what is referred to as the Gram matrix of the activations at a given layer in the network, for both images. This will measure which features tend to activate together. It basically represents the probability of different features co-occurring in different parts of the image. Once we have this, we can define the style loss as a Euclidean distance between the Gram matrices of the reference image and the output image, and compute the total style loss as the weighted sum of the style loss at each layer we choose.
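Written out explicitly (notation as in Gatys et al.; this simply restates what the code below computes), with F and P the layer activations of the combination and content images, and G and A the Gram matrices of the combination and style images at layer l:

L_content = Σ (F_ij − P_ij)²

E_l = Σ (G_ij − A_ij)² / (4 · N² · M²)   (in the code, N is hard-coded to the 3 colour channels and M = height × width)

loss = content_weight · L_content + Σ_l (style_weight / L) · E_l + total_variation_weight · L_tv   (L = number of chosen style layers)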
Now that we have the losses, we need to define the gradients of the output image with respect to the loss, and then use those gradients to iteratively minimize the loss.

We now need to massage the input data to match what was done in Simonyan and Zisserman (2015), the paper that introduced the VGG network model.

For this, we need to perform two transformations:

Subtract the mean RGB value (computed previously on the ImageNet training set and easily obtainable from Google searches) from each pixel.

Flip the ordering of the multi-dimensional array from RGB to BGR (the ordering used in the paper).

content_array[:, :, :, 0] -= 103.939
content_array[:, :, :, 1] -= 116.779
content_array[:, :, :, 2] -= 123.68
content_array = content_array[:, :, :, ::-1]
style_array[:, :, :, 0] -= 103.939
style_array[:, :, :, 1] -= 116.779
style_array[:, :, :, 2] -= 123.68
style_array = style_array[:, :, :, ::-1]

Now we're ready to use these arrays to define variables in Keras' backend (the TensorFlow graph). We also introduce a placeholder variable to store the combination image that retains the content of the content image while incorporating the style of the style image.

content_image = backend.variable(content_array)
style_image = backend.variable(style_array)
combination_image = backend.placeholder((1, height, width, 3))

We now go ahead and concatenate all this image data into a single tensor that can be used for processing with Keras' VGG16 model.

input_tensor = backend.concatenate([content_image, style_image, combination_image], axis=0)

As previously stated, since we're not interested in the classification problem, we don't need the fully connected layers or the final softmax classifier. We only need the part of the model marked in green in the table below.

It is trivial for us to get access to this truncated model because Keras comes with a set of pre-trained models, including the VGG16 model we're interested in. Note that by setting include_top=False in the code below, we don't include any of the fully connected layers.

import h5py
model = VGG16(input_tensor=input_tensor, weights='imagenet', include_top=False)

As is clear from the table above, the model we're working with has a lot of layers. Keras has its own names for these layers. Let's make a list of these names so that we can easily refer to individual layers later.

layers = dict([(layer.name, layer.output) for layer in model.layers])
layers

We now pick the weights; these can be played around with:

content_weight = 0.025
style_weight = 5.0
total_variation_weight = 1.0

We'll now use the feature spaces provided by specific layers of our model to define these three loss functions. We begin by initializing the total loss to 0 and adding to it in stages.

loss = backend.variable(0.)
Now the content loss:

def content_loss(content, combination):
    return backend.sum(backend.square(combination - content))

layer_features = layers['block2_conv2']
content_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(content_image_features, combination_features)

And the style loss:

def gram_matrix(x):
    features = backend.batch_flatten(backend.permute_dimensions(x, (2, 0, 1)))
    gram = backend.dot(features, backend.transpose(features))
    return gram

def style_loss(style, combination):
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = height * width
    return backend.sum(backend.square(S - C)) / (4. * (channels ** 2) * (size ** 2))

feature_layers = ['block1_conv2', 'block2_conv2', 'block3_conv3', 'block4_conv3', 'block5_conv3']
for layer_name in feature_layers:
    layer_features = layers[layer_name]
    style_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_features, combination_features)
    loss += (style_weight / len(feature_layers)) * sl

Finally, the total variation loss:

def total_variation_loss(x):
    a = backend.square(x[:, :height-1, :width-1, :] - x[:, 1:, :width-1, :])
    b = backend.square(x[:, :height-1, :width-1, :] - x[:, :height-1, 1:, :])
    return backend.sum(backend.pow(a + b, 1.25))

loss += total_variation_weight * total_variation_loss(combination_image)

Now we go ahead and define the gradients needed to solve the optimization problem:

grads = backend.gradients(loss, combination_image)

We then introduce an Evaluator class that computes loss and gradients in one pass while retrieving them via two separate functions, loss and grads. This is done because scipy.optimize requires separate functions for loss and gradients, but computing them separately would be inefficient.

outputs = [loss]
outputs += grads
f_outputs = backend.function([combination_image], outputs)

def eval_loss_and_grads(x):
    x = x.reshape((1, height, width, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    grad_values = outs[1].flatten().astype('float64')
    return loss_value, grad_values

class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grads_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

Now we're finally ready to solve our optimization problem. The combination image begins its life as a random collection of (valid) pixels, and we use the L-BFGS algorithm (a quasi-Newton algorithm that's significantly quicker to converge than standard gradient descent) to iteratively improve upon it. We stop after 8 iterations because the output looks good to me and the loss stops reducing significantly.

x = np.random.uniform(0, 255, (1, height, width, 3)) - 128.

iterations = 8
for i in range(iterations):
    print('Start of iteration', i)
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(), fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    end_time = time.time()
    print('Iteration %d completed in %ds' % (i, end_time - start_time))

If you are working on a laptop like me, go grab a nice meal because this will take a while. Here is the output from the last iteration, though!
x = x.reshape((height, width, 3))
x = x[:, :, ::-1]
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
x = np.clip(x, 0, 255).astype('uint8')
Image.fromarray(x)

Neat! We can continue playing with this by changing the two images, their size, the weights of our loss functions, etc. It is important to remember that running this for just 8 iterations took my MacBook Air about 4 hours. This is a very CPU-intensive process, and so when scaled, this is a relatively expensive problem to work with.

Thanks for reading!
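As a closing note, the imsave import from the first cell never gets used above. A plausible use, saving every iteration's frame with the same de-processing as the final cell, would look roughly like this (a sketch, not from the original post):

def deprocess(img):
    # mirror the final cell: reshape, BGR -> RGB, add back the ImageNet means
    y = img.copy().reshape((height, width, 3))
    y = y[:, :, ::-1]
    y[:, :, 0] += 103.939
    y[:, :, 1] += 116.779
    y[:, :, 2] += 123.68
    return np.clip(y, 0, 255).astype('uint8')

# inside the iteration loop, after each fmin_l_bfgs_b call:
# imsave('iteration_%d.png' % i, deprocess(x))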
Detecting leakage in machine learning pipelines using NANs/complex numbers | by Abhay Pawar | Towards Data Science
Data leakage in machine learning pipelines can cause havoc for your model. In this post, I'm going to share an amazingly simple way to detect data leakage using NANs and complex numbers while treating your ML pipeline as a black box. I'll talk very briefly about what data leakage is. I'll also talk about leak-detect, a Python package I'm releasing that does all of this in one line of code.

The most precise way to describe data leakage could be this:

Data leakage in an ML model occurs when data used to create predictor variables during training time is unavailable at the time of inference.

Clearly, using data (features) unavailable at inference time during training leads to the model underperforming in production. This under-performance could mean millions of lost dollars, depending on the scale of your company!

What are some ways feature creation pipelines can introduce data leakage?

1. Using the target, or data used to create the target, for feature engineering.
2. Using data from future periods for feature engineering.

The first is generally easier to detect and keep track of, so let's try to understand the second one using an example. Consider that you are trying to predict the stock price of a company after 5 days. Our data contains the date and the daily open price.

# target (to be predicted): open price after 5 days
data['target'] = data['open_price'].shift(-5)

We want to create various hand-made features for this task. Say one feature we want is 'price on the previous day'.

# .shift(1) gives the value from the previous row
data['price_previous_day'] = data['open_price'].shift(1)

But instead of doing .shift(1), let's say by mistake we did .shift(-1) and used values from the next row of open price as a feature. We just created 'price on the next day' instead. This is a leaky feature because it uses data from a future period.

There are many best practices to follow to avoid leakage, but none of them can make you 100% sure that your pipeline is not leaky. This is where NANs and complex numbers come in! This methodology can be looked at as a unit test for data leakage.

Before getting to the methodology, let's do an analogy first :).

Let's say you have two tanks connected through a pipe which is closed. How can we detect that this pipe is indeed closed and is not leaky without inspecting the pipe? You can add color to one tank and check whether the other tank also gets that color. Just like watercolors, NANs and complex numbers are ideal for leakage detection because they have the ability to persist through any operation with real numbers. Operations like addition, subtraction, etc. between a real number and a NAN or complex number yield a NAN or complex number respectively. Of course, there are exceptions to this, and we will come to those later.
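To make that persistence property concrete, here is a minimal sketch of my own using NumPy and pandas (this is an illustration, not code from the leak-detect package):

import numpy as np
import pandas as pd

print(np.nan + 5)        # nan: the NAN survives addition
print(np.nan * 2)        # nan: ...and multiplication
print((3 + 1j) * 2 + 5)  # (11+2j): the imaginary part survives too

# Mark day D (index 2) with a NAN and rebuild both features
prices = pd.Series([10.0, 11.0, np.nan, 12.0])
print(prices.shift(1))   # non-leaky: the NAN from day D lands on day D+1 (index 3)
print(prices.shift(-1))  # leaky: the NAN from day D lands on day D-1 (index 1)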
What if we want to check if data from day D+1, D+2, etc is being used to create features for days before D? Well, we set them all to NAN and just count NANs in final features before day D. This methodology can be summarized in 4 simple steps: 1. Define an imaginary leakage partition that splits your data into the upper and lower half. In our case, we don’t want lower half data (future) to be used to create features for dates in the upper half (past). 2. Run the data creation pipeline on raw data and count the number of NANs in all features in the upper half. Our data creation pipeline here creates both leaky and non-leaky features described above. Our non-leaky feature (previous day price) has 1 NAN because our data starts from 1-Jan-2020 and leaky one (price on next day) has 0 NANs. 3. Now set all the columns in raw data used to create features to NAN below the leakage partition. 4. Run the data creation pipeline on this raw data with NANs. Count the number of NANs above partition again. For our leaky feature, there’s 1 extra NAN! The number of NANs in the upper half for non-leaky feature stayed the same but increased by 1 for the leaky feature. Where did this extra NAN come from? The only place it can come from is the lower half which is all NANs. And we just detected leakage in our leaky feature! This methodology works for any feature because it’s not dependent on the definition of a feature. The same process above can be repeated by adding an imaginary component instead of replacing with NAN and counting the number of rows with an imaginary component. Pandas and numpy already support all operations with NANs and complex numbers. So, you don’t have to change your code at all! Also, this detects leakage only in your pipelines and not leakages which might be already present in raw data. Why use both NANs and complex numbers? It’s important to run this test with both NANs and complex numbers because your pipeline could be replacing NANs with a value. Or numpy converts complex numbers to real numbers for some operations. I’ve tested it for different features that involve addition, multiplication, max, min operations and leakage detection works in all these cases. Still, it’s important to keep an eye out for special cases. I’m also releasing a python package:leak-detect to do all this with one line code! leak-detect can detect horizontal (from target to features) and vertical (from future to past) leakage occurring due to buggy code. In the example below, we are creating two leaky features which use data from the future: ‘return_2day_leaky’ and ‘open_10day_before_leaky’. Both get detected to have vertical leakage. It also prints out the number of previous rows data is leaking into. For the first feature, it’s 2 because it uses price after 2 days. The leak-detect example.ipynb notebook in the repo lists more such examples. You can install the package by doing pip install leak-detect. Here’s the github repo. Leak-detect works only on data similar to the stock price data. In your case, you could be using multiple datasets to create features or maybe even SQL. Whatever your pipeline is, the idea still remains the same. You would have to write your own custom functions which can be replicated by looking at leak-detect code here. Hope this is something that would be useful to you. Let me know if you have any thoughts in the comments! You can also reach out to me through abhayspawar on Twitter, Linkedin and Gmail. Thanks a lot for reading. Stay safe!
Hope this is something that would be useful to you. Let me know if you have any thoughts in the comments! You can also reach out to me through abhayspawar on Twitter, LinkedIn, and Gmail. Thanks a lot for reading. Stay safe!
Github Autocompletion with Machine Learning | by Oscar D. Lara Yejas | Towards Data Science
Written by Óscar D. Lara Yejas and Ankit Jha

As data scientists, one of the fields closest to our hearts is software development since, after all, we are avid users of all sorts of packages and frameworks that help us build our models.

GitHub is one of the key technologies to support the software development lifecycle, including keeping track of defects, tasks, stories, commits, and so forth. In a large development organization, there might be a number of teams (i.e., squads) with specific responsibilities, e.g., a performance squad, an installer squad, a UX squad, and a documentation squad. This introduces challenges when creating a new work item, as the user may not know which team a task or defect should be assigned to, or who should be its owner. But can Machine Learning help? The answer is yes, especially if we have some historical data from a GitHub repository.

The question we try to address in this article is: can we create an ML model to suggest the squad and owner of a GitHub work item based upon its title and other characteristics?

Throughout this article, we will use the R programming language. The following R packages are required:

suppressWarnings({
  library(tm)
  library(zoo)
  library(SnowballC)
  library(wordcloud)
  library(plotly)
  library(rword2vec)
  library(text2vec)
  library("reshape")
  library(nnet)
  library(randomForest)
})

GitHub provides different work item characteristics such as the id, title, type, severity, squad, author, state, date, etc. The title will be our main data source since it is always required and probably has the highest relevance; it's not hard to imagine that, for example, if the work item title is "Installer fails when trying to deploy Docker instance", it should probably be assigned to the installer squad. Or a title such as "Documentation is missing for feature XYZ" suggests that the work item is likely to be assigned to the documentation squad. Below is a sample of the GitHub dataset.

# Load the dataset from a CSV file
workItems <- read.csv('../github-data.csv')
# Show the dataset
show(workItems)

Note that both the squad and assignee (i.e., owner), which are the ground truths, are given in the historical data. This means we can approach this as a classification problem. Now, since the work item title is given as free text, some Natural Language Processing techniques can be used to derive features.

Natural Language Processing (NLP) basics

Let us introduce some NLP terminology:

Our dataset (a collection of work item titles) will be called the corpus.

Each work item title is a document.

The set of all distinct words in the corpus is the dictionary.

A very simple way to extract features from free text is to compute term frequency (TF), i.e., count how many times each word of the dictionary appears in each of the documents. The higher the occurrence, the more relevance such a word will have. This results in a document-term matrix (DTM), which has one row per document and as many columns as words in the dictionary. Position (i, j) of this matrix represents how many times the word j appears in title i.

You can immediately see that the resulting feature set will be very sparse (i.e., having lots of zero values), as you may have thousands of words in the dictionary but each document (i.e., title) will only contain a few dozen of them.

A common issue with TF is that words such as "the", "a", "in", etc., tend to appear very frequently yet they may not be relevant. This is why TF-IDF instead normalizes the frequency of a word in the document by dividing it by a function of its frequency in the entire corpus. In this way, the most relevant words will be the ones that appear in the document but are not common in the entire corpus.
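To make this concrete with one common form of TF-IDF (the exact weighting that text2vec applies differs slightly in its normalization): suppose the corpus has N = 100 titles and the word "installer" appears in 5 of them. Its inverse document frequency is idf = log(N / 5) = log(20), so a title containing "installer" twice gets a score of tfidf = 2 * log(20), while a word like "the" that appears in nearly every title has an idf close to log(1) = 0 and is effectively suppressed.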
Now, before applying any of the NLP techniques, some text curation is needed. This includes removing stop words (prepositions, articles, and so on), case, and punctuation, and stemming the document, which refers to reducing inflected/derived words to their base or root form. The code below performs the required text preprocessing:

preprocess <- function(text) {
  corpus <- VCorpus(VectorSource(tolower(text)))
  corpus <- tm_map(corpus, PlainTextDocument)
  corpus <- tm_map(corpus, removePunctuation)
  corpus <- tm_map(corpus, removeWords, stopwords('english'))
  corpus <- tm_map(corpus, stemDocument)
  data.frame(text=unlist(sapply(corpus, `[`, "content")), stringsAsFactors=F)
}
curatedText <- preprocess(workItems$TITLE)

The following code will create features by applying TF-IDF to our curated text. The resulting DTM will have one column per word in the dictionary.

# Create a tokenizer
it <- itoken(curatedText$text, progressbar = FALSE)
# Create a vectorizer
v <- create_vocabulary(it) %>% prune_vocabulary(doc_proportion_max = 0.1, term_count_min = 5)
vectorizer <- vocab_vectorizer(v)
# Create a document-term matrix (DTM)
dtmCorpus <- create_dtm(it, vectorizer)
tfidf <- TfIdf$new()
dtm_tfidf <- fit_transform(dtmCorpus, tfidf)
featuresTFIDF <- as.data.frame(as.matrix(dtm_tfidf))
# Add a prefix to column names since there could be names starting with numbers
colnames(featuresTFIDF) <- paste0("word_", colnames(featuresTFIDF))
# Append the squad and type to the feature set for classification
featureSet <- cbind(featuresTFIDF, "SQUAD"=workItems$SQUAD, "TYPE"=workItemsCurated$TYPE)

Now we have a feature set where each row is a work item and each column holds the TF-IDF score of a dictionary word. We also have the type of work item (i.e., either a task or a defect) and the ground truth (i.e., the squad).

Next, we will create splits for the training and testing sets:

random <- runif(nrow(featureSet))
train <- featureSet[random > 0.2, ]
trainRaw <- workItemsFiltered[random > 0.2, ]
test <- featureSet[random < 0.2, ]
testRaw <- workItemsFiltered[random < 0.2, ]

Random Forests

R offers the randomForest package, which allows us to train a Random Forest classifier as follows:

# Train a Random Forest model
> model <- randomForest(SQUAD ~ ., train, ntree = 500)
# Compute predictions
> predictions <- predict(model, test)
# Compute overall accuracy
> sum(predictions == test$SQUAD) / length(predictions)
[1] 0.59375

Note that accuracy is below 60%, which is, for most purposes, pretty bad. However, predicting the exact squad a work item should be assigned to, based upon its title only, is a very challenging task, even for humans. Therefore, let's rather provide the user with two or three suggestions of the most likely squads for a given work item.

To this end, let us use the probabilities of each individual class, which are provided by randomForest. All we need to do is rank these probabilities and pick the classes with the highest values.
The following code does exactly so:

# A function for ranking numbers
ranks <- function(d) { data.frame(t(apply(-d, 1, rank, ties.method='min'))) }
# Score the Random Forest model and return probabilities
rfProbs <- predict(model, test, type="prob")
# Compute probability ranks
probRanks <- ranks(rfProbs)
cbind("Title" = testRaw$TITLE, probRanks, "SQUAD" = testRaw$SQUAD, "PRED" = predictions)
rfSquadsPreds <- as.data.frame(t(apply(probRanks, MARGIN=1, FUN=function(x) names(head(sort(x, decreasing=F), 3)))))
# Attach the ground truth for the accuracy checks below
rfSquadsPreds$SQUAD <- testRaw$SQUAD
# Compute the accuracy of either of the two recommendations being correct
> sum(rfSquadsPreds$V1 == rfSquadsPreds$SQUAD | rfSquadsPreds$V2 == rfSquadsPreds$SQUAD) / nrow(rfSquadsPreds)
[1] 0.76
# Compute the accuracy of any of the three recommendations being correct
> sum(rfSquadsPreds$V1 == rfSquadsPreds$SQUAD | rfSquadsPreds$V2 == rfSquadsPreds$SQUAD | rfSquadsPreds$V3 == rfSquadsPreds$SQUAD) / nrow(rfSquadsPreds)
[1] 0.87

Note that with two suggestions, the probability of at least one of them being correct is 76%, while with three this probability becomes 87%, which makes the model much more useful.

Other algorithms

We also explored Logistic Regression, XGBoost, GloVe, and RNNs/LSTMs. However, the results were not significantly better than for Random Forests.

Feature importance

To put this model in production, we first need to export (1) the model itself and (2) the TF-IDF transformations. The former will be used for scoring, whereas the latter is needed to extract the same features (i.e., words) that were used for training.

Exporting the assets

# Save TF-IDF transformations
saveRDS(vectorizer, "../docker/R/vectorizer.rds")
saveRDS(dtmCorpus, "../docker/R/dtmCorpus_training_data.rds")
# Save the model
saveRDS(model, "squad_prediction_rf.rds")

Docker and plumber

Docker can be a very useful tool to turn our assets into a containerized application. This will help us ship, build, and run the application anywhere.

As with most software services, an API endpoint is the best way to consume a predictive model. We explored options like OpenCPU and plumber. Plumber seemed simpler yet quite powerful for reading CSV files and running analytics smoothly, hence it was our choice.

Plumber's code style (i.e., using decorators) was also more intuitive, which allowed for an easier time managing endpoint URLs, HTTP headers, and response payloads.
A sample Dockerfile is below:

FROM trestletech/plumber
# Install required system packages
RUN apt-get install -y libxml2-dev
# Install the required R packages
RUN R -e 'install.packages(c("tm","text2vec","plotly","randomForest","SnowballC"))'
# Copy model and scoring script
RUN mkdir /model
WORKDIR /model
# plumb and run server
EXPOSE 8000
ENTRYPOINT ["R", "-e", \
"pr <- plumber::plumb('/model/squad_prediction_score.R'); pr$run(host='0.0.0.0', port=8000)"]

A snippet of the scoring file squad_prediction_score.R is below:

x <- c("tm","text2vec","plotly","randomForest","SnowballC")
lapply(x, require, character.only = TRUE)
# Load the TF-IDF assets
vectorizer = readRDS("/model/vectorizer.rds")
dtmCorpus_training_data = readRDS("/model/dtmCorpus_training_data.rds")
tfidf = TfIdf$new()
tfidf$fit_transform(dtmCorpus_training_data)
# Load the model
squad_prediction_rf <- readRDS("/model/squad_prediction_rf.rds")
#* @param df data frame of variables
#* @serializer unboxedJSON
#* @post /score
score <- function(req, df) {
  curatedText <- preprocess(df$TITLE)
  df$CURATED_TITLE <- curatedText$text
  featureSet <- feature_extraction(df)
  rfProbs <- predict(squad_prediction_rf, featureSet, type="prob")
  probRanks <- ranks(rfProbs)
  rfSquadsPreds <- as.data.frame(t(apply(probRanks, MARGIN=1, FUN=function(x) names(head(sort(x, decreasing=F), 3)))))
  result <- list("1" = rfSquadsPreds$V1, "2" = rfSquadsPreds$V2, "3" = rfSquadsPreds$V3)
  result
}
#* @param df data frame of variables
#* @post /train
train <- function(req, df) {
  ...
}
preprocess <- function(text) {
  ...
}
feature_extraction <- function(df) {
  ...
}

Now, to run the model against your own repository, you just need to build your own Docker image and hit the endpoints:

docker build -t squad_pred_image .
docker run --rm -p 8000:8000 squad_pred_image

Once the Docker image is ready, a sample API call would look like this:

curl -X POST \
  http://localhost:8000/score \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "df": [{
      "ID": "4808",
      "TITLE": "Data virtualization keeps running out of memory",
      "TYPE": "type: Defect"
    }]
  }'

A sample API call output is below:

{
  "1": "squad.core",
  "2": "squad.performance",
  "3": "squad.dv"
}
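The endpoint is language-agnostic, of course; as a small illustration of my own (mirroring the curl payload above), the same call can be made from Python with the requests library:

import requests

# Same payload shape as in the curl example above
payload = {
    "df": [{
        "ID": "4808",
        "TITLE": "Data virtualization keeps running out of memory",
        "TYPE": "type: Defect"
    }]
}

response = requests.post("http://localhost:8000/score", json=payload)
response.raise_for_status()
print(response.json())  # e.g. {"1": "squad.core", "2": "squad.performance", "3": "squad.dv"}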
Would you like to help your development organization be more productive with GitHub? Give our code a try with your own dataset. Let us know your results.

Óscar D. Lara Yejas is a Senior Data Scientist and one of the founding members of the IBM Machine Learning Hub. He works closely with some of the largest enterprises in the world on applying ML to their specific use cases, including healthcare, financial, manufacturing, government, and retail. He has also contributed to the IBM Big Data portfolio, particularly in the large-scale Machine Learning area, being an Apache Spark and Apache SystemML contributor.

Óscar holds a Ph.D. in Computer Science and Engineering from the University of South Florida. He is the author of the book "Human Activity Recognition: Using Wearable Sensors and Smartphones", and of a number of research/technical papers on Big Data, Machine Learning, Human-centric sensing, and Combinatorial Optimization.

Ankit Jha is a Data Scientist working on the IBM Cloud Private for Data platform. He is also part of the platform's serviceability team and works on log collection and analysis using ML techniques. Ankit is a seasoned software professional who also holds a Master's in Analytics from the University of Cincinnati.
How to create pill buttons with CSS?
Following is the code to create pill buttons −

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<style>
button {
   font-family: "Lucida Sans", "Lucida Sans Regular", "Lucida Grande",
"Lucida Sans Unicode", Geneva, Verdana, sans-serif;
   background-color: rgb(193, 255, 236);
   border: none;
   color: rgb(0, 0, 0);
   padding: 10px 20px;
   text-align: center;
   text-decoration: none;
   display: inline-block;
   margin: 4px 2px;
   cursor: pointer;
   font-size: 30px;
   border-radius: 32px;
}
button:hover {
   background-color: #9affe1;
}
</style>
</head>
<body>
<h1>Pill Buttons Example</h1>
<button>Button 1</button>
<button>Button 2</button>
<div></div>
<button>Button 3</button>
<button>Button 4</button>
</body>
</html>

The above code will produce the following output −
Sum of Subarray Minimums in C++
Suppose we have an array of integers A. We have to find the sum of min(B), where B ranges over every (contiguous) subarray of A. Since the answer may be very large, return it modulo 10^9 + 7. So if the input is [3,1,2,4], then the output will be 17, because the subarrays are [3], [1], [2], [4], [3,1], [1,2], [2,4], [3,1,2], [1,2,4], [3,1,2,4], the minimums are [3,1,2,4,1,1,2,1,1,1], and their sum is 17.

To solve this, we will follow these steps −

m := 1 x 10^9 + 7

Define two methods: add(a, b) returns (a mod m + b mod m) mod m, and mul(a, b) returns ((a mod m) * (b mod m)) mod m

The main method will take the array A, define a stack st, and set n := size of array A

Define two arrays: left of size n filled with -1, and right of size n filled with n

set ans := 0

for i in range 0 to n - 1
   while st is not empty and A[stack top] >= A[i], delete from st
   if st is not empty, then set left[i] := top of st
   insert i into st

while st is not empty, delete from st

for i in range n - 1 down to 0
   while st is not empty and A[stack top] > A[i], delete from st
   if st is not empty, then set right[i] := top of st
   insert i into st

for i in range 0 to n - 1
   leftBound := i - (left[i] + 1), rightBound := (right[i] - 1) - i
   contri := 1 + leftBound + rightBound + (leftBound * rightBound)
   ans := add(ans, mul(contri, A[i]))

return ans

Let us see the following implementation to get a better understanding −
Let us see the following implementation to get a better understanding −

 Live Demo

#include <bits/stdc++.h>
using namespace std;
typedef long long int lli;
const lli MOD = 1e9 + 7;
class Solution {
public:
   lli add(lli a, lli b){
      return (a % MOD + b % MOD) % MOD;
   }
   lli mul(lli a, lli b){
      return (a % MOD * b % MOD) % MOD;
   }
   int sumSubarrayMins(vector<int>& A) {
      stack <int> st;
      int n = A.size();
      vector <int> left(n, -1);
      vector <int> right(n, n);
      int ans = 0;
      // first pass: left[i] is the previous index with a strictly smaller value (pop on >=)
      for(int i = 0; i < n; i++){
         while(!st.empty() && A[st.top()] >= A[i]){
            st.pop();
         }
         if(!st.empty()) left[i] = st.top();
         st.push(i);
      }
      while(!st.empty()) st.pop();
      // second pass: right[i] is the next index with a smaller-or-equal value (pop on strict >)
      for(int i = n - 1; i >= 0; i--){
         while(!st.empty() && A[st.top()] > A[i]){
            st.pop();
         }
         if(!st.empty()) right[i] = st.top();
         st.push(i);
      }
      for(int i = 0; i < n; i++){
         int leftBound = i - (left[i] + 1);
         int rightBound = (right[i] - 1) - i;
         int contri = 1 + leftBound + rightBound + (leftBound * rightBound);
         ans = add(ans, mul(contri, A[i]));
      }
      return ans;
   }
};
int main(){
   vector<int> v = {3,1,2,4};
   Solution ob;
   cout << (ob.sumSubarrayMins(v));
}

[3,1,2,4]

17
C++ Memory Library - make_shared
It constructs an object of type T, passing args to its constructor, and returns an object of type shared_ptr<T> that owns and stores a pointer to it.

Following is the declaration for std::make_shared.

template <class T, class... Args>
   shared_ptr<T> make_shared (Args&&... args);

args − Zero or more arguments to forward to the constructor of T.

It returns a shared_ptr object that owns the newly constructed object.

Exceptions − It may throw std::bad_alloc, or any exception thrown by the constructor of T; in that case no memory is leaked.

The example below demonstrates std::make_shared.

#include <iostream>
#include <memory>

int main () {

   std::shared_ptr<int> foo = std::make_shared<int> (100);
   std::shared_ptr<int> foo2 (new int(100));

   auto bar = std::make_shared<int> (200);

   auto baz = std::make_shared<std::pair<int,int>> (300,400);

   std::cout << "*foo: " << *foo << '\n';
   std::cout << "*bar: " << *bar << '\n';
   std::cout << "*baz: " << baz->first << ' ' << baz->second << '\n';

   return 0;
}

Let us compile and run the above program, this will produce the following result −

*foo: 100
*bar: 200
*baz: 300 400
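One practical reason to prefer make_shared over constructing a shared_ptr from a raw new expression is that it typically fuses the object and its control block into a single allocation. A minimal sketch that makes this visible by counting global allocations (the exact counts are an implementation detail, but mainstream standard libraries report two allocations for the raw-new form and one for make_shared):

#include <cstdio>
#include <cstdlib>
#include <memory>
#include <new>

static int allocations = 0;

// Replace the global allocator so every dynamic allocation is counted.
void* operator new(std::size_t size) {
   ++allocations;
   if (void* p = std::malloc(size)) return p;
   throw std::bad_alloc();
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
   allocations = 0;
   std::shared_ptr<int> a(new int(1));   // one allocation for the int, one for the control block
   int with_new = allocations;

   allocations = 0;
   auto b = std::make_shared<int>(1);    // object and control block share one allocation
   int with_make_shared = allocations;

   std::printf("new: %d, make_shared: %d\n", with_new, with_make_shared);
   return 0;
}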
While chaining, can we throw unchecked exception from a checked exception in java?
When an exception is caught in a catch block, you can re-throw it using the throw keyword (which is used to throw exception objects).

While re-throwing exceptions you can throw the same exception as it is, without adjusting it −

try {
   int result = (arr[a])/(arr[b]);
   System.out.println("Result of "+arr[a]+"/"+arr[b]+": "+result);
}catch(ArithmeticException e) {
   throw e;
}

Or, wrap it within a new exception and throw it. When you wrap a caught exception within another exception and throw it, it is known as exception chaining or exception wrapping. By doing this you can adjust your exception, throwing a higher-level exception while maintaining the abstraction.

try {
   int result = (arr[a])/(arr[b]);
   System.out.println("Result of "+arr[a]+"/"+arr[b]+": "+result);
}catch(ArrayIndexOutOfBoundsException e) {
   throw new IndexOutOfBoundsException();
}

Yes, we can catch a run time (unchecked) exception and, in the catch block, wrap it within a checked exception and re-throw it. But since we are re-throwing a checked exception, we need to either handle it with an explicit try-catch pair or declare it using the throws clause.

In the following Java example we have created a user defined (checked) exception named SampleException.

We are displaying an integer array of 6 elements and letting the user select the positions of two values and dividing the selected numbers. While choosing the positions, the user may use an index value beyond the length of the array, which causes an ArrayIndexOutOfBoundsException, an unchecked exception.

In the catch block we re-throw this object by wrapping it within the above created SampleException, which is checked.

import java.util.Arrays;
import java.util.Scanner;
class SampleException extends Exception {
   SampleException(String msg){
      super(msg);
   }
}
public class Rethrow {
   public void demoMethod() {
      Scanner sc = new Scanner(System.in);
      int[] arr = {10, 20, 30, 2, 5, 8};
      System.out.println("Array: "+Arrays.toString(arr));
      System.out.println("Choose numerator and denominator(not 0) from this array (enter positions 0 to 5)");
      int a = sc.nextInt();
      int b = sc.nextInt();
      try {
         int result = (arr[a])/(arr[b]);
         System.out.println("Result of "+arr[a]+"/"+arr[b]+": "+result);
      }catch(ArrayIndexOutOfBoundsException e) {
         try {
            throw new SampleException("This is a checked exception");
         } catch (SampleException e1) {
            System.out.println("Checked exception in the catch block");
         }
      }
   }
   public static void main(String [] args) {
      new Rethrow().demoMethod();
   }
}

Array: [10, 20, 30, 2, 5, 8]
Choose numerator and denominator(not 0) from this array (enter positions 0 to 5)
25
24
Checked exception in the catch block
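Note that the SampleException thrown above discards the original exception, so the resulting stack trace no longer shows what actually went wrong. To preserve the whole chain, you can record the original exception as the cause. A small sketch of the catch block using initCause() (a standard method of Throwable, so the SampleException class needs no changes):

catch(ArrayIndexOutOfBoundsException e) {
   SampleException wrapped = new SampleException("This is a checked exception");
   wrapped.initCause(e); // the original exception now shows up as "Caused by:" in the stack trace
   try {
      throw wrapped;
   } catch (SampleException e1) {
      e1.printStackTrace();
   }
}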
The Upgraded “Top N” Analysis You Haven’t Seen Yet with Pandas | by Byron Dolon | Towards Data Science
Did you know that you can have a Top N analysis based on more than one column in Pandas?

A Top N analysis can be useful to select a subset of your data matching a specific condition. For example, what if you owned a restaurant and wanted to look at which customers contributed most to your overall sales? The easiest way to do this would be to look at the total sales for all your customers, then sort that list from highest to lowest.

Another interesting subset for you to look at might be the customers who had the lowest (or negative) profit contribution. You could then accomplish this in a similar fashion, getting a list of all customers by their profit contribution and then taking only the lowest members.

But what if you wanted to find out if there were customers that appeared on both lists?

That could help you identify areas in which you’re actually losing money, even if it looks like the total sales value is very high. For example, if you had a dish that had a very low-profit margin, repeated orders of this dish alone might not be beneficial for your bottom line.

Let’s take a look at how you can combine the built-in Pandas functions to do this kind of analysis!

The data used in this piece is sourced from Yahoo Finance. We’ll be using a subset of Tesla stock price data. Run the code below if you want to follow along. (And if you’re curious as to the function I used to get the data, scroll to the very bottom and click on the first link.)

import pandas as pd
df = pd.read_html("https://finance.yahoo.com/quote/TSLA/history?period1=1546300800&period2=1550275200&interval=1d&filter=history&frequency=1d")[0]
df = df.head(30)
df = df.astype({"Open":'float', "High":'float', "Low":'float', "Close*":'float', "Adj Close**":'float', "Volume":'float'})

To demonstrate how we can combine the Top N and Bottom N analysis, we’re going to answer the following question:

Which days had the highest increase in stock price while also having the lowest Open price in the data set?

First, we’ll need to calculate how much the stock price changed during the day. This can be achieved with a simple calculation:

df['Gain'] = df['Close*'] - df['Open']

We’ve stored the difference between the “Close*” and “Open” columns in a new column called “Gain”. As you can see in the table above, not all column values are positive, as there were some days where the stock price decreased.

Next, we’ll be creating two new DataFrames: one with the top 10 highest “Gain” values and one with the top 10 lowest “Open” values.

Here, we’ll be using the nlargest method in Pandas. This method accepts the number of elements you want to keep, the column you want to order the DataFrame by, and which duplicate values (if any) should appear in the outputted DataFrame. By default (keep='first'), when duplicate values tie at the cutoff, nlargest prioritizes the first occurrences and excludes the rest from the returned DataFrame.

This method will return the same results as df.sort_values(columns, ascending=False).head(n). That code is very easy to understand and will also work, but according to the documentation, the nlargest method is more performant.

The code to get the 10 rows with the highest “Gain” values is as follows:

df_top = df.nlargest(10, 'Gain')

The returned DataFrame now gives us only the values in the original DataFrame with the highest 10 “Gain” values. This new DataFrame is also already sorted in descending order.

Next, we’ll use the nsmallest method on the DataFrame to get the rows with the lowest “Open” values.
This method works exactly like the previous one, except it sorts and slices the values in ascending order.

The code to achieve this is as follows:

df_bottom = df.nsmallest(10, 'Open')

We’re now ready to combine the two DataFrames to create a combined set. I’m borrowing this term from a built-in Tableau function, but all it refers to is a subset of the data that matches multiple conditions based on two or more columns. In this case, we’re looking for the data that exists only in the top 10 of “Gain” and the bottom 10 of “Open”.

To get our combined set, there are two main steps:

1. Concatenate the top N and bottom N DataFrames
2. Remove all rows except the duplicates

The code to achieve this is as follows:

df_combined = pd.concat([df_top, df_bottom])
df_combined['Duplicate'] = df_combined.duplicated(subset=['Date'])
df_combined = df_combined.loc[df_combined['Duplicate']==True]

First, we simply call pd.concat and stick the two DataFrames together. Since the top N and bottom N DataFrames come from the exact same source, we don’t need to worry about renaming any columns or specifying an index.

Next, to create the “Duplicate” column, we make use of the duplicated method. This function returns a boolean Series, marking a row as “True” if it’s a duplicate and “False” otherwise. You can call this on a DataFrame and specify which column to search for duplicates in by writing the column name as an argument (in this case subset=['Date']). For demonstration, I created a new column “Duplicate” to store the new boolean values in.

I won’t go into how the loc[] function works, but if you haven’t used it before, quickly skim through this introduction so you understand how you can use it to filter your DataFrame in various ways. All we’re doing with it here is taking the values in the “Duplicate” column that are True, because those are the ones that appear in both DataFrames.

We don’t even really need to create a new column to mark the duplicate values. A slightly condensed (and equivalent) version of the above code would look like this:

combined = pd.concat([df_top, df_bottom])
combined = combined.loc[combined.duplicated()==True]

Voila! Now we can see the rows in which there was a high “Gain” during one of the days with the lowest “Open” in the whole dataset.

And that’s all!

I hope you found this quick look at the Top N (and Bottom N) analysis useful. Combining multiple conditions can allow you to filter and work with your data in new ways, which can help you extract valuable information from your dataset.

Good luck with your Pandas work!

More Pandas stuff by me:
- 2 Easy Ways to Get Tables From a Website with Pandas
- How to Quickly Create and Unpack Lists with Pandas
- Top 4 Repositories on GitHub to Learn Pandas
- A Quick Way to Reformat Columns in a Pandas DataFrame
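As a side note, since both subsets share the “Date” column, an inner merge gives the same intersection in a single step (a small sketch reusing the df_top and df_bottom DataFrames from above):

# An inner merge keeps only the dates present in both subsets
df_combined = df_top.merge(df_bottom[['Date']], on='Date')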
C# Program to perform all Basic Arithmetic Operations
Basic arithmetic operators in C# include addition (+), subtraction (-), multiplication (*), division (/), modulus (%), increment (++) and decrement (--).

To add, use the Addition Operator −

num1 + num2;

In the same way, it works for subtraction, multiplication, division, and the other operators.

Let us see a complete example to learn how to implement arithmetic operators in C#.

Live Demo

using System;
namespace Sample {
   class Demo {
      static void Main(string[] args) {
         int num1 = 50;
         int num2 = 25;
         int result;
         result = num1 + num2;
         Console.WriteLine("Value is {0}", result);
         result = num1 - num2;
         Console.WriteLine("Value is {0}", result);
         result = num1 * num2;
         Console.WriteLine("Value is {0}", result);
         result = num1 / num2;
         Console.WriteLine("Value is {0}", result);
         result = num1 % num2;
         Console.WriteLine("Value is {0}", result);
         result = num1++;
         Console.WriteLine("Value is {0}", result);
         result = num1--;
         Console.WriteLine("Value is {0}", result);
         Console.ReadLine();
      }
   }
}

Value is 75
Value is 25
Value is 1250
Value is 2
Value is 0
Value is 50
Value is 51
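The last two assignments may look surprising: num1++ and num1-- are the postfix forms, which return the old value first and change the variable afterwards. A small illustrative sketch of the difference between the prefix and postfix forms:

using System;
class PrefixPostfix {
   static void Main() {
      int n = 50;
      Console.WriteLine(n++); // prints 50: postfix returns the old value, then n becomes 51
      Console.WriteLine(n);   // prints 51
      Console.WriteLine(++n); // prints 52: prefix increments first, then returns the new value
   }
}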
A Minimal Working Example for Deep Q-Learning in TensorFlow 2.0 | by Wouter van Heeswijk, PhD | Towards Data Science
Deep Q-learning is a staple in the arsenal of any Reinforcement Learning (RL) practitioner. It neatly circumvents some shortcomings of traditional Q-learning and leverages the power of neural networks for complex value function approximations.

This article shows how to implement and train a deep Q-network in TensorFlow 2.0, illustrated using the multi-armed bandit problem (a terminating one-shot game). Some extensions towards temporal difference learning are provided as well. I take the ‘minimal’ in minimal working example quite literally though, so the focus is really on a first-ever implementation of deep Q-learning.

Before diving into deep learning, I assume you are already familiar with both vanilla Q-learning and artificial neural networks. Without those basics, trying your hand at deep Q-learning will likely be a frustrating experience. The following update mechanism should hold no secrets to you:

Q(s,a) ← Q(s,a) + α · [r + γ · max_a' Q(s',a') − Q(s,a)]

Traditional Q-learning explicitly stores a Q-value (essentially an estimate for the cumulative discounted reward) for each state-action pair in a lookup table. When taking an action in a particular state, the observed reward improves the value estimate. The size of the lookup table is |S|×|A|, where S is the state space and A the action space. Q-learning tends to work well for toy-sized problems, but falls apart for larger ones. Typically, it is not possible to observe anywhere near all state-action pairs.

In contrast to vanilla Q-learning, deep Q-learning takes the state as input, passes it through a number of neural network layers, and outputs the Q-value per action. The deep Q-network can be viewed as a function f:s→[Q(s,a)]_∀ a ∈ A. By adopting a single representation for all states, deep Q-learning is able to handle large state spaces. It presupposes a reasonable number of actions though, as each action is represented by a node in the output layer (size |A|).

After passing through the network and obtaining Q-values for all actions, we continue as usual. To balance exploration and exploitation, we utilize a basic ε-greedy policy. With probability 1-ε we select the best action (an argmax operation on the output layer), with probability ε we sample a random action.

Defining a Q-network in TensorFlow is not hard. The input dimension is equal to the length of the state vector, the output dimension is equal to the number of actions (if the set of feasible actions is state-dependent, a mask can be applied). A Q-network is a fairly straightforward neural network (see the sketch at the end of this section).

Weight updates are largely handled for you as well, yet you must provide a loss value to the optimizer. The loss represents the error between observation and expectation; a differentiable loss function is needed to properly perform the update. For deep Q-learning, the loss function is typically a simple mean squared error. This is actually a built-in loss function (loss=‘mse’) in TensorFlow, but we will use the GradientTape functionality here, tracing all your operations to compute and apply the gradients[2]. It offers more flexibility and stays close to the underlying mathematics, which is often beneficial when moving towards more complicated RL applications.

The mean-squared loss function (observe the similarity with the update mechanism mentioned earlier) is denoted as follows:

L = (r + γ · max_a' Q(s',a') − Q(s,a))²

The generic TensorFlow implementation of the deep Q-learning approach is as follows (the GradientTape is doing its magic underwater):
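A minimal sketch of these pieces in TensorFlow 2 might look as follows; the layer sizes, helper names, and the terminal-case loss are illustrative assumptions rather than a canonical implementation:

import numpy as np
import tensorflow as tf

# Q-network: state vector in, one Q-value per action out (layer sizes are illustrative)
def build_q_network(state_dim, num_actions):
   return tf.keras.Sequential([
      tf.keras.layers.Dense(10, activation="relu", input_shape=(state_dim,)),
      tf.keras.layers.Dense(10, activation="relu"),
      tf.keras.layers.Dense(num_actions, activation="linear"),
   ])

# Basic epsilon-greedy policy: explore with probability epsilon, exploit otherwise
def select_action(q_network, state, epsilon, num_actions):
   if np.random.rand() < epsilon:
      return np.random.randint(num_actions)
   q_values = q_network(state[None, :])       # add a batch dimension
   return int(tf.argmax(q_values[0]))

# One update for the terminal (one-shot) case: the observed reward is the target for Q(s,a)
def train_step(q_network, optimizer, state, action, reward):
   with tf.GradientTape() as tape:
      q_values = q_network(state[None, :])
      q_value = q_values[0, action]
      loss = (reward - q_value) ** 2          # mean squared error
   grads = tape.gradient(loss, q_network.trainable_variables)
   optimizer.apply_gradients(zip(grads, q_network.trainable_variables))

With an Adam optimizer and a constant input (the bandit has a single state), repeatedly calling select_action and train_step reproduces the training loop described below.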
The multi-armed bandit problem is a classic in RL[3]. It defines a number of slot machines: every machine i has a mean payoff μ_i and a standard deviation σ_i. Every decision moment, you play a machine and observe the resulting reward. When played often enough, you can estimate the mean reward of each machine. It goes without saying that the best policy is playing the slot machine with the highest average payoff.

Let’s put our Q-learning network example into action (full GitHub code here). We define a straightforward neural network with three fully connected 10-node hidden layers. As input we use a tensor with value 1 (representing a fixed state), and four nodes (representing the Q-value of each machine) as output. The network weights are initialized such that all Q-values are 0 initially. For the weight updates, we use the Adam optimizer with a 0.001 learning rate.

Some illustrative results (after 10,000 iterations) are shown in the figure below. The tradeoff between exploration and exploitation can be clearly observed, especially when not exploring at all. Note that the results are not overly accurate; vanilla Q-learning actually performs better for problems like this.

The multi-armed bandit definitely is a minimal working example, but it only treats the terminal case where we don’t look beyond the direct reward. Let’s see how we handle the non-terminal case as well. In this case, we deploy temporal difference learning: we use Q(s’,a’) to update Q(s,a).

Obtaining the Q-value corresponding to the next state s’ is not hard per se. You simply insert s’ into the Q-network, and out rolls the set of Q-values. Pick the maximum (always, as this is Q-learning rather than SARSA) and use it to compute the loss function:

next_q_values = tf.stop_gradient(q_network(next_state))
next_q_value = np.max(next_q_values[0])

Note that the Q-network is called within a stop_gradient operator[4]. Remember that the GradientTape tracks all operations, and as such would also perform (nonsensical) updates using the next_state input. With the stop_gradient operator, we safely utilize the Q-values corresponding to next state s’, without worrying about erroneous updates!

Although the method outlined above can in principle be directly applied to any RL problem, you’ll often find performance quite disappointing. Even for basic problems, don’t be surprised if your vanilla Q-learning implementation outperforms your fancy deep Q-network. In general, neural networks need many observations to learn something, and some level of detail is inherently lost by training a single network for all states that may be encountered.

Aside from good neural network practices (e.g., normalization, one-hot encoding, proper weight initialization), the following adjustments may strongly improve the quality of your algorithm[5] (a sketch combining these ideas follows the list):

Mini batches: Rather than updating the network after every single observation, update the Q-network using batches of observations. Stability is often improved by training for multiple observations. The losses per observation are simply averaged. The tf.one_hot mask can be used to update for multiple actions.

Experience replay: Build a buffer of prior observations (stored as s,a,r,s’ tuples), sample one (or more, when using mini batches) from the buffer, and plug into the Q-network. The main benefit of this approach is that it removes correlations in the data.

Target network: Create a copy of the neural network that is updated only periodically (say every 100 updates). The target network is used to compute Q(s’,a’), whereas the original network is used to determine Q(s,a). This procedure typically produces more stable updates.
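A hedged sketch of how these three ideas combine, reusing the q_network and optimizer from the earlier sketch (buffer size, batch size, and gamma are illustrative assumptions):

import random
from collections import deque

replay_buffer = deque(maxlen=10000)                  # stores (s, a, r, s') tuples
target_network = tf.keras.models.clone_model(q_network)
target_network.set_weights(q_network.get_weights())  # re-sync periodically, e.g. every 100 updates

def replay_train_step(batch_size=32, gamma=0.99):
   if len(replay_buffer) < batch_size:
      return
   batch = random.sample(replay_buffer, batch_size)
   states, actions, rewards, next_states = map(np.array, zip(*batch))
   # The target network provides Q(s',a'); it is evaluated outside the tape,
   # so no gradients flow through it (the same effect as stop_gradient)
   next_q = tf.reduce_max(target_network(next_states), axis=1)
   targets = rewards.astype(np.float32) + gamma * next_q
   with tf.GradientTape() as tape:
      q_values = q_network(states)
      mask = tf.one_hot(actions, q_values.shape[-1])     # select the taken actions
      q_taken = tf.reduce_sum(q_values * mask, axis=1)
      loss = tf.reduce_mean((targets - q_taken) ** 2)    # averaged per-observation losses
   grads = tape.gradient(loss, q_network.trainable_variables)
   optimizer.apply_gradients(zip(grads, q_network.trainable_variables))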
A deep Q-network is a straightforward neural network, taking the state vector as input and outputting Q-values corresponding to each action. By using a single representation for all states, it can handle much larger state spaces than vanilla Q-learning (which uses a lookup table).

TensorFlow’s GradientTape can be used to update the Q-network. The corresponding loss function is a mean squared error that is close to the original Q-learning update mechanism.

In temporal difference learning, the estimate for Q(s,a) is updated based on Q(s’,a’). The stop_gradient operator ensures that the gradients corresponding to Q(s’,a’) are ignored.

Deep Q-learning comes with some implementation challenges. Don’t be alarmed if vanilla Q-learning actually performs better, especially for toy-sized problems.

The GitHub code for the minimal working example using multi-armed bandits can be found here.

Want to stabilize your deep Q-learning algorithm? The following article might interest you:

towardsdatascience.com

Looking to implement policy gradient methods instead? Please check my articles with minimal working examples for the continuous and discrete case:

towardsdatascience.com

towardsdatascience.com

[1] Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.

[2] Rosebrock, A. (2020). Using TensorFlow and GradientTape to train a Keras model. https://www.tensorflow.org/api_docs/python/tf/GradientTape

[3] Ryzhov, I. O., Frazier, P. I., and Powell, W. B. (2010). On the robustness of a one-period look-ahead policy in multi-armed bandit problems. Procedia Computer Science, 1(1):1635-1644.

[4] TensorFlow (2021). Obtained 26 July 2021 from https://www.tensorflow.org/api_docs/python/tf/stop_gradient

[5] Wikipedia Contributors (2021). Deep Q-learning. Obtained 26 July 2021 from https://en.wikipedia.org/wiki/Q-learning#Deep_Q-learning
How to import local json file data to my JavaScript variable?
We have an employees.json file in a directory; within the same directory we have a js file, in which we want to import the content of the json file.

The content of employees.json −

employees.json

{
   "Employees" : [
      {
         "userId":"ravjy", "jobTitleName":"Developer", "firstName":"Ran", "lastName":"Vijay",
         "preferredFullName":"Ran Vijay", "employeeCode":"H9", "region":"DL", "phoneNumber":"34567689",
         "emailAddress":"ranvijay.k.ran@gmail.com"
      },
      {
         "userId":"mrvjy", "jobTitleName":"Developer", "firstName":"Murli", "lastName":"Vijay",
         "preferredFullName":"Murli Vijay", "employeeCode":"A2", "region":"MU",
         "phoneNumber":"6543565", "emailAddress":"murli@vijay.com"
      }
   ]
}

We can use either of the two ways to access the json file −

Code to access employees.json using the require module −

const data = require('./employees.json');
console.log(data);

Code to access employees.json using the fetch function −

fetch("./employees.json")
.then(response => {
   return response.json();
})
.then(data => console.log(data));

Note − While the first approach is better suited for the Node environment, the second only works in the web environment, because the fetch API is only accessible there.

After running any of the above using the require or fetch function, the console output is as follows −

{
   Employees: [
      {
         userId: 'ravjy',
         jobTitleName: 'Developer',
         firstName: 'Ran',
         lastName: 'Vijay',
         preferredFullName: 'Ran Vijay',
         employeeCode: 'H9',
         region: 'DL',
         phoneNumber: '34567689',
         emailAddress: 'ranvijay.k.ran@gmail.com'
      },
      {
         userId: 'mrvjy',
         jobTitleName: 'Developer',
         firstName: 'Murli',
         lastName: 'Vijay',
         preferredFullName: 'Murli Vijay',
         employeeCode: 'A2',
         region: 'MU',
         phoneNumber: '6543565',
         emailAddress: 'murli@vijay.com'
      }
   ]
}
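If you prefer to control the parsing step yourself in Node, you can also read the file with the fs module and parse it manually (a small sketch, assuming employees.json sits next to the script):

const fs = require('fs');

// Read the raw file contents and parse them into a JavaScript object
const data = JSON.parse(fs.readFileSync('./employees.json', 'utf8'));
console.log(data.Employees[0].preferredFullName); // "Ran Vijay"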
Calendar.get() Method in Java - GeeksforGeeks
02 Mar, 2018

The java.util.Calendar.get() method is a method of the java.util.Calendar class. The Calendar class provides some methods for implementing a concrete calendar system outside the package. Some examples of Calendar fields are: YEAR, DATE, MONTH, DAY_OF_WEEK, DAY_OF_YEAR, WEEK_OF_YEAR, MINUTE, SECOND, HOUR, AM_PM, WEEK_OF_MONTH, DAY_OF_WEEK_IN_MONTH, HOUR_OF_DAY.

Syntax :

public int get(int field)

where field represents the given calendar field, and the function returns the value of the given field.

Exception : If the specified field is out of range, then ArrayIndexOutOfBoundsException is thrown.

Applications :

Example 1: To fetch Date, Month, Year

// Java code to demonstrate the calendar.get() function
import java.util.*;

class GFG {
   // Driver code
   public static void main(String[] args)
   {
      // creating a calendar
      Calendar c = Calendar.getInstance();

      // get the value of DATE field
      System.out.println("Day : " + c.get(Calendar.DATE));

      // get the value of MONTH field
      System.out.println("Month : " + c.get(Calendar.MONTH));

      // get the value of YEAR field
      System.out.println("Year : " + c.get(Calendar.YEAR));
   }
}

Output :

Day : 1
Month : 2
Year : 2018

Example 2 : To fetch Day of the week, Day of the year, Week of the month, Week of the year.

// Java code to demonstrate the calendar.get() function
import java.util.*;

class GFG {
   // Driver code
   public static void main(String[] args)
   {
      // creating a calendar
      Calendar c = Calendar.getInstance();

      // get the value of DAY_OF_WEEK field
      System.out.println("Day of week : " + c.get(Calendar.DAY_OF_WEEK));

      // get the value of DAY_OF_YEAR field
      System.out.println("Day of year : " + c.get(Calendar.DAY_OF_YEAR));

      // get the value of WEEK_OF_MONTH field
      System.out.println("Week in Month : " + c.get(Calendar.WEEK_OF_MONTH));

      // get the value of WEEK_OF_YEAR field
      System.out.println("Week in Year : " + c.get(Calendar.WEEK_OF_YEAR));

      // get the value of DAY_OF_WEEK_IN_MONTH field
      System.out.println("Day of Week in Month : " + c.get(Calendar.DAY_OF_WEEK_IN_MONTH));
   }
}

Output :

Day of week : 5
Day of year : 60
Week in Month : 1
Week in Year : 9
Day of Week in Month : 1

Example 3 : To fetch Hour, Minute, Second and AM_PM.

// Java code to demonstrate the calendar.get() function
import java.util.*;

class GFG {
   // Driver code
   public static void main(String[] args)
   {
      // creating a calendar
      Calendar c = Calendar.getInstance();

      // get the value of HOUR field
      System.out.println("Hour : " + c.get(Calendar.HOUR));

      // get the value of MINUTE field
      System.out.println("Minute : " + c.get(Calendar.MINUTE));

      // get the value of SECOND field
      System.out.println("Second : " + c.get(Calendar.SECOND));

      // get the value of AM_PM field
      System.out.println("AM or PM : " + c.get(Calendar.AM_PM));

      // get the value of HOUR_OF_DAY field
      System.out.println("Hour (24-hour clock) : " + c.get(Calendar.HOUR_OF_DAY));
   }
}

Output :

Hour : 6
Minute : 51
Second : 53
AM or PM : 0
Hour (24-hour clock) : 6
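One detail worth calling out from the first example: Calendar.MONTH is zero-based (JANUARY is 0 and DECEMBER is 11), which is why the output above reports "Month : 2" for a date in March. A small illustrative adjustment when a conventional month number is wanted:

// Calendar.MONTH runs from 0 (JANUARY) to 11 (DECEMBER),
// so add 1 when printing a human-readable month number
System.out.println("Month : " + (c.get(Calendar.MONTH) + 1));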
[ { "code": null, "e": 24091, "s": 24063, "text": "\n02 Mar, 2018" }, { "code": null, "e": 24447, "s": 24091, "text": "java.util.Calendar.get() method is a method of java.util.Calendar class. The Calendar class provides some methods for implementing a concrete calendar system outside the package. Some examples of Calendar fields are : YEAR, DATE, MONTH, DAY_OF_WEEK, DAY_OF_YEAR, WEEK_OF_YEAR, MINUTE, SECOND, HOUR, AM_PM, WEEK_OF_MONTH, DAY_OF_WEEK_IN_MONTH, HOUR_OF_DAY." }, { "code": null, "e": 24456, "s": 24447, "text": "Syntax :" }, { "code": null, "e": 24584, "s": 24456, "text": "public int get(int field)\n\nwhere, field represents the given calendar\nfield and the function returns the value of\ngiven field.\n" }, { "code": null, "e": 24683, "s": 24584, "text": "Exception : If the specified field is out of range, then ArrayIndexOutOfBoundsException is thrown." }, { "code": null, "e": 24735, "s": 24683, "text": "Applications :Example 1: To fetch Date, Month, Year" }, { "code": "// Java code to implement calendar.get() functionimport java.util.*; class GFG { // Driver codepublic static void main(String[] args) { // creating a calendar Calendar c = Calendar.getInstance(); // get the value of DATE field System.out.println(\"Day : \" + c.get(Calendar.DATE)); // get the value of MONTH field System.out.println(\"Month : \" + c.get(Calendar.MONTH)); // get the value of YEAR field System.out.println(\"Year : \" + c.get(Calendar.YEAR));}}", "e": 25322, "s": 24735, "text": null }, { "code": null, "e": 25331, "s": 25322, "text": "Output :" }, { "code": null, "e": 25362, "s": 25331, "text": "Day : 1\nMonth : 2\nYear : 2018\n" }, { "code": null, "e": 25454, "s": 25362, "text": "Example 2 : To fetch Day of the week, Day of the year, Week of the month, Week of the year." }, { "code": "// Java Code of calendar.get() functionimport java.util.*; class GFG { // Driver codepublic static void main(String[] args) { // creating a calendar Calendar c = Calendar.getInstance(); // get the value of DATE_OF_WEEK field System.out.println(\"Day of week : \" + c.get(Calendar.DAY_OF_WEEK)); // get the value of DAY_OF_YEAR field System.out.println(\"Day of year : \" + c.get(Calendar.DAY_OF_YEAR)); // get the value of WEEK_OF_MONTH field System.out.println(\"Week in Month : \" + c.get(Calendar.WEEK_OF_MONTH)); // get the value of WEEK_OF_YEAR field System.out.println(\"Week in Year : \" + c.get(Calendar.WEEK_OF_YEAR)); // get the value of DAY_OF_WEEK_IN_MONTH field System.out.println(\"Day of Week in Month : \" + c.get(Calendar.DAY_OF_WEEK_IN_MONTH));}}", "e": 26475, "s": 25454, "text": null }, { "code": null, "e": 26484, "s": 26475, "text": "Output :" }, { "code": null, "e": 26578, "s": 26484, "text": "Day of week : 5\nDay of year : 60\nWeek in Month : 1\nWeek in Year : 9\nDay of Week in Month : 1\n" }, { "code": null, "e": 26631, "s": 26578, "text": "Example 3 : To fetch Hour, Minute, Second and AM_PM." 
}, { "code": "// Implementation of calendar.get()// function in Javaimport java.util.*; class GFG { // Driver code public static void main(String[] args) { // creating a calendar Calendar c = Calendar.getInstance(); // get the value of HOUR field System.out.println(\"Hour : \" + c.get(Calendar.HOUR)); // get the value of MINUTE field System.out.println(\"Minute : \" + c.get(Calendar.MINUTE)); // get the value of SECOND field System.out.println(\"Second : \" + c.get(Calendar.SECOND)); // get the value of AM_PM field System.out.println(\"AM or PM : \" + c.get(Calendar.AM_PM)); // get the value of HOUR_OF_DAY field System.out.println(\"Hour (24-hour clock) : \" + c.get(Calendar.HOUR_OF_DAY)); }}", "e": 27418, "s": 26631, "text": null }, { "code": null, "e": 27427, "s": 27418, "text": "Output :" }, { "code": null, "e": 27499, "s": 27427, "text": "Hour : 6\nMinute : 51\nSecond : 53\nAM or PM : 0\nHour (24-hour clock) : 6\n" }, { "code": null, "e": 27517, "s": 27499, "text": "date-time-program" }, { "code": null, "e": 27530, "s": 27517, "text": "Java-Library" }, { "code": null, "e": 27535, "s": 27530, "text": "Java" }, { "code": null, "e": 27540, "s": 27535, "text": "Java" }, { "code": null, "e": 27638, "s": 27540, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27647, "s": 27638, "text": "Comments" }, { "code": null, "e": 27660, "s": 27647, "text": "Old Comments" }, { "code": null, "e": 27711, "s": 27660, "text": "Object Oriented Programming (OOPs) Concept in Java" }, { "code": null, "e": 27741, "s": 27711, "text": "HashMap in Java with Examples" }, { "code": null, "e": 27772, "s": 27741, "text": "How to iterate any Map in Java" }, { "code": null, "e": 27804, "s": 27772, "text": "Initialize an ArrayList in Java" }, { "code": null, "e": 27823, "s": 27804, "text": "Interfaces in Java" }, { "code": null, "e": 27841, "s": 27823, "text": "ArrayList in Java" }, { "code": null, "e": 27873, "s": 27841, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 27893, "s": 27873, "text": "Stack Class in Java" }, { "code": null, "e": 27917, "s": 27893, "text": "Singleton Class in Java" } ]
Add DATE and TIME fields to get DATETIME field in MySQL?
You can use the CONCAT() function to combine the date and time fields into a DATETIME field. Let us create a demo table mysql> create table getDateTimeFieldsDemo -> ( -> ShippingDate date, -> ShippingTime time, -> Shippingdatetime datetime -> ); Query OK, 0 rows affected (0.50 sec) Insert some records in the table using the insert command. The query is as follows − mysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2018-01-21','09:45:34'); Query OK, 1 row affected (0.16 sec) mysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2013-07-26','13:21:20'); Query OK, 1 row affected (0.13 sec) mysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2017-12-31','15:31:40'); Query OK, 1 row affected (0.17 sec) mysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2019-03-07','12:13:34'); Query OK, 1 row affected (0.41 sec) Display all records from the table using the select statement. The query is as follows − mysql> select *from getDateTimeFieldsDemo; The following is the output +--------------+--------------+------------------+ | ShippingDate | ShippingTime | Shippingdatetime | +--------------+--------------+------------------+ | 2018-01-21 | 09:45:34 | NULL | | 2013-07-26 | 13:21:20 | NULL | | 2017-12-31 | 15:31:40 | NULL | | 2019-03-07 | 12:13:34 | NULL | +--------------+--------------+------------------+ 4 rows in set (0.00 sec) Here is the query to add the DATE and TIME fields to get the DATETIME field in MySQL mysql> update getDateTimeFieldsDemo set Shippingdatetime=concat(ShippingDate," ",ShippingTime); Query OK, 4 rows affected (0.09 sec) Rows matched: 4 Changed: 4 Warnings: 0 Now check the table records once again. The query is as follows − mysql> select *from getDateTimeFieldsDemo; The following is the output +--------------+--------------+---------------------+ | ShippingDate | ShippingTime | Shippingdatetime | +--------------+--------------+---------------------+ | 2018-01-21 | 09:45:34 | 2018-01-21 09:45:34 | | 2013-07-26 | 13:21:20 | 2013-07-26 13:21:20 | | 2017-12-31 | 15:31:40 | 2017-12-31 15:31:40 | | 2019-03-07 | 12:13:34 | 2019-03-07 12:13:34 | +--------------+--------------+---------------------+ 4 rows in set (0.00 sec)
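As an alternative worth noting, MySQL also provides the TIMESTAMP() function, which takes a date expression and a time expression and returns the combined DATETIME value directly, avoiding string concatenation. A minimal sketch against the same table −

mysql> update getDateTimeFieldsDemo set Shippingdatetime=timestamp(ShippingDate,ShippingTime);

This produces the same result as the CONCAT() approach shown above.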
[ { "code": null, "e": 1143, "s": 1062, "text": "You can use CONCAT() function to set date and time fields to get DATETIME field." }, { "code": null, "e": 1170, "s": 1143, "text": "Let us create a demo table" }, { "code": null, "e": 1348, "s": 1170, "text": "mysql> create table getDateTimeFieldsDemo\n -> (\n -> ShippingDate date,\n -> ShippingTime time,\n -> Shippingdatetime datetime\n -> );\nQuery OK, 0 rows affected (0.50 sec)" }, { "code": null, "e": 1429, "s": 1348, "text": "Insert some records in the table using insert command. The query is as follows −" }, { "code": null, "e": 1977, "s": 1429, "text": "mysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2018-01-21','09:45:34');\nQuery OK, 1 row affected (0.16 sec)\nmysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2013-07-26','13:21:20');\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2017-12-31','15:31:40');\nQuery OK, 1 row affected (0.17 sec)\nmysql> insert into getDateTimeFieldsDemo(ShippingDate,ShippingTime) values('2019-03-07','12:13:34');\nQuery OK, 1 row affected (0.41 sec)" }, { "code": null, "e": 2062, "s": 1977, "text": "Display all records from the table using select statement. The query is as follows −" }, { "code": null, "e": 2105, "s": 2062, "text": "mysql> select *from getDateTimeFieldsDemo;" }, { "code": null, "e": 2133, "s": 2105, "text": "The following is the output" }, { "code": null, "e": 2566, "s": 2133, "text": "+--------------+--------------+------------------+\n| ShippingDate | ShippingTime | Shippingdatetime |\n+--------------+--------------+------------------+\n| 2018-01-21 | 09:45:34 | NULL |\n| 2013-07-26 | 13:21:20 | NULL |\n| 2017-12-31 | 15:31:40 | NULL |\n| 2019-03-07 | 12:13:34 | NULL |\n+--------------+--------------+------------------+\n4 rows in set (0.00 sec)" }, { "code": null, "e": 2643, "s": 2566, "text": "Here is the query to add DATE and TIME fields to get DATETIME field in MySQL" }, { "code": null, "e": 2815, "s": 2643, "text": "mysql> update getDateTimeFieldsDemo set Shippingdatetime=concat(ShippingDate,\" \",ShippingTime);\nQuery OK, 4 rows affected (0.09 sec)\nRows matched: 4 Changed: 4 Warnings: 0" }, { "code": null, "e": 2877, "s": 2815, "text": "Now check table records once again. The query is as follows −" }, { "code": null, "e": 2920, "s": 2877, "text": "mysql> select *from getDateTimeFieldsDemo;" }, { "code": null, "e": 2948, "s": 2920, "text": "The following is the output" }, { "code": null, "e": 3405, "s": 2948, "text": "+--------------+--------------+---------------------+\n| ShippingDate | ShippingTime | Shippingdatetime |\n+--------------+--------------+---------------------+\n| 2018-01-21 | 09:45:34 | 2018-01-21 09:45:34 |\n| 2013-07-26 | 13:21:20 | 2013-07-26 13:21:20 |\n| 2017-12-31 | 15:31:40 | 2017-12-31 15:31:40 |\n| 2019-03-07 | 12:13:34 | 2019-03-07 12:13:34 |\n+--------------+--------------+---------------------+\n4 rows in set (0.00 sec)" } ]
__attribute__((constructor)) and __attribute__((destructor)) syntaxes in C - GeeksforGeeks
02 Jun, 2017 Write two functions in C using the GCC compiler, one of which executes before the main function and the other executes after the main function. GCC specific syntaxes : 1. __attribute__((constructor)) syntax : This particular GCC syntax, when used with a function, executes the same function at the startup of the program, i.e. before the main() function. 2. __attribute__((destructor)) syntax : This particular GCC syntax, when used with a function, executes the same function just before the program terminates through _exit, i.e. after the main() function. Explanation : The way constructors and destructors work is that the shared object file contains special sections (.ctors and .dtors on ELF) which contain references to the functions marked with the constructor and destructor attributes, respectively. When the library is loaded/unloaded, the dynamic loader program checks whether such sections exist, and if so, calls the functions referenced therein. Few points regarding these are worth noting : 1. __attribute__((constructor)) runs when a shared library is loaded, typically during program startup. 2. __attribute__((destructor)) runs when the shared library is unloaded, typically at program exit. 3. The two parentheses are presumably to distinguish them from function calls. 4. __attribute__ is a GCC-specific syntax; not a function or a macro. Driver code : // C program to demonstrate working of// __attribute__((constructor)) and// __attribute__((destructor))#include<stdio.h> // Assigning functions to be executed before and// after main()void __attribute__((constructor)) calledFirst();void __attribute__((destructor)) calledLast(); void main(){ printf("\nI am in main");} // This function is assigned to execute before// main using __attribute__((constructor))void calledFirst(){ printf("\nI am called first");} // This function is assigned to execute after// main using __attribute__((destructor))void calledLast(){ printf("\nI am called last");} Output: I am called first I am in main I am called last This article is contributed by Rishav Raj.
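GCC additionally accepts an optional integer priority with both attributes to control the relative order when several constructors or destructors are present: constructors with lower priority values run earlier, and destructors run in the reverse order. A minimal sketch of the idea — the priority values 101 and 102 below are arbitrary illustrative choices (values up to 100 are reserved for the implementation):

#include<stdio.h>

// Lower constructor priority runs first
void __attribute__((constructor(101))) firstInit() { printf("\nfirstInit"); }

// Higher constructor priority runs later
void __attribute__((constructor(102))) secondInit() { printf("\nsecondInit"); }

void main() { printf("\nI am in main"); }

Here firstInit executes before secondInit, and both execute before main.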
[ { "code": null, "e": 23843, "s": 23815, "text": "\n02 Jun, 2017" }, { "code": null, "e": 23975, "s": 23843, "text": "Write two functions in C using GCC compiler, one of which executes before main function and other executes after the main function." }, { "code": null, "e": 23999, "s": 23975, "text": "GCC specific syntaxes :" }, { "code": null, "e": 24181, "s": 23999, "text": "1. __attribute__((constructor)) syntax : This particular GCC syntax, when used with a function, executes the same function at the startup of the program, i.e before main() function." }, { "code": null, "e": 24380, "s": 24181, "text": "2. __attribute__((destructor)) syntax : This particular GCC syntax, when used with a function, executes the same function just before the program terminates through _exit, i.e after main() function." }, { "code": null, "e": 24781, "s": 24380, "text": "Explanation :The way constructors and destructors work is that the shared object file contains special sections (.ctors and .dtors on ELF) which contain references to the functions marked with the constructor and destructor attributes, respectively. When the library is loaded/unloaded, the dynamic loader program checks whether such sections exist, and if so, calls the functions referenced therein." }, { "code": null, "e": 25175, "s": 24781, "text": "Few points regarding these are worth noting :1. __attribute__((constructor)) runs when a shared library is loaded, typically during program startup.2. __attribute__((destructor)) runs when the shared library is unloaded, typically at program exit.3. The two parentheses are presumably to distinguish them from function calls.4. __attribute__ is a GCC specific syntax;not a function or a macro." }, { "code": null, "e": 25189, "s": 25175, "text": "Driver code :" }, { "code": "// C program to demonstrate working of// __attribute__((constructor)) and// __attribute__((destructor))#include<stdio.h> // Assigning functions to be executed before and// after main()void __attribute__((constructor)) calledFirst();void __attribute__((destructor)) calledLast(); void main(){ printf(\"\\nI am in main\");} // This function is assigned to execute before// main using __attribute__((constructor))void calledFirst(){ printf(\"\\nI am called first\");} // This function is assigned to execute after// main using __attribute__((destructor))void calledLast(){ printf(\"\\nI am called last\");}", "e": 25797, "s": 25189, "text": null }, { "code": null, "e": 25805, "s": 25797, "text": "Output:" }, { "code": null, "e": 25854, "s": 25805, "text": "I am called first\nI am in main\nI am called last\n" }, { "code": null, "e": 26152, "s": 25854, "text": "This article is contributed by Rishav Raj. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 26277, "s": 26152, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." 
}, { "code": null, "e": 26287, "s": 26277, "text": "C-Library" }, { "code": null, "e": 26299, "s": 26287, "text": "CPP-Library" }, { "code": null, "e": 26310, "s": 26299, "text": "C Language" }, { "code": null, "e": 26314, "s": 26310, "text": "C++" }, { "code": null, "e": 26318, "s": 26314, "text": "CPP" }, { "code": null, "e": 26416, "s": 26318, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26425, "s": 26416, "text": "Comments" }, { "code": null, "e": 26438, "s": 26425, "text": "Old Comments" }, { "code": null, "e": 26476, "s": 26438, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 26502, "s": 26476, "text": "Exception Handling in C++" }, { "code": null, "e": 26522, "s": 26502, "text": "Multithreading in C" }, { "code": null, "e": 26563, "s": 26522, "text": "Arrow operator -> in C/C++ with Examples" }, { "code": null, "e": 26585, "s": 26563, "text": "'this' pointer in C++" }, { "code": null, "e": 26603, "s": 26585, "text": "Vector in C++ STL" }, { "code": null, "e": 26649, "s": 26603, "text": "Initialize a vector in C++ (6 different ways)" }, { "code": null, "e": 26692, "s": 26649, "text": "Map in C++ Standard Template Library (STL)" }, { "code": null, "e": 26711, "s": 26692, "text": "Inheritance in C++" } ]
How to produce Interactive Matplotlib Plots in Jupyter Environment | by Abdishakur | Towards Data Science
Matplotlib is an extremely powerful visualization library and is the default backend for many other python libraries including Pandas, Geopandas and Seaborn, to name just a few. Today, there are different options to enable interactivity with Matplotlib plots. However, the new native Matplotlib/Jupyter Interactive widgets offer more extensive usage and benefits to all third party packages that use Matplotlib. Built on top of Matplotlib and Widgets, this technique allows you to have interactive plots without third party libraries. The only requirement is to install Ipympl, and all interactivity extensions are readily available in your Jupyter notebook environment. In this tutorial, I will cover some use cases and examples of interactive data visualization with Matplotlib using ipympl. We will first cover the basics of ipympl, its canvas and figures with some examples. Leveraging the Jupyter interactive widgets framework, IPYMPL enables the interactive features of matplotlib in the Jupyter notebook and in JupyterLab. To enable the interactive visualization backend, you only need to use the Jupyter magic command: %matplotlib widget Now, let us visualize a matplotlib plot. We first read the data with Pandas and create a scatter plot with Matplotlib. df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/tips.csv")# Matplotlib Scatter Plotplt.scatter('total_bill', 'tip',data=df)plt.xlabel('Total Bill')plt.ylabel('Tip')plt.show() And with no additional code and only using the simple matplotlib code, the output is an interactive plot where you can zoom in/out, pan it and reset to the original view. The below GIF image displays the interactivity possible with ipympl. Besides, you can also customize the User Interface's visibility, the canvas footer, and canvas size. fig.canvas.toolbar_visible = Falsefig.canvas.header_visible = Falsefig.canvas.resizable = True These commands alter the User Interface of Ipympl and Matplotlib plots. The buttons will disappear from the UI but you still have the functionality. To revert to the usual matplotlib plots, you can call the matplotlib inline magic: %matplotlib inline Let us move to Interactive plots with Pandas. Pandas plots are built on top of Matplotlib; therefore, we can also create interactive pandas plots with Ipympl. Let us take a simple Line plot with Pandas. df.plot(kind="line") Lo and behold, we have an interactive plot with Pandas. This line plot is dense, and with this interactive functionality, we can zoom in to a particular place and interact with the plot. See the below GIF. Note that Pandas already has some interactive backend options including Plotly and Bokeh. However, this does not extend to other libraries built on top of Pandas. One such library is Geopandas, which gets its first interactive maps thanks to Ipympl. If you have used geospatial data in Pandas, you already know Geopandas, which is widely used in the geospatial data science landscape. Interactive maps with Geopandas are game-changing. Although pandas had other backend options to make its plots interactive, that was not possible with Geopandas. As Geopandas is also built on top of Matplotlib, the interactive backend of Ipympl extends to it as well. We can now have interactive maps within Geopandas without using any other third party geo-visualization libraries. Let us see an example of an interactive map with Geopandas powered by Ipympl. I will first read the data with Pandas since we are using a CSV file and convert it to a Geopandas GeoDataFrame.
carshare = "https://raw.githubusercontent.com/plotly/datasets/master/carshare.csv"df_carshare = pd.read_csv(carshare)gdf = gpd.GeoDataFrame(df_carshare, geometry=gpd.points_from_xy(df_carshare.centroid_lon, df_carshare.centroid_lat), crs="EPSG:4326") We can now plot any geospatial data with Geopandas. We can simply call .plot() to visualize a map. However, in this example, I am also adding a base map for contextual purpose. import contextily as ctxfig, ax = plt.subplots()gdf.to_crs(epsg=3857).plot(ax=ax, color="red", edgecolor="white")ctx.add_basemap(ax, url=ctx.providers.CartoDB.Positron) plt.title("Car Share", fontsize=30, fontname="Palatino Linotype", color="grey")ax.axis("off")plt.show() And we have an interactive map with Geopandas. We can zoom in and out and pan the map. That is wonderful. I use Geopandas daily, so this is a huge addition without a lot of hurdles. You can still work with Geopandas and have interactive maps without using other third-party packages. In this tutorial, we have introduced an easy and convenient way to enable interactive plots with Matplotlib using Ipympl. We have seen how to create interactive plots with Matplotlib, Pandas and Geopandas. To use Ipympl's interactive functionality, you can install it with Conda/pip: conda install -c conda-forge ipymplpip install ipympl If you are using JupyterLab, you also need to install Node.js and the JupyterLab extension manager. conda install -c conda-forge nodejsjupyter labextension install @jupyter-widgets/jupyterlab-managerjupyter lab build
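Beyond pan/zoom, an ipympl canvas can also be driven by ipywidgets controls for parameter exploration. Here is a minimal sketch of that pattern (it assumes ipywidgets is installed alongside ipympl; the frequency slider and variable names are illustrative choices, not part of the original tutorial):

%matplotlib widget
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import FloatSlider, interact

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
line, = ax.plot(x, np.sin(x))

# Redraw the existing line whenever the slider moves
@interact(freq=FloatSlider(min=0.5, max=5.0, step=0.5, value=1.0))
def update(freq):
    line.set_ydata(np.sin(freq * x))
    fig.canvas.draw_idle()

Because the same canvas is reused, the plot updates in place instead of producing a new figure on every change.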
[ { "code": null, "e": 347, "s": 172, "text": "Matplotlib is extremely powerful visualization library and is the default backend for many other python libraries including Pandas, Geopandas and Seaborn, to name just a few." }, { "code": null, "e": 581, "s": 347, "text": "Today, there are different options to enable interactivity with Matplotlib plots. However, the new native Matplotlib/Jupyter Interactive widgets offer more extensive usage and benefits to all third party packages that use Matplotlib." }, { "code": null, "e": 839, "s": 581, "text": "Built on top of Matplotlib and Widgets, this technique allows you to have interactive plots without third party libraries. The only requirement is to install Ipympl and all interactivity extensions are readily available in your Jupiter notebook environment." }, { "code": null, "e": 1047, "s": 839, "text": "In this tutorial, I will cover some use cases and examples of interactive data visualization with Matplotlib using ipympl. We will first cover the basics of ipympl, its canvas and figures with some examples." }, { "code": null, "e": 1198, "s": 1047, "text": "Leveraging the Jupyter interactive widgets framework, IPYMPL enables the interactive features of matplotlib in the Jupyter notebook and in JupyterLab." }, { "code": null, "e": 1291, "s": 1198, "text": "To enable interactive visualization backend, you only need to use the Jupyter magic command:" }, { "code": null, "e": 1310, "s": 1291, "text": "%matplotlib widget" }, { "code": null, "e": 1429, "s": 1310, "text": "Now, let us visualize a matplotlib plot. We first read the data with Pandas and create a scatter plot with Matplotlib." }, { "code": null, "e": 1636, "s": 1429, "text": "url =df = pd.read_csv(“https://raw.githubusercontent.com/plotly/datasets/master/tips.csv”)# Matplotlib Scatter Plotplt.scatter(‘total_bill’, ‘tip’,data=df)plt.xlabel(‘Total Bill’)plt.ylabel(‘Tip’)plt.show()" }, { "code": null, "e": 1872, "s": 1636, "text": "And with no additional code and only using the simple matplotlib code, the output is an interactive plot where you can zoom in/out, pan it and reset to the original view. Below GIF image displays the interactivity possible with ipympl." }, { "code": null, "e": 1973, "s": 1872, "text": "Besides, you can also customize the User Interface’s visibility, the canvas footer, and canvas size." }, { "code": null, "e": 2068, "s": 1973, "text": "fig.canvas.toolbar_visible = Falsefig.canvas.header_visible = Falsefig.canvas.resizable = True" }, { "code": null, "e": 2299, "s": 2068, "text": "These commands alter the User Interface of Ipympl and Matplotlib plots. The buttons will disappear from the UI but you still have the functionality. To revert back to the usual matplotlib plots, you can call the matplotlib inline:" }, { "code": null, "e": 2318, "s": 2299, "text": "%matplotlib inline" }, { "code": null, "e": 2364, "s": 2318, "text": "Let us move to Interactive plots with Pandas." }, { "code": null, "e": 2521, "s": 2364, "text": "Pandas plots are built on top of Matplotlib; therefore, we can also create interactive pandas plots with Ipympl. Let us take a simple Line plot with Pandas." }, { "code": null, "e": 2542, "s": 2521, "text": "df.plot(kind=”line”)" }, { "code": null, "e": 2748, "s": 2542, "text": "Lo and behold, we have an interactive plot with Pandas. This line plot is dense, and with this interactive functionality, we can zoom in to a particular place and interact with the plot. See the below GIF." 
}, { "code": null, "e": 2998, "s": 2748, "text": "Note that Pandas has already some interactive backend options including Plotly and Bokeh. However, this does not extend to other libraries built on top of Pandas. One such library is Geopandas, which gets its first interactive maps thanks to Ipympl." }, { "code": null, "e": 3294, "s": 2998, "text": "If you have uses Geospatial data in Pandas, you already know Geopandas, which is widely used in the Geospatial data science landscape. Interactive maps with Geopandas is game-changing. Although pandas had other backend options to make its plots interactive, that was not possible with Geopandas." }, { "code": null, "e": 3515, "s": 3294, "text": "As Geopandas is also built on top of Maptlotlib, the interactive backend of Ipympl extends to it as well. We can now have interactive maps within Geopandas without using any other third party geo-visualization libraries." }, { "code": null, "e": 3702, "s": 3515, "text": "Let us see an example of an interactive map with Geopandas powered by Ipympl. I will first read the data with Pandas since we are using CSV file and convert it to Geopandas Geodataframe." }, { "code": null, "e": 3953, "s": 3702, "text": "carshare = “https://raw.githubusercontent.com/plotly/datasets/master/carshare.csv\"df_carshare = pd.read_csv(carshare)gdf = gpd.GeoDataFrame(df_carshare, geometry=gpd.points_from_xy(df_carshare.centroid_lon, df_carshare.centroid_lat), crs=”EPSG:4326\")" }, { "code": null, "e": 4130, "s": 3953, "text": "We can now plot any geospatial data with Geopandas. We can simply call .plot() to visualize a map. However, in this example, I am also adding a base map for contextual purpose." }, { "code": null, "e": 4403, "s": 4130, "text": "import contextily as ctxfig, ax = plt.subplots()gdf.to_crs(epsg=3857).plot(ax=ax, color=”red”, edgecolor=”white”)ctx.add_basemap(ax, url=ctx.providers.CartoDB.Positron) plt.title(“Car Share”, fontsize=30, fontname=”Palatino Linotype”, color=”grey”)ax.axis(“off”)plt.show()" }, { "code": null, "e": 4509, "s": 4403, "text": "And we have an interactive map with Geopandas. We can zoom in and out and pan the map. That is wonderful." }, { "code": null, "e": 4687, "s": 4509, "text": "I use Geoapndas daily, so this is a huge addition without a lot of hurdles. You can still work with Geopandas and have interactive maps without using other third-party packages." }, { "code": null, "e": 4893, "s": 4687, "text": "In this tutorial, we have introduced an easy and convenient way to enable interactive plots with Maptlotlib using Ipympl. We have seen how to create interactive plots with Matplotlib, Pandas and Geopandas." }, { "code": null, "e": 4972, "s": 4893, "text": "To use Ipympl’s interactive functionality, you can install it with Conda/ pip:" }, { "code": null, "e": 5026, "s": 4972, "text": "conda install -c conda-forge ipymplpip install ipympl" }, { "code": null, "e": 5123, "s": 5026, "text": "If you are using Jupyter Lab, you also need to install node js and jupyterLab extension manager." } ]
Find the Number of Quadrilaterals Possible from the Given Points using C++
A quadrilateral forms a polygon with four vertices and four edges in Euclidean plane geometry. It is sometimes also known as a tetragon or a 4-gon. In this article, we will explain the approaches to finding the number of quadrilaterals possible from the given points. In this problem, we need to find out how many quadrilaterals it is possible to create with the provided four points ( x, y ) in the Cartesian plane. So here is an example for the given problem − Input : A( -2, 8 ), B( -2, 0 ), C( 6, -1 ), D( 0, 8 ) Output : 1 Explanation : One quadrilateral can be formed ( ABCD ) Input : A( 1, 8 ), B( 0, 1 ), C( 4, 0 ), D( 1, 2 ) Output : 3 Explanation : 3 quadrilaterals can be formed (ABCD), (ABDC) and (ADBC). We will first check if 3 out of 4 points are collinear and if yes, then no quadrilateral can be formed with the points. After that, we will check whether any 2 out of 4 points are the same and if yes, then no quadrilateral can be formed. Now, we will check if the diagonals intersect or not. If yes, then there is only one possible quadrilateral that can be formed, called a convex quadrilateral. Total number of intersection = 1 If the diagonals do not intersect, three possible quadrilaterals can be formed, called a concave quadrilateral. Total number of intersection = 0 #include <iostream> using namespace std; struct Point{ // points int x; int y; }; int check_orientation(Point i, Point j, Point k){ int val = (j.y - i.y) * (k.x - j.x) - (j.x - i.x) * (k.y - j.y); if (val == 0) return 0; return (val > 0) ? 1 : 2; } // checking whether line segments intersect bool check_Intersect(Point A, Point B, Point C, Point D){ int o1 = check_orientation(A, B, C); int o2 = check_orientation(A, B, D); int o3 = check_orientation(C, D, A); int o4 = check_orientation(C, D, B); if (o1 != o2 && o3 != o4) return true; return false; } // checking whether 2 points are same bool check_similar(Point A, Point B){ // If found similar then we are returning false that means no quad. can be formed if (A.x == B.x && A.y == B.y) return false; // returning true for not found similar return true; } // Checking collinearity of three points bool check_collinear(Point A, Point B, Point C){ int x1 = A.x, y1 = A.y; int x2 = B.x, y2 = B.y; int x3 = C.x, y3 = C.y; if ((y3 - y2) * (x2 - x1) == (y2 - y1) * (x3 - x2)) return false; else return true; } // main function int main(){ struct Point A,B,C,D; A.x = -2, A.y = 8;// A(-2, 8) B.x = -2, B.y = 0;// B(-2, 0) C.x = 6, C.y = -1;// C(6, -1) D.x = 0, D.y = 8;// D(0, 8) // Checking whether any 3 points are collinear bool flag = true; flag = flag & check_collinear(A, B, C); flag = flag & check_collinear(A, B, D); flag = flag & check_collinear(A, C, D); flag = flag & check_collinear(B, C, D); // If points found collinear if (flag == false){ cout << "Number of quadrilaterals possible from the given points: 0"; return 0; } // Checking if 2 points are same.
bool same = true; same = same & check_similar(A, B); same = same & check_similar(A, C); same = same & check_similar(B, D); same = same & check_similar(C, D); same = same & check_similar(A, D); same = same & check_similar(B, C); // If a similar point exists if (same == false){ cout << "Number of quadrilaterals possible from the given points: 0"; return 0; } // checking whether diagonals intersect or not flag = true; if (check_Intersect(A, B, C, D)) flag = false; if (check_Intersect(A, C, B, D)) flag = false; if (check_Intersect(A, B, D, C)) flag = false; if (flag == true) cout << "Number of quadrilaterals possible from the given points: 3"; else cout << "Number of quadrilaterals possible from the given points: 1"; return 0; } Number of quadrilaterals possible from the given points : 1 This code can be understood in the following steps − Checking whether any three points are collinear and if yes, then the number of quads. : 0 Checking whether any two points are similar and if yes, then the number of quads. : 0 Checking whether any line segments intersect: If yes, then the number of quads. : 1; if no, then the number of quads. : 3 In this article, we solved the problem of finding all possible quadrilaterals that can be formed from the given 4 points. We understood how the number of quadrilaterals depends on collinearity, intersection, and orientation. We also wrote a C++ program for the same, and we can write this program in any other language like C, Java, and Python.
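As an illustration of that portability, here is a minimal Python sketch of the same orientation test used by the C++ program above (the function and variable names mirror the C++ version; the points are the first example's input) −

def orientation(p, q, r):
    # 0 = collinear, 1 = clockwise, 2 = counter-clockwise
    val = (q[1] - p[1]) * (r[0] - q[0]) - (q[0] - p[0]) * (r[1] - q[1])
    if val == 0:
        return 0
    return 1 if val > 0 else 2

A, B, C, D = (-2, 8), (-2, 0), (6, -1), (0, 8)
print(orientation(A, B, C))  # non-zero, so A, B and C are not collinear

The same collinearity and intersection checks follow directly from this helper, exactly as in the C++ code.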
[ { "code": null, "e": 1286, "s": 1062, "text": "A quadrilateral forms a polygon with four vertices and four edges in Euclidean plane geometry. The name 4-gon etc. Included in other names of quadrilaterals and sometimes they are also known as a square, display style, etc." }, { "code": null, "e": 1609, "s": 1286, "text": "In this article, we will explain the approaches to finding the number of quadrilaterals possible from the given points. In this problem, we need to find out how many possible quadrilaterals are possible to create with the provided four points ( x, y ) in the cartesian plane. So here is the example for the given problem −" }, { "code": null, "e": 1729, "s": 1609, "text": "Input : A( -2, 8 ), B( -2, 0 ), C( 6, -1 ), D( 0, 8 )\nOutput : 1\nExplanation : One quadrilateral can be formed ( ABCD )" }, { "code": null, "e": 1863, "s": 1729, "text": "Input : A( 1, 8 ), B( 0, 1 ), C( 4, 0 ), D( 1, 2 )\nOutput : 3\nExplanation : 3 quadrilaterals can be formed (ABCD), (ABDC) and (ADBC)." }, { "code": null, "e": 1983, "s": 1863, "text": "We will first check if 3 out of 4 points are collinear and if yes, then no quadrilateral can be formed with the points." }, { "code": null, "e": 2103, "s": 1983, "text": "We will first check if 3 out of 4 points are collinear and if yes, then no quadrilateral can be formed with the points." }, { "code": null, "e": 2221, "s": 2103, "text": "After that, we will check whether any 2 out of 4 points are the same and if yes, then no quadrilateral can be formed." }, { "code": null, "e": 2339, "s": 2221, "text": "After that, we will check whether any 2 out of 4 points are the same and if yes, then no quadrilateral can be formed." }, { "code": null, "e": 2497, "s": 2339, "text": "Now, we will check if the diagonal intersect or not. If yes, then there is only one possible quadrilateral that can be formed, called a convex quadrilateral." }, { "code": null, "e": 2655, "s": 2497, "text": "Now, we will check if the diagonal intersect or not. If yes, then there is only one possible quadrilateral that can be formed, called a convex quadrilateral." }, { "code": null, "e": 2688, "s": 2655, "text": "Total number of intersection = 1" }, { "code": null, "e": 2800, "s": 2688, "text": "If the diagonals do not intersect, three possible quadrilaterals can be formed, called a concave quadrilateral." }, { "code": null, "e": 2833, "s": 2800, "text": "Total number of intersection = 0" }, { "code": null, "e": 5457, "s": 2833, "text": "#include <iostream>\nusing namespace std;\nstruct Point{ // points\n int x;\n int y;\n};\nint check_orientation(Point i, Point j, Point k){\n int val = (j.y - i.y) * (k.x - j.x) - (j.x - i.x) * (k.y - j.y);\n if (val == 0)\n return 0;\n return (val > 0) ? 1 : 2;\n}\n// checking whether line segments intersect\nbool check_Intersect(Point A, Point B, Point C, Point D){\n int o1 = check_orientation(A, B, C);\n int o2 = check_orientation(A, B, D);\n int o3 = check_orientation(C, D, A);\n int o4 = check_orientation(C, D, B);\n if (o1 != o2 && o3 != o4)\n return true;\n return false;\n}\n// checking whether 2 points are same\nbool check_similar(Point A, Point B){\n // If found similiar then we are returning false that means no quad. 
can be formed\n if (A.x == B.x && A.y == B.y)\n return false;\n // returning true for not found similar\n return true;\n}\n// Checking collinearity of three points\nbool check_collinear(Point A, Point B, Point C){\n int x1 = A.x, y1 = A.y;\n int x2 = B.x, y2 = B.y;\n int x3 = C.x, y3 = C.y;\n if ((y3 - y2) * (x2 - x1) == (y2 - y1) * (x3 - x2))\n return false;\n else\n return true;\n}\n// main function\nint main(){\n struct Point A,B,C,D;\n A.x = -2, A.y = 8;// A(-2, 8)\n B.x = -2, B.y = 0;// B(-2, 0)\n C.x = 6, C.y = -1;// C(6, -1)\n D.x = 0, D.y = 8;// D(0, 8)\n // Checking whether any 3 points are collinear\n bool flag = true;\n flag = flag & check_collinear(A, B, C);\n flag = flag & check_collinear(A, B, D);\n flag = flag & check_collinear(A, C, D);\n flag = flag & check_collinear(B, C, D);\n // If points found collinear\n if (flag == false){\n cout << \"Number of quadrilaterals possible from the given points: 0\";\n return 0;\n }\n // Checking if 2 points are same.\n bool same = true;\n same = same & check_similar(A, B);\n same = same & check_similar(A, C);\n same = same & check_similar(B, D);\n same = same & check_similar(C, D);\n same = same & check_similar(A, D);\n same = same & check_similar(B, C);\n // If a similar point exists\n if (same == false){\n cout << \"Number of quadrilaterals possible from the given points: 0\";\n return 0;\n }\n // checking whether diagonals intersect or not\n flag = true;\n if (check_Intersect(A, B, C, D))\n flag = false;\n if (check_Intersect(A, C, B, D))\n flag = false;\n if (check_Intersect(A, B, D, C))\n flag = false;\n if (flag == true)\n cout << \"Number of quadrilaterals possible from the given points: 3\";\n else\n cout << \"Number of quadrilaterals possible from the given points: 1\";\n return 0;\n}", "e": 5457, "s": 2833, "text": null }, { "code": null, "e": 5517, "s": 5457, "text": "Number of quadrilaterals possible from the given points : 1" }, { "code": null, "e": 5570, "s": 5517, "text": "This code can be understood in the following steps −" }, { "code": null, "e": 5661, "s": 5570, "text": "Checking whether any three points are collinear and if yes, then the number of quads. : 0" }, { "code": null, "e": 5839, "s": 5661, "text": "Checking whether any two points are similar and if yes, then the number of quads. : 0" }, { "code": null, "e": 6046, "s": 5839, "text": "Checking whether any line segments intersect: If yes, then the number of quads. : 1; if no, then the number of quads. : 3" }, { "code": null, "e": 6573, "s": 6244, "text": "In this article, we solved the problem of finding all possible quadrilaterals that can be formed from the given 4 points. We understood how the number of quadrilaterals depends on collinearity, intersection, and orientation. We also wrote a C++ program for the same, and we can write this program in any other language like C, Java, and Python."
} ]
Average of a stream of numbers - GeeksforGeeks
20 Oct, 2021 Difficulty Level: Rookie Given a stream of numbers, print the average (or mean) of the stream at every point. For example, let us consider the stream as 10, 20, 30, 40, 50, 60, ... Average of 1 numbers is 10.00 Average of 2 numbers is 15.00 Average of 3 numbers is 20.00 Average of 4 numbers is 25.00 Average of 5 numbers is 30.00 Average of 6 numbers is 35.00 .................. To print the mean of a stream, we need to find out how to find the average when a new number is being added to the stream. To do this, all we need is the count of numbers seen so far in the stream, previous average, and new number. Let n be the count, prev_avg be the previous average and x be the new number being added. The average after including x number can be written as (prev_avg*n + x)/(n+1). C++ C Java Python3 C# PHP Javascript #include <iostream>using namespace std; // Returns the new average after including xfloat getAvg(float prev_avg, int x, int n){ return (prev_avg * n + x) / (n + 1);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); cout <<"Average of " <<i+1 << " numbers is " << avg << endl; } return;} // Driver program to test above functionsint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;} // This code is contributed by shivanisinghss2110 #include <stdio.h> // Returns the new average after including xfloat getAvg(float prev_avg, int x, int n){ return (prev_avg * n + x) / (n + 1);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); printf("Average of %d numbers is %f \n", i + 1, avg); } return;} // Driver program to test above functionsint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;} // Java program to find average// of a stream of numbersclass GFG { // Returns the new average after including x static float getAvg(float prev_avg, float x, int n) { return (prev_avg * n + x) / (n + 1); } // Prints average of a stream of numbers static void streamAvg(float arr[], int n) { float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); System.out.printf("Average of %d numbers is %f \n", i + 1, avg); } return; } // Driver program to test above functions public static void main(String[] args) { float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = arr.length; streamAvg(arr, n); }} // This code is contributed by Smitha Dinesh Semwal # Returns the new average# after including xdef getAvg(prev_avg, x, n): return ((prev_avg * n + x) / (n + 1)); # Prints average of# a stream of numbersdef streamAvg(arr, n): avg = 0; for i in range(n): avg = getAvg(avg, arr[i], i); print("Average of ", i + 1, " numbers is ", avg); # Driver Codearr = [10, 20, 30, 40, 50, 60];n = len(arr);streamAvg(arr, n); # This code is contributed# by mits // C# program to find average// of a stream of numbersusing System; class GFG{ // Returns the new average // after including x static float getAvg(float prev_avg, float x, int n) { return (prev_avg * n + x) / (n + 1); } // Prints average of // a stream of numbers static void streamAvg(float[] arr, int n) { float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); Console.WriteLine("Average of {0} " + "numbers is {1}", i + 1, avg); } return; } // Driver Code public static void Main(String[] args) { float[] arr = {10, 
20, 30, 40, 50, 60}; int n = arr.Length; streamAvg(arr, n); }} // This code is contributed by mits <?php// PHP program for Average of// a stream of numbers // Returns the new average// after including xfunction getAvg($prev_avg, $x, $n){ return ($prev_avg * $n + $x) / ($n + 1);} // Prints average of a// stream of numbersfunction streamAvg($arr, $n){ $avg = 0; for ($i = 0; $i < $n; $i++) { $avg = getAvg($avg, $arr[$i], $i); echo "Average of ",$i + 1, "numbers is " ,$avg,"\n"; } return;} // Driver Code $arr = array(10, 20, 30, 40, 50, 60); $n = sizeof($arr); streamAvg($arr, $n); // This code is contributed by aj_36?> <script>// javascript program to find average// of a stream of numbers // Returns the new average after including xfunction getAvg( prev_avg, x, n){ return (prev_avg * n + x) / (n + 1);} // Prints average of a stream of numbersfunction streamAvg( arr, n){ let avg = 0; for (let i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); document.write("Average of "+(i + 1) +" numbers is "+ avg.toFixed(6) + "<br/>"); } return;} // Driver program to test above functions let arr = [10, 20, 30, 40, 50, 60 ]; let n = arr.length; streamAvg(arr, n); // This code is contributed by todaysgaurav </script> Output : Average of 1 numbers is 10.000000 Average of 2 numbers is 15.000000 Average of 3 numbers is 20.000000 Average of 4 numbers is 25.000000 Average of 5 numbers is 30.000000 Average of 6 numbers is 35.000000 Time Complexity: O(n) Auxiliary Space: O(1) The above function getAvg() can be optimized using the following changes. We can avoid the use of prev_avg and the number of elements by using static variables (Assuming that only this function is called for an average of stream). Following is the optimized version. C++ C Java Python3 C# PHP Javascript #include <bits/stdc++.h>using namespace std; // Returns the new average after including xfloat getAvg(int x){ static int sum, n; sum += x; return (((float)sum) / ++n);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(arr[i]); cout<<"Average of "<<i+1<<" numbers is "<<fixed<<setprecision(1)<<avg<<endl; } return;} // Driver codeint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;} // This code is contributed by rathbhupendra #include <stdio.h> // Returns the new average after including xfloat getAvg(int x){ static int sum, n; sum += x; return (((float)sum) / ++n);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(arr[i]); printf("Average of %d numbers is %f \n", i + 1, avg); } return;} // Driver program to test above functionsint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;} // Java program to return// Average of a stream of numbersclass GFG{static int sum, n; // Returns the new average// after including xstatic float getAvg(int x){ sum += x; return (((float)sum) / ++n);} // Prints average of a// stream of numbersstatic void streamAvg(float[] arr, int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg((int)arr[i]); System.out.println("Average of "+ (i + 1) + " numbers is " + avg); } return;} // Driver Codepublic static void main(String[] args){ float[] arr = new float[]{ 10, 20, 30, 40, 50, 60 }; int n = arr.length; streamAvg(arr, n);}} // This code is contributed by mits # Returns the new average# after including xdef getAvg(x, n, 
sum): sum = sum + x; return float(sum) / n; # Prints average of a# stream of numbersdef streamAvg(arr, n): avg = 0; sum = 0; for i in range(n): avg = getAvg(arr[i], i + 1, sum); sum = avg * (i + 1); print("Average of ", end = ""); print(i + 1, end = ""); print(" numbers is ", end = ""); print(avg); return; # Driver Codearr= [ 10, 20, 30, 40, 50, 60 ];n = len(arr);streamAvg(arr,n); # This code is contributed by mits using System; class GFG{static int sum, n; // Returns the new average// after including xstatic float getAvg(int x){ sum += x; return (((float)sum) / ++n);} // Prints average of a// stream of numbersstatic void streamAvg(float[] arr, int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg((int)arr[i]); Console.WriteLine("Average of {0} numbers " + "is {1}", (i + 1), avg); } return;} // Driver Codestatic int Main(){ float[] arr = new float[]{ 10, 20, 30, 40, 50, 60 }; int n = arr.Length; streamAvg(arr, n); return 0;}} // This code is contributed by mits <?php// Returns the new average// after including xfunction getAvg($x){ static $sum; static $n; $sum += $x; return (((float)$sum) / ++$n);} // Prints average of// a stream of numbersfunction streamAvg($arr, $n){ for ($i = 0; $i < $n; $i++) { $avg = getAvg($arr[$i]); echo "Average of " . ($i + 1) . " numbers is ".$avg." \n"; } return;} // Driver Code$arr = array(10, 20, 30, 40, 50, 60);$n = sizeof($arr) / sizeof($arr[0]);streamAvg($arr, $n); // This code is contributed by mits?> <script>// javascript program to return// Average of a stream of numbers var sum=0, n=0; // Returns the new average // after including x function getAvg(x) { sum += x; n++; return (sum / n); } // Prints average of a // stream of numbers function streamAvg( arr , m) { var avg = 0; for (i = 0; i < m; i++) { avg = getAvg(parseInt(arr[i])); document.write("Average of " + (i + 1) + " numbers is " + avg.toFixed(1)+"<br/>"); } return; } // Driver Code var arr = [ 10, 20, 30, 40, 50, 60 ]; var m = arr.length; streamAvg(arr, m); // This code is contributed by todaysgaurav</script> Output: Average of 1 numbers is 10.0 Average of 2 numbers is 15.0 Average of 3 numbers is 20.0 Average of 4 numbers is 25.0 Average of 5 numbers is 30.0 Average of 6 numbers is 35.0 Time Complexity: O(n) Auxiliary Space: O(1) Thanks to Abhijeet Deshpande for suggesting this optimized version. Related article: Program for an average of an array (Iterative and Recursive).
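One caveat worth adding (an observation, not part of the original article): the optimized version accumulates a running sum in an integer, which can overflow for long streams of large values. A common remedy is to update the mean incrementally instead of keeping a sum; a minimal C sketch of that idea −

// Incremental mean: new_avg = prev_avg + (x - prev_avg) / (n + 1)
// Algebraically equal to (prev_avg * n + x) / (n + 1), but keeps no
// running sum, so the accumulator cannot overflow.
float getAvgIncremental(float prev_avg, int x, int n)
{
    return prev_avg + ((float)x - prev_avg) / (n + 1);
}

This drops into the same streamAvg() loop used by the first version of the program.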
[ { "code": null, "e": 24614, "s": 24586, "text": "\n20 Oct, 2021" }, { "code": null, "e": 24796, "s": 24614, "text": "Difficulty Level: Rookie Given a stream of numbers, print the average (or mean) of the stream at every point. For example, let us consider the stream as 10, 20, 30, 40, 50, 60, ... " }, { "code": null, "e": 25009, "s": 24796, "text": " Average of 1 numbers is 10.00\n Average of 2 numbers is 15.00\n Average of 3 numbers is 20.00\n Average of 4 numbers is 25.00\n Average of 5 numbers is 30.00\n Average of 6 numbers is 35.00\n .................." }, { "code": null, "e": 25414, "s": 25011, "text": "To print the mean of a stream, we need to find out how to find the average when a new number is being added to the stream. To do this, all we need is the count of numbers seen so far in the stream, previous average, and new number. Let n be the count, prev_avg be the previous average and x be the new number being added. The average after including x number can be written as (prev_avg*n + x)/(n+1). " }, { "code": null, "e": 25418, "s": 25414, "text": "C++" }, { "code": null, "e": 25420, "s": 25418, "text": "C" }, { "code": null, "e": 25425, "s": 25420, "text": "Java" }, { "code": null, "e": 25433, "s": 25425, "text": "Python3" }, { "code": null, "e": 25436, "s": 25433, "text": "C#" }, { "code": null, "e": 25440, "s": 25436, "text": "PHP" }, { "code": null, "e": 25451, "s": 25440, "text": "Javascript" }, { "code": "#include <iostream>using namespace std; // Returns the new average after including xfloat getAvg(float prev_avg, int x, int n){ return (prev_avg * n + x) / (n + 1);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); cout <<\"Average of \" <<i+1 << \" numbers is \" << avg << endl; } return;} // Driver program to test above functionsint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;} // This code is contributed by shivanisinghss2110", "e": 26095, "s": 25451, "text": null }, { "code": "#include <stdio.h> // Returns the new average after including xfloat getAvg(float prev_avg, int x, int n){ return (prev_avg * n + x) / (n + 1);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); printf(\"Average of %d numbers is %f \\n\", i + 1, avg); } return;} // Driver program to test above functionsint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;}", "e": 26661, "s": 26095, "text": null }, { "code": "// Java program to find average// of a stream of numbersclass GFG { // Returns the new average after including x static float getAvg(float prev_avg, float x, int n) { return (prev_avg * n + x) / (n + 1); } // Prints average of a stream of numbers static void streamAvg(float arr[], int n) { float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); System.out.printf(\"Average of %d numbers is %f \\n\", i + 1, avg); } return; } // Driver program to test above functions public static void main(String[] args) { float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = arr.length; streamAvg(arr, n); }} // This code is contributed by Smitha Dinesh Semwal", "e": 27497, "s": 26661, "text": null }, { "code": "# Returns the new average# after including xdef getAvg(prev_avg, x, n): return ((prev_avg * n + x) / (n + 1)); # Prints average of# a stream of 
numbersdef streamAvg(arr, n): avg = 0; for i in range(n): avg = getAvg(avg, arr[i], i); print(\"Average of \", i + 1, \" numbers is \", avg); # Driver Codearr = [10, 20, 30, 40, 50, 60];n = len(arr);streamAvg(arr, n); # This code is contributed# by mits", "e": 27956, "s": 27497, "text": null }, { "code": "// C# program to find average// of a stream of numbersusing System; class GFG{ // Returns the new average // after including x static float getAvg(float prev_avg, float x, int n) { return (prev_avg * n + x) / (n + 1); } // Prints average of // a stream of numbers static void streamAvg(float[] arr, int n) { float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); Console.WriteLine(\"Average of {0} \" + \"numbers is {1}\", i + 1, avg); } return; } // Driver Code public static void Main(String[] args) { float[] arr = {10, 20, 30, 40, 50, 60}; int n = arr.Length; streamAvg(arr, n); }} // This code is contributed by mits", "e": 28861, "s": 27956, "text": null }, { "code": "<?php// PHP program for Average of// a stream of numbers // Returns the new average// after including xfunction getAvg($prev_avg, $x, $n){ return ($prev_avg * $n + $x) / ($n + 1);} // Prints average of a// stream of numbersfunction streamAvg($arr, $n){ $avg = 0; for ($i = 0; $i < $n; $i++) { $avg = getAvg($avg, $arr[$i], $i); echo \"Average of \",$i + 1, \"numbers is \" ,$avg,\"\\n\"; } return;} // Driver Code $arr = array(10, 20, 30, 40, 50, 60); $n = sizeof($arr); streamAvg($arr, $n); // This code is contributed by aj_36?>", "e": 29488, "s": 28861, "text": null }, { "code": "<script>// javascript program to find average// of a stream of numbers // Returns the new average after including xfunction getAvg( prev_avg, x, n){ return (prev_avg * n + x) / (n + 1);} // Prints average of a stream of numbersfunction streamAvg( arr, n){ let avg = 0; for (let i = 0; i < n; i++) { avg = getAvg(avg, arr[i], i); document.write(\"Average of \"+(i + 1) +\" numbers is \"+ avg.toFixed(6) + \"<br/>\"); } return;} // Driver program to test above functions let arr = [10, 20, 30, 40, 50, 60 ]; let n = arr.length; streamAvg(arr, n); // This code is contributed by todaysgaurav </script>", "e": 30125, "s": 29488, "text": null }, { "code": null, "e": 30136, "s": 30125, "text": "Output : " }, { "code": null, "e": 30346, "s": 30136, "text": "Average of 1 numbers is 10.000000 \nAverage of 2 numbers is 15.000000 \nAverage of 3 numbers is 20.000000 \nAverage of 4 numbers is 25.000000 \nAverage of 5 numbers is 30.000000 \nAverage of 6 numbers is 35.000000 " }, { "code": null, "e": 30368, "s": 30346, "text": "Time Complexity: O(n)" }, { "code": null, "e": 30390, "s": 30368, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 30659, "s": 30390, "text": "The above function getAvg() can be optimized using the following changes. We can avoid the use of prev_avg and the number of elements by using static variables (Assuming that only this function is called for an average of stream). Following is the optimized version. 
" }, { "code": null, "e": 30663, "s": 30659, "text": "C++" }, { "code": null, "e": 30665, "s": 30663, "text": "C" }, { "code": null, "e": 30670, "s": 30665, "text": "Java" }, { "code": null, "e": 30678, "s": 30670, "text": "Python3" }, { "code": null, "e": 30681, "s": 30678, "text": "C#" }, { "code": null, "e": 30685, "s": 30681, "text": "PHP" }, { "code": null, "e": 30696, "s": 30685, "text": "Javascript" }, { "code": "#include <bits/stdc++.h>using namespace std; // Returns the new average after including xfloat getAvg(int x){ static int sum, n; sum += x; return (((float)sum) / ++n);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(arr[i]); cout<<\"Average of \"<<i+1<<\" numbers is \"<<fixed<<setprecision(1)<<avg<<endl; } return;} // Driver codeint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;} // This code is contributed by rathbhupendra", "e": 31329, "s": 30696, "text": null }, { "code": "#include <stdio.h> // Returns the new average after including xfloat getAvg(int x){ static int sum, n; sum += x; return (((float)sum) / ++n);} // Prints average of a stream of numbersvoid streamAvg(float arr[], int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg(arr[i]); printf(\"Average of %d numbers is %f \\n\", i + 1, avg); } return;} // Driver program to test above functionsint main(){ float arr[] = { 10, 20, 30, 40, 50, 60 }; int n = sizeof(arr) / sizeof(arr[0]); streamAvg(arr, n); return 0;}", "e": 31892, "s": 31329, "text": null }, { "code": "// Java program to return// Average of a stream of numbersclass GFG{static int sum, n; // Returns the new average// after including xstatic float getAvg(int x){ sum += x; return (((float)sum) / ++n);} // Prints average of a// stream of numbersstatic void streamAvg(float[] arr, int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg((int)arr[i]); System.out.println(\"Average of \"+ (i + 1) + \" numbers is \" + avg); } return;} // Driver Codepublic static void main(String[] args){ float[] arr = new float[]{ 10, 20, 30, 40, 50, 60 }; int n = arr.length; streamAvg(arr, n);}} // This code is contributed by mits", "e": 32637, "s": 31892, "text": null }, { "code": "# Returns the new average# after including xdef getAvg(x, n, sum): sum = sum + x; return float(sum) / n; # Prints average of a# stream of numbersdef streamAvg(arr, n): avg = 0; sum = 0; for i in range(n): avg = getAvg(arr[i], i + 1, sum); sum = avg * (i + 1); print(\"Average of \", end = \"\"); print(i + 1, end = \"\"); print(\" numbers is \", end = \"\"); print(avg); return; # Driver Codearr= [ 10, 20, 30, 40, 50, 60 ];n = len(arr);streamAvg(arr,n); # This code is contributed by mits", "e": 33183, "s": 32637, "text": null }, { "code": "using System; class GFG{static int sum, n; // Returns the new average// after including xstatic float getAvg(int x){ sum += x; return (((float)sum) / ++n);} // Prints average of a// stream of numbersstatic void streamAvg(float[] arr, int n){ float avg = 0; for (int i = 0; i < n; i++) { avg = getAvg((int)arr[i]); Console.WriteLine(\"Average of {0} numbers \" + \"is {1}\", (i + 1), avg); } return;} // Driver Codestatic int Main(){ float[] arr = new float[]{ 10, 20, 30, 40, 50, 60 }; int n = arr.Length; streamAvg(arr, n); return 0;}} // This code is contributed by mits", "e": 33863, "s": 33183, "text": null }, { "code": "<?php// Returns the new average// after including xfunction getAvg($x){ 
static $sum; static $n; $sum += $x; return (((float)$sum) / ++$n);} // Prints average of// a stream of numbersfunction streamAvg($arr, $n){ for ($i = 0; $i < $n; $i++) { $avg = getAvg($arr[$i]); echo \"Average of \" . ($i + 1) . \" numbers is \".$avg.\" \\n\"; } return;} // Driver Code$arr = array(10, 20, 30, 40, 50, 60);$n = sizeof($arr) / sizeof($arr[0]);streamAvg($arr, $n); // This code is contributed by mits?>", "e": 34408, "s": 33863, "text": null }, { "code": "<script>// javascript program to return// Average of a stream of numbers var sum=0, n=0; // Returns the new average // after including x function getAvg(x) { sum += x; n++; return (sum / n); } // Prints average of a // stream of numbers function streamAvg( arr , m) { var avg = 0; for (i = 0; i < m; i++) { avg = getAvg(parseInt(arr[i])); document.write(\"Average of \" + (i + 1) + \" numbers is \" + avg.toFixed(1)+\"<br/>\"); } return; } // Driver Code var arr = [ 10, 20, 30, 40, 50, 60 ]; var m = arr.length; streamAvg(arr, m); // This code is contributed by todaysgaurav</script>", "e": 35119, "s": 34408, "text": null }, { "code": null, "e": 35129, "s": 35119, "text": "Output: " }, { "code": null, "e": 35303, "s": 35129, "text": "Average of 1 numbers is 10.0\nAverage of 2 numbers is 15.0\nAverage of 3 numbers is 20.0\nAverage of 4 numbers is 25.0\nAverage of 5 numbers is 30.0\nAverage of 6 numbers is 35.0" }, { "code": null, "e": 35325, "s": 35303, "text": "Time Complexity: O(n)" }, { "code": null, "e": 35347, "s": 35325, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 35618, "s": 35347, "text": "Thanks to Abhijeet Deshpande for suggesting this optimized version. Related article: Program for an average of an array (Iterative and Recursive)Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 35639, "s": 35618, "text": "Smitha Dinesh Semwal" }, { "code": null, "e": 35645, "s": 35639, "text": "jit_t" }, { "code": null, "e": 35658, "s": 35645, "text": "Mithun Kumar" }, { "code": null, "e": 35672, "s": 35658, "text": "rathbhupendra" }, { "code": null, "e": 35685, "s": 35672, "text": "todaysgaurav" }, { "code": null, "e": 35695, "s": 35685, "text": "subham348" }, { "code": null, "e": 35711, "s": 35695, "text": "subhammahato348" }, { "code": null, "e": 35730, "s": 35711, "text": "shivanisinghss2110" }, { "code": null, "e": 35746, "s": 35730, "text": "rishavmahato348" }, { "code": null, "e": 35759, "s": 35746, "text": "array-stream" }, { "code": null, "e": 35766, "s": 35759, "text": "Arrays" }, { "code": null, "e": 35779, "s": 35766, "text": "Mathematical" }, { "code": null, "e": 35786, "s": 35779, "text": "Arrays" }, { "code": null, "e": 35799, "s": 35786, "text": "Mathematical" }, { "code": null, "e": 35897, "s": 35799, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 35965, "s": 35897, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 36013, "s": 35965, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 36057, "s": 36013, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 36089, "s": 36057, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 36112, "s": 36089, "text": "Introduction to Arrays" }, { "code": null, "e": 36172, "s": 36112, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 36187, "s": 36172, "text": "C++ Data Types" }, { "code": null, "e": 36230, "s": 36187, "text": "Set in C++ Standard Template Library (STL)" }, { "code": null, "e": 36249, "s": 36230, "text": "Coin Change | DP-7" } ]
How to access nested Python dictionary items via a list of keys?
The easiest and most readable way to access nested properties in a Python dict is to loop over the list of keys with a for loop, descending one level of the dictionary on each step until the list is exhausted.

def getFromDict(dataDict, mapList):
    # Walk down one nesting level per key
    for k in mapList:
        dataDict = dataDict[k]
    return dataDict

a = {
    'foo': 45,
    'bar': {
        'baz': 100,
        'tru': "Hello"
    }
}
print(getFromDict(a, ["bar", "baz"]))

This will give the output −

100
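The same traversal can also be written as a fold over the key list with functools.reduce and operator.getitem from the standard library. A minimal sketch using the same sample dictionary:

from functools import reduce
from operator import getitem

def get_from_dict(data_dict, map_list):
    # reduce applies getitem repeatedly: data_dict[k1][k2]...
    return reduce(getitem, map_list, data_dict)

a = {'foo': 45, 'bar': {'baz': 100, 'tru': "Hello"}}
print(get_from_dict(a, ["bar", "baz"]))  # 100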
[ { "code": null, "e": 1230, "s": 1062, "text": "The easiest and most readable way to access nested properties in a Python dict is to use for loop and loop over each item while getting the next value, until the end. " }, { "code": null, "e": 1428, "s": 1230, "text": "def getFromDict(dataDict, mapList):\nfor k in mapList: dataDict = dataDict[k]\nreturn dataDict\na = {\n 'foo': 45,'bar': {\n 'baz': 100,'tru': \"Hello\"\n }\n}\nprint(getFromDict(a, [\"bar\", \"baz\"]))" }, { "code": null, "e": 1456, "s": 1428, "text": "This will give the output −" }, { "code": null, "e": 1460, "s": 1456, "text": "100" } ]
Logo - Strings
Any sequence of alphanumeric characters – for example, “america” or “emp1234” – is a string. Counting the characters is the most basic of all string processes. The answer to the question stringlength "abc12ef is given by the following procedure −

to stringlength :s
   make "inputstring :s
   make "count 0
   while [not emptyp :s] [
      make "count :count + 1
      print first :s
      make "s butfirst :s
   ]
   print (sentence :inputstring "has :count "letters)
end

In the above procedure, ‘s’ is the variable containing the input string. Variable inputstring contains a copy of the input string. Variable count is initialized with 0. In the while loop, the condition checks whether the string has become empty or not. On each pass through the loop, the count variable is increased by 1 to hold the length count. The statement print first :s prints only the first character of the string stored in ‘s’.

The statement make "s butfirst :s retrieves the sub-string excluding the first character. After exiting from the while-loop, we print the character count, i.e. the length of the input string. Following is the execution and output of the code.
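For comparison, the same count-until-empty loop can be sketched in Python (function and variable names are illustrative):

def string_length(s):
    input_string = s
    count = 0
    while s:               # loop until the string is empty
        count += 1
        print(s[0])        # print the first character
        s = s[1:]          # drop the first character, like butfirst
    print(input_string, "has", count, "letters")

string_length("abc12ef")   # ends with: abc12ef has 7 letters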
[ { "code": null, "e": 2097, "s": 1834, "text": "Any sequence of alpha-numeric characters, for example – “america”, “emp1234”, etc. are examples of a string. Counting the characters is the most basic of all string processes. The answer to the question stringlength \"abc12ef is given by the following procedure −" }, { "code": null, "e": 2323, "s": 2097, "text": "to stringlength :s\n make \"inputstring :s\n make \"count 0\n while [not emptyp :s] [\n make \"count :count + 1\n print first :s\n make \"s butfirst :s\n ]\n print (sentence :inputstring \"has :count \"letters)\nend" }, { "code": null, "e": 2750, "s": 2323, "text": "In the above procedure –‘s’ is the variable containing the input string. Variable inputstring contains the copy of the input string. Variable count is initialized with 0. In the while loop, the condition checks whether the string has become empty or not. In each loop count, a variable is being increased by 1 to hold the length count. The statement print first :s, prints the first character only of the string stored in ‘s’." }, { "code": null, "e": 2998, "s": 2750, "text": "The statement make \"s butfirst :s, retrieves the sub-string excluding the first character. After exiting from the while-loop, we have printed the character count or the length of the input string. Following is the execution and output of the code." }, { "code": null, "e": 3031, "s": 2998, "text": "\n 48 Lectures \n 6 hours \n" }, { "code": null, "e": 3050, "s": 3031, "text": " Arnab Chakraborty" }, { "code": null, "e": 3085, "s": 3050, "text": "\n 38 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3097, "s": 3085, "text": " Rob Cubbon" }, { "code": null, "e": 3132, "s": 3097, "text": "\n 81 Lectures \n 7.5 hours \n" }, { "code": null, "e": 3142, "s": 3132, "text": " YouAccel" }, { "code": null, "e": 3173, "s": 3142, "text": "\n 8 Lectures \n 34 mins\n" }, { "code": null, "e": 3188, "s": 3173, "text": " Yash Rajoliya" }, { "code": null, "e": 3195, "s": 3188, "text": " Print" }, { "code": null, "e": 3206, "s": 3195, "text": " Add Notes" } ]
Longest Palindromic Substring
In a given string, we have to find a substring which is a palindrome and is the longest.

To get the longest palindromic substring, we have to solve many subproblems, some of which overlap and need to be solved multiple times. For that reason, dynamic programming is helpful: using a table, we can store the results of previous subproblems and simply reuse them to generate further results.

Input:
A String. Say “thisispalapsiti”
Output:
The palindrome substring and the length of the palindrome.
Longest palindrome substring is: ispalapsi
Length is: 9

findLongPalSubstr(str)

Input − The main string.

Output − Longest palindromic substring and its length.

Begin
   n := length of the given string
   create an n x n table named palTab to store true or false values
   fill palTab with false values
   maxLen := 1

   for i := 0 to n-1, do
      palTab[i, i] := true //as it is a palindrome of length 1
   done

   start := 0
   for i := 0 to n-2, do
      if str[i] = str[i+1], then
         palTab[i, i+1] := true
         start := i
         maxLen := 2
   done

   for k := 3 to n, do
      for i := 0 to n-k, do
         j := i + k – 1
         if palTab[i+1, j-1] and str[i] = str[j], then
            palTab[i, j] := true
            if k > maxLen, then
               start := i
               maxLen := k
      done
   done
   display substring from start to maxLen from str, and return maxLen
End

#include <iostream>
using namespace std;

int findLongPalSubstr(string str) {
   int n = str.size(); // get length of input string
   bool palCheckTab[n][n]; // true when substring from i to j is a palindrome
   for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
         palCheckTab[i][j] = false; // initially set all values to false
   int maxLength = 1;
   for (int i = 0; i < n; ++i)
      palCheckTab[i][i] = true; // every substring of length 1 is a palindrome
   int start = 0;
   for (int i = 0; i < n-1; ++i) {
      if (str[i] == str[i+1]) { // for a two-character substring both characters are equal
         palCheckTab[i][i+1] = true;
         start = i;
         maxLength = 2;
      }
   }
   for (int k = 3; k <= n; ++k) { // for substrings with length 3 to n
      for (int i = 0; i < n-k+1; ++i) {
         int j = i + k - 1;
         if (palCheckTab[i+1][j-1] && str[i] == str[j]) { // ends match and inner substring is a palindrome
            palCheckTab[i][j] = true;
            if (k > maxLength) {
               start = i;
               maxLength = k;
            }
         }
      }
   }
   cout << "Longest palindrome substring is: " << str.substr(start, maxLength) << endl;
   return maxLength; // return length
}

int main() {
   char str[] = "thisispalapsiti";
   cout << "Length is: " << findLongPalSubstr(str);
}

Longest palindrome substring is: ispalapsi
Length is: 9
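A minimal Python translation of the same DP table, written here only to make the index bookkeeping easier to follow:

def longest_pal_substr(s):
    n = len(s)
    # pal[i][j] is True when s[i..j] is a palindrome
    pal = [[False] * n for _ in range(n)]
    start, max_len = 0, 1
    for i in range(n):              # length-1 substrings
        pal[i][i] = True
    for i in range(n - 1):          # length-2 substrings
        if s[i] == s[i + 1]:
            pal[i][i + 1] = True
            start, max_len = i, 2
    for k in range(3, n + 1):       # lengths 3..n
        for i in range(n - k + 1):
            j = i + k - 1
            if pal[i + 1][j - 1] and s[i] == s[j]:
                pal[i][j] = True
                if k > max_len:
                    start, max_len = i, k
    print("Longest palindrome substring is:", s[start:start + max_len])
    return max_len

print("Length is:", longest_pal_substr("thisispalapsiti"))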
[ { "code": null, "e": 1151, "s": 1062, "text": "In a given string, we have to find a substring, which is a palindrome and it is longest." }, { "code": null, "e": 1488, "s": 1151, "text": "To get the longest palindromic substring, we have to solve many subproblems, some of the subproblems are overlapping. They are needed to be solved for multiple times. For that reason, the Dynamic programming is helpful. Using a table, we can store the result of the previous subproblems, and simply use them to generate further results." }, { "code": null, "e": 1650, "s": 1488, "text": "Input:\nA String. Say “thisispalapsiti”\nOutput:\nThe palindrome substring and the length of the palindrome.\nLongest palindrome substring is: ispalapsi\nLength is: 9" }, { "code": null, "e": 1673, "s": 1650, "text": "findLongPalSubstr(str)" }, { "code": null, "e": 1698, "s": 1673, "text": "Input − The main string." }, { "code": null, "e": 1753, "s": 1698, "text": "Output − Longest palindromic substring and its length." }, { "code": null, "e": 2505, "s": 1753, "text": "Begin\n n := length of the given string\n create a n x n table named palTab to store true or false value\n fill patTab with false values\n maxLen := 1\n\n for i := 0 to n-1, do\n patTab[i, i] = true //as it is palindrome of length 1\n done\n\n start := 0\n for i := 0 to n-2, do\n if str[i] = str[i-1], then\n palTab[i, i+1] := true\n start := i\n maxLen := 2\n done\n\n for k := 3 to n, do\n for i := 0 to n-k, do\n j := i + k – 1\n if palTab[i+1, j-1] and str[i] = str[j], then\n palTab[i, j] := true\n if k > maxLen, then\n start := i\n maxLen := k\n done\n done\n display substring from start to maxLen from str, and return maxLen\n End" }, { "code": null, "e": 3908, "s": 2505, "text": "#include<iostream>\nusing namespace std;\n\nint findLongPalSubstr(string str) {\n int n = str.size(); // get length of input string\n \n bool palCheckTab[n][n]; //true when substring from i to j is palindrome\n \n for(int i = 0; i<n; i++)\n for(int j = 0; j<n; j++)\n palCheckTab[i][j] = false; //initially set all values to false\n \n int maxLength = 1;\n \n for (int i = 0; i < n; ++i)\n palCheckTab[i][i] = true; //as all substring of length 1 is palindrome\n \n int start = 0;\n for (int i = 0; i < n-1; ++i) {\n if (str[i] == str[i+1]) { //for two character substring both characters are equal\n palCheckTab[i][i+1] = true;\n start = i;\n maxLength = 2;\n }\n }\n \n for (int k = 3; k <= n; ++k) { //for substrings with length 3 to n\n for (int i = 0; i < n-k+1 ; ++i) {\n int j = i + k - 1;\n if (palCheckTab[i+1][j-1] && str[i] == str[j]) { //if (i,j) and (i+1, j-1) are same, then check palindrome\n palCheckTab[i][j] = true;\n if (k > maxLength) {\n start = i;\n maxLength = k;\n }\n }\n }\n }\n cout << \"Longest palindrome substring is: \" << str.substr(start, maxLength) << endl;\n return maxLength; // return length\n}\n \nint main() {\n char str[] = \"thisispalapsiti\";\n cout << \"Length is: \"<< findLongPalSubstr(str);\n}" }, { "code": null, "e": 3964, "s": 3908, "text": "Longest palindrome substring is: ispalapsi\nLength is: 9" } ]
HTML | <form> enctype Attribute - GeeksforGeeks
10 Jun, 2019

The HTML <form> enctype attribute specifies how the form data should be encoded when it is submitted to the server. This attribute can be used only if method = "POST".

Syntax:

<form enctype="value">

Attribute Values: This attribute accepts three values, which are listed below:

application/x-www-form-urlencoded: The default value. It encodes all the characters before they are sent to the server, converting spaces into + symbols and special characters into their hex values.

multipart/form-data: This value does not encode any character.

text/plain: This value converts spaces into + symbols, but special characters are not converted.

Example: This example illustrates the use of the enctype attribute in the <form> element.

<!DOCTYPE html>
<html>
<head>
    <title>Form enctype attribute</title>
</head>
<body style="text-align: center">
    <h1 style="color: green">GeeksforGeeks</h1>
    <h2>Form enctype Attribute</h2>
    <form action="#" method="post" enctype="multipart/form-data">
        First name: <input type="text" name="fname">
        <br>
        Last name: <input type="text" name="lname">
        <br>
        Address: <input type="text" name="Address">
        <br>
        <input type="submit" value="Submit">
    </form>
</body>
</html>

Output:

Supported Browsers: The browsers that support the <form> enctype attribute are listed below:

Google Chrome
Internet Explorer
Firefox
Apple Safari
Opera

HTML-Attributes HTML Web Technologies HTML
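To see how the first two encodings differ on the wire from the client side, here is a small illustrative Python sketch using the third-party requests library; the URL is a placeholder, not a real endpoint:

import requests

url = "https://example.com/submit"  # placeholder endpoint
data = {"fname": "first name", "lname": "last name"}

# application/x-www-form-urlencoded (the default): spaces and special
# characters in the body are plus-/percent-encoded
r1 = requests.post(url, data=data)

# multipart/form-data: each field travels as its own unencoded part
r2 = requests.post(url, data=data, files={"upload": ("note.txt", b"hello")})

print(r1.request.headers["Content-Type"])  # application/x-www-form-urlencoded
print(r2.request.headers["Content-Type"])  # multipart/form-data; boundary=...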
[ { "code": null, "e": 24503, "s": 24475, "text": "\n10 Jun, 2019" }, { "code": null, "e": 24704, "s": 24503, "text": "The HTML <form> enctype Attribute is used to specify that data that will be present in form should be encoded when submitting to the server. This type of attribute can be used only if method = “POST”." }, { "code": null, "e": 24712, "s": 24704, "text": "Syntax:" }, { "code": null, "e": 24736, "s": 24712, "text": "<form enctype=\"value\"> " }, { "code": null, "e": 24813, "s": 24736, "text": "Attribute Value: This attribute contains three value which are listed below:" }, { "code": null, "e": 25006, "s": 24813, "text": "application/x-www-form-urlencoded: It is the default value. It encodes all the characters before sent to the server. It converts spaces into + symbols and special character into its hex value." }, { "code": null, "e": 25069, "s": 25006, "text": "multipart/form-data: This value does not encode any character." }, { "code": null, "e": 25164, "s": 25069, "text": "text/plain: This value convert spaces into + symbols but special characters are not converted." }, { "code": null, "e": 25246, "s": 25164, "text": "Example: This Example illustrates the use of enctype attribute in <form> element." }, { "code": "<!DOCTYPE html><html> <head> <title>Form enctype attribute</title></head> <body style=\"text-align: center\"> <h1 style=\"color: green\">GeeksforGeeks</h1> <h2>Form enctype Attribute</h2> <form action=\"#\" method=\"post\" enctype=\"multipart/form-data\"> First name: <input type=\"text\" name=\"fname\"> <br> Last name: <input type=\"text\" name=\"lname\"> <br> Address: <input type=\"text\" name=\"Address\"> <br> <input type=\"submit\" value=\"Submit\"> </form></body> </html>", "e": 25851, "s": 25246, "text": null }, { "code": null, "e": 25860, "s": 25851, "text": "Output :" }, { "code": null, "e": 25949, "s": 25860, "text": "Supported Browsers: The browsers supported by <form> enctype Attribute are listed below:" }, { "code": null, "e": 25963, "s": 25949, "text": "Google Chrome" }, { "code": null, "e": 25981, "s": 25963, "text": "Internet Explorer" }, { "code": null, "e": 25989, "s": 25981, "text": "Firefox" }, { "code": null, "e": 26002, "s": 25989, "text": "Apple Safari" }, { "code": null, "e": 26008, "s": 26002, "text": "Opera" }, { "code": null, "e": 26145, "s": 26008, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." }, { "code": null, "e": 26161, "s": 26145, "text": "HTML-Attributes" }, { "code": null, "e": 26166, "s": 26161, "text": "HTML" }, { "code": null, "e": 26183, "s": 26166, "text": "Web Technologies" }, { "code": null, "e": 26188, "s": 26183, "text": "HTML" }, { "code": null, "e": 26286, "s": 26188, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26295, "s": 26286, "text": "Comments" }, { "code": null, "e": 26308, "s": 26295, "text": "Old Comments" }, { "code": null, "e": 26332, "s": 26308, "text": "REST API (Introduction)" }, { "code": null, "e": 26369, "s": 26332, "text": "Design a web page using HTML and CSS" }, { "code": null, "e": 26398, "s": 26369, "text": "Form validation using jQuery" }, { "code": null, "e": 26445, "s": 26398, "text": "How to place text on image using HTML and CSS?" }, { "code": null, "e": 26507, "s": 26445, "text": "How to auto-resize an image to fit a div container using CSS?" 
}, { "code": null, "e": 26563, "s": 26507, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 26596, "s": 26563, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 26639, "s": 26596, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 26700, "s": 26639, "text": "Difference between var, let and const keywords in JavaScript" } ]
How to append data to a file in Java?
In most scenarios, if you try to write contents to a file using the classes of the java.io package, the file will be overwritten, i.e. the data existing in the file is erased and the new data is added to it.

But in certain scenarios, like logging exceptions into a file (without using logger frameworks), you need to append data (a message) on the next line of the file.

You can do this using the Files class of the java.nio package. This class provides a method named write() which accepts

An object of the class Path, representing a file.

A byte array holding the data to be written to the file.

A varargs parameter of type OpenOption (interface); as a value to it you can pass one of the elements of the StandardOpenOption enumeration, which contains 10 options namely APPEND, CREATE, CREATE_NEW, DELETE_ON_CLOSE, DSYNC, READ, SPARSE, SYNC, TRUNCATE_EXISTING, WRITE.

You can invoke this method by passing the path of the file, a byte array containing the data to be appended and the option StandardOpenOption.APPEND. Note that APPEND alone assumes the file already exists; include StandardOpenOption.CREATE as well if it may not.

The following Java program has an array storing six integer values. We are letting the user choose two elements from the array (indices of the elements) and performing division between them. We are wrapping this code in a try block with three catch blocks catching ArithmeticException, InputMismatchException and ArrayIndexOutOfBoundsException. In each of them we are invoking the writeToFile() method.

This method accepts an exception object and appends it to a file using the write() method of the Files class.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.time.LocalDateTime;
import java.util.Arrays;
import java.util.InputMismatchException;
import java.util.Scanner;

public class LoggingToFile {
   private static void writeToFile(Exception e) throws IOException {
      //Retrieving the log file
      Path logFile = Paths.get("ExceptionLog.txt");
      //Preparing the data to be logged
      byte bytes[] = ("\r\n" + LocalDateTime.now() + ": " + e.toString()).getBytes();
      //Appending the exception to the file
      Files.write(logFile, bytes, StandardOpenOption.APPEND);
      System.out.println("Exception logged to your file");
   }
   public static void main(String[] args) throws IOException {
      Scanner sc = new Scanner(System.in);
      int[] arr = {10, 20, 30, 2, 0, 8};
      System.out.println("Array: " + Arrays.toString(arr));
      System.out.println("Choose numerator and denominator (not 0) from this array (enter positions 0 to 5)");
      try {
         int a = sc.nextInt();
         int b = sc.nextInt();
         int result = (arr[a]) / (arr[b]);
         System.out.println("Result of " + arr[a] + "/" + arr[b] + ": " + result);
      } catch (ArrayIndexOutOfBoundsException ex) {
         System.out.println("Warning: You have chosen a position which is not in the array");
         writeToFile(ex);
      } catch (ArithmeticException ex) {
         System.out.println("Warning: You cannot divide a number with 0");
         writeToFile(ex);
      } catch (InputMismatchException ex) {
         System.out.println("Warning: You have entered invalid input");
         writeToFile(ex);
      }
   }
}

Sample runs:

Array: [10, 20, 30, 2, 0, 8]
Choose numerator and denominator (not 0) from this array (enter positions 0 to 5)
2
4
Warning: You cannot divide a number with 0
Exception logged to your file

Array: [10, 20, 30, 2, 0, 8]
Choose numerator and denominator (not 0) from this array (enter positions 0 to 5)
5
12
Warning: You have chosen a position which is not in the array
Exception logged to your file

Array: [10, 20, 30, 2, 0, 8]
Choose numerator and denominator (not 0) from this array (enter positions 0 to 5)
hello
Warning: You have entered invalid input
Exception logged to your file

Contents of ExceptionLog.txt:

2019-07-19T17:57:09.735: java.lang.ArithmeticException: / by zero
2019-07-19T17:57:39.025: java.lang.ArrayIndexOutOfBoundsException: 12
2019-07-19T18:00:23.374: java.util.InputMismatchException
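For comparison, the same append-style logging is much shorter in Python, where open() in mode "a" appends and creates the file on first use. A minimal sketch, with illustrative names:

from datetime import datetime

def write_to_file(exc):
    # mode "a" appends; the file is created if it does not exist
    with open("ExceptionLog.txt", "a") as log:
        log.write(f"\n{datetime.now()}: {exc!r}")

try:
    10 / 0
except ZeroDivisionError as e:
    write_to_file(e)
    print("Exception logged to your file")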
[ { "code": null, "e": 1265, "s": 1062, "text": "In most scenarios if you try to write contents to a file, using the classes of the java.io package, the file will be overwritten i.e. data existing in the file is erased and the new data is added to it." }, { "code": null, "e": 1426, "s": 1265, "text": "But, in certain scenarios like logging exceptions into a file (without using logger frame works) you need to append data (message) in the next line of the file." }, { "code": null, "e": 1546, "s": 1426, "text": "You can do this using the Files class of the java.nio package. This class provides a method named write() which accepts" }, { "code": null, "e": 1596, "s": 1546, "text": "An object of the class Path, representing a file." }, { "code": null, "e": 1639, "s": 1596, "text": "A byte array holding the data to the file." }, { "code": null, "e": 1911, "s": 1639, "text": "A variable arguments of the type OpenOption (interface) as a value to it you can pass one of the elements of StandardOpenOption enumeration which contains 10 options namely, APPEND, CREATE, CREATE_NEW, DELETE_ON_CLOSE, DSYNC, READ, SPARSE, SYNC, TRUNCATE_EXISTING, WRITE." }, { "code": null, "e": 2060, "s": 1911, "text": "You can invoke this method by passing the path of the file, byte array containing the data to be appended and, the option StandardOpenOption.APPEND." }, { "code": null, "e": 2459, "s": 2060, "text": "Following Java program has an array storing 5 integer values, we are letting the user to choose two elements from the array (indices of the elements) and performing division between them. We are wrapping this code in try block with three catch blocks catching ArithmeticException, InputMismatchException and, ArrayIndexOutOfBoundsException. In each of them we are invoking the writeToFile() method." }, { "code": null, "e": 2570, "s": 2459, "text": "This method accepts an exception object, and appends it to a file using the write() method of the Files class." 
}, { "code": null, "e": 4023, "s": 2570, "text": "public class LoggingToFile {\n private static void writeToFile(Exception e) throws IOException {\n //Retrieving the log file\n Path logFile = Paths.get(\"ExceptionLog.txt\");\n //Preparing the data to be logged\n byte bytes[] = (\"\\r\\n\"+LocalDateTime.now()+\": \"+e.toString()).getBytes();\n //Appending the exception to your file\n Files.write(logFile, bytes, StandardOpenOption.APPEND);\n System.out.println(\"Exception logged to your file\");\n }\n public static void main(String [] args) throws IOException {\n Scanner sc = new Scanner(System.in);\n int[] arr = {10, 20, 30, 2, 0, 8};\n System.out.println(\"Array: \"+Arrays.toString(arr));\n System.out.println(\"Choose numerator and denominator (not 0) from this array (enter positions 0 to 5)\");\n try {\n int a = sc.nextInt();\n int b = sc.nextInt();\n int result = (arr[a])/(arr[b]);\n System.out.println(\"Result of \"+arr[a]+\"/\"+arr[b]+\": \"+result);\n }catch(ArrayIndexOutOfBoundsException ex) {\n System.out.println(\"Warning: You have chosen a position which is not in the array\");\n writeLogToFile(ex);\n }catch(ArithmeticException ex) {\n System.out.println(\"Warning: You cannot divide an number with 0\");\n writeLogToFile(ex);\n }catch(InputMismatchException ex) {\n System.out.println(\"Warning: You have entered invalid input\");\n writeLogToFile(ex);\n }\n }\n}" }, { "code": null, "e": 4246, "s": 4023, "text": "Enter 3 integer values one by one:\nArray: [10, 20, 30, 2, 0, 8]\nChoose numerator and denominator(not 0) from this array (enter positions 0 to 5)\n2\n4\nWarning: You cannot divide an number with 0\nException logged to your file" }, { "code": null, "e": 4488, "s": 4246, "text": "Enter 3 integer values one by one:\nArray: [10, 20, 30, 2, 0, 8]\nChoose numerator and denominator(not 0) from this array (enter positions 0 to 5)\n5\n12\nWarning: You have chosen a position which is not in the array\nException logged to your file" }, { "code": null, "e": 4709, "s": 4488, "text": "Enter 3 integer values one by one:\nArray: [10, 20, 30, 2, 0, 8]\nChoose numerator and denominator(not 0) from this array (enter positions 0 to 5)\nhello\nWarning: You have entered invalid input\nException logged to your file" }, { "code": null, "e": 4903, "s": 4709, "text": "2019-07-19T17:57:09.735: java.lang.ArithmeticException: / by zero\n2019-07-19T17:57:39.025: java.lang.ArrayIndexOutOfBoundsException: 12\n2019-07-19T18:00:23.374: java.util.InputMismatchException" } ]
Minimum Number in a sorted rotated array | Practice | GeeksforGeeks
Given an array of distinct elements which was initially sorted. This array is rotated at some unknown point. The task is to find the minimum element in the given sorted and rotated array.

Example 1:

Input:
N = 10
arr[] = {2,3,4,5,6,7,8,9,10,1}
Output: 1
Explanation: The array is rotated once anti-clockwise. So the minimum element is at the last index (n-1), which is 1.

Example 2:

Input:
N = 5
arr[] = {3,4,5,1,2}
Output: 1
Explanation: The array is rotated and the minimum element present is at index (n-2), which is 1.

Your Task:
The task is to complete the function minNumber() which takes the array arr[] and its starting and ending indices (low and high) as inputs and returns the minimum element in the given sorted and rotated array.

Expected Time Complexity: O(LogN).
Expected Auxiliary Space: O(LogN).

Constraints:
1 <= N <= 10^7
1 <= arr[i] <= 10^7

0 pkalyanramcec19 1 day ago

int min = *min_element(arr, arr + high - low + 1);

0 harshitsinghparmar9024 22 days ago

def minNumber(self, arr, low, high):
    return min(arr)

0 vikasrajpoot479 2 days ago

// using binary search
int minNumber(int arr[], int low, int high) {
    int ans = -1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[0] <= arr[mid]) {
            low = mid + 1;
        } else {
            ans = arr[mid];
            high = mid - 1;
        }
    }
    if (ans == -1) {
        return arr[0];
    }
    return ans;
}

0 swapnilsrkr 6 days ago

// Time complexity -> O(LogN)
// Space complexity -> O(1)
int minNumber(int arr[], int low, int high) {
    if (high + 1 == 1)
        return arr[0];
    if (high + 1 == 2)
        return arr[0] < arr[1] ? arr[0] : arr[1];
    int mid = low + (high - low) / 2, minElement = -1;
    while (low <= high) {
        if (arr[0] > arr[mid]) {
            minElement = arr[mid];
            high = mid - 1;
        } else
            low = mid + 1;
        mid = low + (high - low) / 2;
    }
    // Below condition is true only when all elements are in ascending order
    if (minElement == -1)
        return arr[0];
    return minElement;
}

0 tantade2002 1 week ago

// using two pointer approach
static int minNumber(int arr[], int low, int high) {
    int left = 1;
    int right = arr.length - 2;
    while (left <= right) {
        if (arr[left] < arr[left-1] && arr[left] < arr[left+1]) {
            return arr[left];
        } else if (arr[right] < arr[right-1] && arr[right] < arr[right+1]) {
            return arr[right];
        } else {
            left++;
            right--;
        }
    }
    if (arr[0] > arr[arr.length-1]) {
        return arr[arr.length-1];
    }
    return arr[0];
}

0 jituverma7049 1 week ago

// simple solution to above problem
// Function to find the minimum element in sorted and rotated array.
int minNumber(int arr[], int low, int high) {
    sort(arr, arr + high + 1);
    return arr[0];
}

+2 dhruv0809 2 weeks ago

int minNumber(int arr[], int low, int high) {
    // Time complexity O(logn)
    // Space complexity O(1)
    int res = low;
    while (high >= low) {
        int mid = (high + low) / 2;
        if (arr[mid] > arr[res])
            low = mid + 1;
        else {
            res = mid;
            high = mid - 1;
        }
    }
    return arr[res];
}

0 dhruv0809

This comment was deleted.
+1 anant4136 3 weeks ago

int minNumber(int arr[], int low, int high) {
    while (low < high) {
        int m = (low + high) / 2;
        if (arr[m] < arr[m+1] && arr[m] < arr[high]) high = m;
        else if (arr[m] < arr[m+1]) { return (min(minNumber(arr, low, m), minNumber(arr, m+1, high))); }
        else if (arr[m] > arr[m+1]) low = m + 1;
    }
    return arr[low];
}

0 shubham21101997 3 weeks ago

static int minNumber(int arr[], int low, int high) {
    low = 0;
    high = arr.length - 1;
    if (arr[low] < arr[high])
        return arr[low]; // not rotated
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] < arr[mid-1]) return arr[mid];
        if (arr[mid] > arr[mid+1]) return arr[mid+1];
        else if (arr[mid] > arr[low])
            low = mid + 1;
        else if (arr[mid] < arr[high])
            high = mid - 1;
    }
    return -1;
}
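For reference, here is a minimal iterative Python sketch of the intended O(log N) approach; it binary-searches for the point where the sorted order breaks:

def min_number(arr, low, high):
    # invariant: the minimum lies within arr[low..high]
    while low < high:
        mid = (low + high) // 2
        if arr[mid] > arr[high]:
            low = mid + 1   # minimum is to the right of mid
        else:
            high = mid      # arr[mid] itself may be the minimum
    return arr[low]

print(min_number([2, 3, 4, 5, 6, 7, 8, 9, 10, 1], 0, 9))  # 1
print(min_number([3, 4, 5, 1, 2], 0, 4))                  # 1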
[ { "code": null, "e": 427, "s": 238, "text": "Given an array of distinct elements which was initially sorted. This array is rotated at some unknown point. The task is to find the minimum element in the given sorted and rotated array. " }, { "code": null, "e": 438, "s": 427, "text": "Example 1:" }, { "code": null, "e": 604, "s": 438, "text": "Input:\nN = 10\narr[] = {2,3,4,5,6,7,8,9,10,1}\nOutput: 1\nExplanation: The array is rotated \nonce anti-clockwise. So minium \nelement is at last index (n-1) \nwhich is 1." }, { "code": null, "e": 615, "s": 604, "text": "Example 2:" }, { "code": null, "e": 756, "s": 615, "text": "Input:\nN = 5\narr[] = {3,4,5,1,2}\nOutput: 1\nExplanation: The array is rotated \nand the minimum element present is\nat index (n-2) which is 1.\n" }, { "code": null, "e": 976, "s": 756, "text": "Your Task:\nThe task is to complete the function minNumber() which takes the array arr[] and its starting and ending indices (low and high) as inputs and returns the minimum element in the given sorted and rotated array." }, { "code": null, "e": 1046, "s": 976, "text": "Expected Time Complexity: O(LogN).\nExpected Auxiliary Space: O(LogN)." }, { "code": null, "e": 1092, "s": 1046, "text": "Constraints:\n1 <= N <= 107\n1 <= arr[i] <= 107" }, { "code": null, "e": 1094, "s": 1092, "text": "0" }, { "code": null, "e": 1119, "s": 1094, "text": "pkalyanramcec191 day ago" }, { "code": null, "e": 1163, "s": 1119, "text": "int min = *min_element(arr,arr+high-low+1);" }, { "code": null, "e": 1165, "s": 1163, "text": "0" }, { "code": null, "e": 1199, "s": 1165, "text": "harshitsinghparmar902422 days ago" }, { "code": null, "e": 1256, "s": 1199, "text": "def minNumber(self, arr,low,high): return min(arr)" }, { "code": null, "e": 1260, "s": 1258, "text": "0" }, { "code": null, "e": 1286, "s": 1260, "text": "vikasrajpoot4792 days ago" }, { "code": null, "e": 1308, "s": 1286, "text": "//using binary search" }, { "code": null, "e": 1606, "s": 1308, "text": "int minNumber(int arr[], int low, int high){ int ans=-1; while(low<=high) { int mid=low+(high-low)/2; if(arr[0]<=arr[mid]) { low=mid+1; } else{ ans=arr[mid]; high=mid-1; } } if(ans==-1) { return arr[0]; } return ans; }" }, { "code": null, "e": 1608, "s": 1606, "text": "0" }, { "code": null, "e": 1630, "s": 1608, "text": "swapnilsrkr6 days ago" }, { "code": null, "e": 2401, "s": 1630, "text": "//Time complexity -> O(LogN)\n//Space complexity -> O(1)\n\nint minNumber(int arr[], int low, int high)\n {\n if(high + 1 == 1)\n return arr[0];\n \n if(high + 1 == 2)\n return arr[0] < arr[1] ? 
arr[0] : arr[1];\n \n int mid = low + (high - low)/2, minElement = -1;\n \n while(low <= high){\n if(arr[0] > arr[mid]){\n minElement = arr[mid];\n high = mid - 1;\n }\n else\n low = mid + 1;\n mid = low + (high - low)/2;\n }\n \n //Below if condition will be true, only when all elements are in ascending order\n if(minElement == -1)\n return arr[0];\n \n return minElement; \n }" }, { "code": null, "e": 2403, "s": 2401, "text": "0" }, { "code": null, "e": 2425, "s": 2403, "text": "tantade20021 week ago" }, { "code": null, "e": 2454, "s": 2425, "text": "//using two pointer approach" }, { "code": null, "e": 3103, "s": 2456, "text": " static int minNumber(int arr[], int low, int high) { // Your code here int left =1; int right=arr.length-2; while(left<=right){ if(arr[left]<arr[left-1]&&arr[left]<arr[left+1]){ return arr[left]; } else if(arr[right]<arr[right-1]&& arr[right]<arr[right+1]){ return arr[right]; } else{ left++; right--; } } if(arr[0]>arr[arr.length-1]){ return arr[arr.length-1]; } return arr[0]; }" }, { "code": null, "e": 3105, "s": 3103, "text": "0" }, { "code": null, "e": 3129, "s": 3105, "text": "jituverma70491 week ago" }, { "code": null, "e": 3166, "s": 3129, "text": " //simple solution to above problem" }, { "code": null, "e": 3342, "s": 3166, "text": "//Function to find the minimum element in sorted and rotated array. int minnumber(int arr[], int low, int high) { sort(arr,arr+high+1); return arr[0]; }" }, { "code": null, "e": 3345, "s": 3342, "text": "+2" }, { "code": null, "e": 3366, "s": 3345, "text": "dhruv08092 weeks ago" }, { "code": null, "e": 3414, "s": 3366, "text": "int minNumber(int arr[], int low, int high) {" }, { "code": null, "e": 3448, "s": 3414, "text": " // Time complexity O(logn)" }, { "code": null, "e": 3745, "s": 3448, "text": " // Space complexity O(1) // Your code here int res = low; while(high>=low){ int mid = (high+low)/2; if(arr[mid]>arr[res]) low = mid+1; else { res = mid; high = mid-1; } } return arr[res]; }" }, { "code": null, "e": 3747, "s": 3745, "text": "0" }, { "code": null, "e": 3757, "s": 3747, "text": "dhruv0809" }, { "code": null, "e": 3783, "s": 3757, "text": "This comment was deleted." }, { "code": null, "e": 3786, "s": 3783, "text": "+1" }, { "code": null, "e": 3807, "s": 3786, "text": "anant41363 weeks ago" }, { "code": null, "e": 4165, "s": 3807, "text": "int minNumber(int arr[], int low, int high) { // Your code here while(low<high) { int m=(low+high)/2; if(arr[m]<arr[m+1]&&arr[m]<arr[high])high=m; else if(arr[m]<arr[m+1]){return(min(minNumber(arr,low,m),minNumber(arr,m+1,high)));} else if(arr[m]>arr[m+1])low=m+1; } return arr[low]; }" }, { "code": null, "e": 4167, "s": 4165, "text": "0" }, { "code": null, "e": 4194, "s": 4167, "text": "shubham211019973 weeks ago" }, { "code": null, "e": 4656, "s": 4194, "text": "static int minNumber(int arr[], int low, int high) { low=0; high=arr.length-1; if(arr[low]<arr[high]) return arr[low];//not rotated while(low<=high){ int mid=(low+high)/2; if(arr[mid]<arr[mid-1])return arr[mid]; if(arr[mid]>arr[mid+1])return arr[mid+1]; else if(arr[mid]>arr[low]) low=mid+1; else if(arr[mid]<arr[high]) high=mid-1; } return -1; }" }, { "code": null, "e": 4802, "s": 4656, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 4838, "s": 4802, "text": " Login to access your submissions. 
" }, { "code": null, "e": 4848, "s": 4838, "text": "\nProblem\n" }, { "code": null, "e": 4858, "s": 4848, "text": "\nContest\n" }, { "code": null, "e": 4921, "s": 4858, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 5069, "s": 4921, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 5277, "s": 5069, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 5383, "s": 5277, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
How to call another enum value in an enum's constructor using java?
Enumeration (enum) in Java is a datatype which stores a set of constant values. You can use enumerations to store fixed values such as days of the week, months of the year, etc.

You can define an enumeration using the keyword enum followed by the name of the enumeration as −

enum Days {
   SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY
}

Enumerations are similar to classes, and you can have variables, methods (only concrete methods) and constructors within them.

Suppose we have elements in an enumeration with values as −

enum Scoters {
   ACTIVA125(80000), ACTIVA5G(70000), ACCESS125(75000), VESPA(90000), TVSJUPITER(75000);
}

To define a constructor in it, first of all, declare an instance variable to hold the values of the elements.

private int price;

Then, declare a parameterized constructor initializing the above created instance variable.

Scoters(int price) {
   this.price = price;
}

To initialize one enum with values from another enum:

Declare the desired enum as the instance variable.

Initialize it with a parameterized constructor.

import java.util.Scanner;
enum State {
   Telangana, Delhi, Tamilnadu, Karnataka, Andhrapradesh
}
enum Cities {
   Hyderabad(State.Telangana), Delhi(State.Delhi), Chennai(State.Tamilnadu), Banglore(State.Karnataka), Vishakhapatnam(State.Andhrapradesh);
   //Instance variable
   private State state;
   //Constructor to initialize the instance variable
   Cities(State state) {
      this.state = state;
   }
   //Static method to display the state of the chosen city
   public static void display(int model) {
      Cities constants[] = Cities.values();
      System.out.println("State of: " + constants[model] + " is " + constants[model].state);
   }
}
public class EnumerationExample {
   public static void main(String args[]) {
      Cities constants[] = Cities.values();
      System.out.println("Value of constants: ");
      for (Cities d : constants) {
         System.out.println(d.ordinal() + ": " + d);
      }
      System.out.println("Select one model: ");
      Scanner sc = new Scanner(System.in);
      int model = sc.nextInt();
      //Calling the static method of the enum
      Cities.display(model);
   }
}

Value of constants:
0: Hyderabad
1: Delhi
2: Chennai
3: Banglore
4: Vishakhapatnam
Select one model:
2
State of: Chennai is Tamilnadu
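The same pattern, one enum's members holding another enum's members as their values, can also be sketched with Python's enum module (a minimal illustration, separate from the Java listing above):

from enum import Enum

class State(Enum):
    TELANGANA = "Telangana"
    TAMILNADU = "Tamilnadu"

class City(Enum):
    HYDERABAD = State.TELANGANA
    CHENNAI = State.TAMILNADU

    @property
    def state(self):
        # each City member's value is a State member
        return self.value

print("State of:", City.CHENNAI.name, "is", City.CHENNAI.state.value)
# State of: CHENNAI is Tamilnadu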
[ { "code": null, "e": 1235, "s": 1062, "text": "Enumeration (enum) in Java is a datatype which stores a set of constant values. You can use enumerations to store fixed values such as days in a week, months in a year etc." }, { "code": null, "e": 1333, "s": 1235, "text": "You can define an enumeration using the keyword enum followed by the name of the enumeration as −" }, { "code": null, "e": 1413, "s": 1333, "text": "enum Days {\n SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY\n}" }, { "code": null, "e": 1540, "s": 1413, "text": "Enumerations are similar to classes and, you can have variables, methods (Only concrete methods) and constructors within them." }, { "code": null, "e": 1604, "s": 1540, "text": "For suppose we have elements in an enumeration with values as −" }, { "code": null, "e": 1710, "s": 1604, "text": "enum Scoters {\n ACTIVA125(80000), ACTIVA5G(70000), ACCESS125(75000), VESPA(90000), TVSJUPITER(75000);\n}" }, { "code": null, "e": 1819, "s": 1710, "text": "To define a constructor in it, first of all declare an instance variable to hold the values of the elements." }, { "code": null, "e": 1838, "s": 1819, "text": "private int price;" }, { "code": null, "e": 1926, "s": 1838, "text": "Then, declare a parameterized constructor initializing above created instance variable." }, { "code": null, "e": 1973, "s": 1926, "text": "Scoters (int price) {\n this.price = price;\n}" }, { "code": null, "e": 2025, "s": 1973, "text": "To initialize the enum with values in another enum." }, { "code": null, "e": 2076, "s": 2025, "text": "Declare the desired enum as the instance variable." }, { "code": null, "e": 2124, "s": 2076, "text": "Initialize it with a parameterized constructor." }, { "code": null, "e": 3221, "s": 2124, "text": "import java.util.Scanner;\nenum State{\n Telangana, Delhi, Tamilnadu, Karnataka, Andhrapradesh\n}\nenum Cities {\n Hyderabad(State.Telangana), Delhi(State.Delhi), Chennai(State.Tamilnadu), Banglore(State.Karnataka), Vishakhapatnam(State.Andhrapradesh);\n //Instance variable\n private State state;\n //Constructor to initialize the instance variable\n Cities(State state) {\n this.state = state;\n }\n //Static method to display the country\n public static void display(int model){\n Cities constants[] = Cities.values();\n System.out.println(\"State of: \"+constants[model]+\" is \"+constants[model].state);\n }\n}\npublic class EnumerationExample {\n public static void main(String args[]) {\n Cities constants[] = Cities.values();\n System.out.println(\"Value of constants: \");\n for(Cities d: constants) {\n System.out.println(d.ordinal()+\": \"+d);\n }\n System.out.println(\"Select one model: \");\n Scanner sc = new Scanner(System.in);\n int model = sc.nextInt();\n //Calling the static method of the enum\n Cities.display(model);\n }\n}" }, { "code": null, "e": 3355, "s": 3221, "text": "Value of constants:\n0: Hyderabad\n1: Delhi\n2: Chennai\n3: Banglore\n4: Vishakhapatnam\nSelect one model:\n2\nState of: Chennai is Tamilnadu" } ]