Maximum circular subarray sum - GeeksforGeeks
29 Dec, 2021

Given n numbers (both +ve and -ve), arranged in a circle, find the maximum sum of consecutive numbers.

Examples:

Input: a[] = {8, -8, 9, -9, 10, -11, 12}
Output: 22 (12 + 8 - 8 + 9 - 9 + 10)

Input: a[] = {10, -3, -4, 7, 6, 5, -4, -1}
Output: 23 (7 + 6 + 5 - 4 - 1 + 10)

Input: a[] = {-1, 40, -14, 7, 6, 5, -4, -1}
Output: 52 (7 + 6 + 5 - 4 - 1 - 1 + 40)

Method 1

There can be two cases for the maximum sum:

Case 1: The elements that contribute to the maximum sum are arranged such that no wrapping occurs. Examples: {-10, 2, -1, 5}, {-2, 4, -1, 4, -1}. In this case, Kadane's algorithm produces the result.

Case 2: The elements that contribute to the maximum sum are arranged such that wrapping occurs. Examples: {10, -12, 11}, {12, -5, 4, -8, 11}. In this case, we change wrapping to non-wrapping. Let us see how. Wrapping of the contributing elements implies non-wrapping of the non-contributing elements, so find the sum of the non-contributing elements and subtract it from the total sum. To find the sum of the non-contributing elements, invert the sign of each element and then run Kadane's algorithm: the array is like a ring, and eliminating the maximum continuous negative run in the original array is the same as finding the maximum continuous positive run in the inverted array. Finally, we compare the sums obtained in both cases and return the maximum of the two.

Thanks to ashishdey0 for suggesting this solution. The following are implementations of the above method.
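The two cases can be sketched compactly. The following is a minimal Python sketch of the idea only (not the article's code; the function name `max_circular_sum` is ours), using a Kadane variant initialised from the first element so that all-negative input is handled:

```python
def max_circular_sum(a):
    # Standard Kadane's algorithm (no wrapping), initialised
    # from the first element so it works for all-negative input.
    def kadane(arr):
        best = cur = arr[0]
        for x in arr[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    # Case 1: no wrapping.
    best_straight = kadane(a)
    if best_straight < 0:  # every element is negative: wrapping cannot help
        return best_straight

    # Case 2: wrapping. The non-contributing elements form one contiguous
    # block, so subtract the minimum subarray sum from the total sum; the
    # minimum subarray sum equals -kadane() of the sign-inverted array.
    best_wrap = sum(a) + kadane([-x for x in a])
    return max(best_straight, best_wrap)

print(max_circular_sum([8, -8, 9, -9, 10, -11, 12]))  # 22
```

On the article's first example this returns 22, matching the expected output above.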
C++

// C++ program for maximum contiguous circular sum problem
#include <bits/stdc++.h>
using namespace std;

// Standard Kadane's algorithm to
// find maximum subarray sum
int kadane(int a[], int n);

// The function returns maximum
// circular contiguous sum in a[]
int maxCircularSum(int a[], int n)
{
    // Case 1: get the maximum sum using standard
    // Kadane's algorithm
    int max_kadane = kadane(a, n);

    // if the maximum sum using standard Kadane's is less than 0
    if (max_kadane < 0)
        return max_kadane;

    // Case 2: now find the maximum sum that includes
    // corner elements
    int max_wrap = 0, i;
    for (i = 0; i < n; i++) {
        max_wrap += a[i]; // calculate array-sum
        a[i] = -a[i];     // invert the array (change sign)
    }

    // max sum with corner elements will be:
    // array-sum - (-max subarray sum of inverted array)
    max_wrap = max_wrap + kadane(a, n);

    // The maximum circular sum will be the maximum of the two sums
    return (max_wrap > max_kadane) ? max_wrap : max_kadane;
}

// Standard Kadane's algorithm to find maximum subarray sum
// See https://www.geeksforgeeks.org/archives/576 for details
int kadane(int a[], int n)
{
    int max_so_far = 0, max_ending_here = 0;
    int i;
    for (i = 0; i < n; i++) {
        max_ending_here = max_ending_here + a[i];
        if (max_so_far < max_ending_here)
            max_so_far = max_ending_here;
        if (max_ending_here < 0)
            max_ending_here = 0;
    }
    return max_so_far;
}

/* Driver program to test maxCircularSum() */
int main()
{
    int a[] = { 11, 10, -20, 5, -3, -5, 8, -13, 10 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Maximum circular sum is "
         << maxCircularSum(a, n) << endl;
    return 0;
}
// This code is contributed by rathbhupendra

C

// C program for maximum contiguous circular sum problem
#include <stdio.h>

// Standard Kadane's algorithm to find maximum subarray sum
int kadane(int a[], int n);

// The function returns maximum circular contiguous sum in a[]
int maxCircularSum(int a[], int n)
{
    // Case 1: get the maximum sum using standard
    // Kadane's algorithm
    int max_kadane = kadane(a, n);

    // Case 2: now find the maximum sum that includes
    // corner elements
    int max_wrap = 0, i;
    for (i = 0; i < n; i++) {
        max_wrap += a[i]; // calculate array-sum
        a[i] = -a[i];     // invert the array (change sign)
    }

    // max sum with corner elements will be:
    // array-sum - (-max subarray sum of inverted array)
    max_wrap = max_wrap + kadane(a, n);

    // The maximum circular sum will be the maximum of the two sums
    return (max_wrap > max_kadane) ? max_wrap : max_kadane;
}

// Standard Kadane's algorithm to find maximum subarray sum
// See https://www.geeksforgeeks.org/archives/576 for details
int kadane(int a[], int n)
{
    int max_so_far = 0, max_ending_here = 0;
    int i;
    for (i = 0; i < n; i++) {
        max_ending_here = max_ending_here + a[i];
        if (max_ending_here < 0)
            max_ending_here = 0;
        if (max_so_far < max_ending_here)
            max_so_far = max_ending_here;
    }
    return max_so_far;
}

/* Driver program to test maxCircularSum() */
int main()
{
    int a[] = { 11, 10, -20, 5, -3, -5, 8, -13, 10 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Maximum circular sum is %d\n", maxCircularSum(a, n));
    return 0;
}

Java

// Java program for maximum contiguous circular sum problem
import java.io.*;
import java.util.*;

class Solution {
    public static int kadane(int a[], int n)
    {
        int res = 0;
        int x = a[0];
        for (int i = 0; i < n; i++) {
            res = Math.max(a[i], res + a[i]);
            x = Math.max(x, res);
        }
        return x;
    }

    // A function for calculating the max sum in a circular
    // manner, as discussed above
    public static int reverseKadane(int a[], int n)
    {
        int total = 0;
        // taking the total sum of the array elements
        for (int i = 0; i < n; i++) {
            total += a[i];
        }
        // inverting the array
        for (int i = 0; i < n; i++) {
            a[i] = -a[i];
        }
        // finding the min sum subarray
        int k = kadane(a, n);
        // max circular sum
        int ress = total + k;
        // to handle the case in which all elements are negative
        if (total == -k) {
            return total;
        }
        else {
            return ress;
        }
    }

    public static void main(String[] args)
    {
        int a[] = { 1, 4, 6, 4, -3, 8, -1 };
        int n = 7;
        if (n == 1) {
            System.out.println("Maximum circular sum is " + a[0]);
        }
        else {
            System.out.println(
                "Maximum circular sum is "
                + Integer.max(kadane(a, n), reverseKadane(a, n)));
        }
    }
}
/* This code is contributed by Mohit Kumar */

Python

# Python program for maximum contiguous circular sum problem

# Standard Kadane's algorithm to find maximum subarray sum
def kadane(a):
    Max = a[0]
    temp = Max
    for i in range(1, len(a)):
        temp += a[i]
        if temp < a[i]:
            temp = a[i]
        Max = max(Max, temp)
    return Max

# The function returns maximum circular contiguous sum in a[]
def maxCircularSum(a):
    n = len(a)

    # Case 1: get the maximum sum using standard
    # Kadane's algorithm
    max_kadane = kadane(a)

    # Case 2: now find the maximum sum that includes corner
    # elements. You can do so by finding the maximum negative
    # contiguous sum: convert a to -ve 'a' and run Kadane's algorithm
    neg_a = [-1 * x for x in a]
    max_neg_kadane = kadane(neg_a)

    # Max sum with corner elements will be:
    # array-sum - (-max subarray sum of inverted array)
    max_wrap = -(sum(neg_a) - max_neg_kadane)

    # The maximum circular sum will be the maximum of the two sums
    res = max(max_wrap, max_kadane)
    return res if res != 0 else max_kadane

# Driver code to test the above function
a = [11, 10, -20, 5, -3, -5, 8, -13, 10]
print("Maximum circular sum is", maxCircularSum(a))

# This code is contributed by Devesh Agrawal

C#

// C# program for maximum contiguous circular sum problem
using System;

class MaxCircularSum {

    // The function returns maximum circular contiguous sum in a[]
    static int maxCircularSum(int[] a)
    {
        int n = a.Length;

        // Case 1: get the maximum sum using standard
        // Kadane's algorithm
        int max_kadane = kadane(a);

        // Case 2: now find the maximum sum that includes
        // corner elements
        int max_wrap = 0;
        for (int i = 0; i < n; i++) {
            max_wrap += a[i]; // calculate array-sum
            a[i] = -a[i];     // invert the array (change sign)
        }

        // max sum with corner elements will be:
        // array-sum - (-max subarray sum of inverted array)
        max_wrap = max_wrap + kadane(a);

        // The maximum circular sum will be the maximum of the two sums
        return (max_wrap > max_kadane) ? max_wrap : max_kadane;
    }

    // Standard Kadane's algorithm to find maximum subarray sum
    // See https://www.geeksforgeeks.org/archives/576 for details
    static int kadane(int[] a)
    {
        int n = a.Length;
        int max_so_far = 0, max_ending_here = 0;
        for (int i = 0; i < n; i++) {
            max_ending_here = max_ending_here + a[i];
            if (max_ending_here < 0)
                max_ending_here = 0;
            if (max_so_far < max_ending_here)
                max_so_far = max_ending_here;
        }
        return max_so_far;
    }

    // Driver code
    public static void Main()
    {
        int[] a = { 11, 10, -20, 5, -3, -5, 8, -13, 10 };
        Console.Write("Maximum circular sum is " + maxCircularSum(a));
    }
}
/* This code is contributed by vt_m */

PHP

<?php
// PHP program for maximum contiguous circular sum problem

// The function returns maximum circular contiguous sum in $a[]
function maxCircularSum($a, $n)
{
    // Case 1: get the maximum sum using standard
    // Kadane's algorithm
    $max_kadane = kadane($a, $n);

    // Case 2: now find the maximum sum that includes
    // corner elements
    $max_wrap = 0;
    for ($i = 0; $i < $n; $i++) {
        $max_wrap += $a[$i]; // calculate array-sum
        $a[$i] = -$a[$i];    // invert the array (change sign)
    }

    // max sum with corner elements will be:
    // array-sum - (-max subarray sum of inverted array)
    $max_wrap = $max_wrap + kadane($a, $n);

    // The maximum circular sum will be the maximum of the two sums
    return ($max_wrap > $max_kadane) ? $max_wrap : $max_kadane;
}

// Standard Kadane's algorithm to find maximum subarray sum
// See https://www.geeksforgeeks.org/archives/576 for details
function kadane($a, $n)
{
    $max_so_far = 0;
    $max_ending_here = 0;
    for ($i = 0; $i < $n; $i++) {
        $max_ending_here = $max_ending_here + $a[$i];
        if ($max_ending_here < 0)
            $max_ending_here = 0;
        if ($max_so_far < $max_ending_here)
            $max_so_far = $max_ending_here;
    }
    return $max_so_far;
}

/* Driver code */
$a = array(11, 10, -20, 5, -3, -5, 8, -13, 10);
$n = count($a);
echo "Maximum circular sum is " . maxCircularSum($a, $n);

// This code is contributed by rathbhupendra
?>

Javascript

<script>
// JavaScript program for maximum contiguous circular sum problem

function kadane(a, n)
{
    var res = 0;
    var x = a[0];
    for (var i = 0; i < n; i++) {
        res = Math.max(a[i], res + a[i]);
        x = Math.max(x, res);
    }
    return x;
}

// A function for calculating the max sum in a circular
// manner, as discussed above
function reverseKadane(a, n)
{
    var total = 0;
    // taking the total sum of the array elements
    for (var i = 0; i < n; i++) {
        total += a[i];
    }
    // inverting the array
    for (var i = 0; i < n; i++) {
        a[i] = -a[i];
    }
    // finding the min sum subarray
    var k = kadane(a, n);
    // max circular sum
    var ress = total + k;
    // to handle the case in which all elements are negative
    if (total == -k) {
        return total;
    }
    else {
        return ress;
    }
}

var a = [11, 10, -20, 5, -3, -5, 8, -13, 10];
var n = 9;
if (n == 1) {
    document.write("Maximum circular sum is " + a[0]);
}
else {
    document.write("Maximum circular sum is "
        + Math.max(kadane(a, n), reverseKadane(a, n)));
}
// This code is contributed by todaysgaurav
</script>

Output:

Maximum circular sum is 31

Complexity Analysis:

Time Complexity: O(n), where n is the number of elements in the input array, as only a linear traversal of the array is needed.
Auxiliary Space: O(1), as no extra space is required.

Note that the above algorithm doesn't work if all numbers are negative, e.g., {-1, -2, -3}: it returns 0 in this case. This case can be handled by adding a pre-check to see if all the numbers are negative before running the above algorithm.
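As a hedged illustration of that pre-check (our own sketch, not from the article; the function name `max_circular_sum_checked` is ours), one can first test whether every element is negative and, if so, return the single largest element; otherwise the 0-initialised Kadane variant is safe to use:

```python
def max_circular_sum_checked(a):
    # Pre-check: the 0-initialised Kadane below returns 0 for an
    # all-negative array, so handle that case separately by returning
    # the single largest (least negative) element.
    if max(a) < 0:
        return max(a)

    def kadane(arr):  # 0-initialised variant, as in the implementations above
        best = cur = 0
        for x in arr:
            cur = max(0, cur + x)
            best = max(best, cur)
        return best

    # Wrapping case: total sum minus the minimum subarray sum, where the
    # minimum subarray sum is -kadane() of the sign-inverted array.
    best_wrap = sum(a) + kadane([-x for x in a])
    return max(kadane(a), best_wrap)
```

For example, `max_circular_sum_checked([-1, -2, -3])` returns -1 rather than 0, while non-negative inputs behave exactly as in the algorithm above.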
Method 2

Approach: In this method, modify Kadane's algorithm to find both the minimum contiguous subarray sum and the maximum contiguous subarray sum, then take the maximum of max_so_far and the value left after subtracting min_so_far from the total sum.

Algorithm:

1. Calculate the total sum of the given array.
2. Declare the variables curr_max, max_so_far, curr_min and min_so_far, each initialized to the first value of the array.
3. Use Kadane's algorithm to find the maximum subarray sum and the minimum subarray sum.
4. If min_so_far equals sum, i.e. all values are negative, return max_so_far.
5. Otherwise, return the maximum of max_so_far and (sum - min_so_far).

The implementation of the above method is given below.

C++

// C++ program for maximum contiguous circular sum problem
#include <bits/stdc++.h>
using namespace std;

// The function returns maximum
// circular contiguous sum in a[]
int maxCircularSum(int a[], int n)
{
    // Corner case
    if (n == 1)
        return a[0];

    // Initialize sum variable which stores the total sum of the array
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += a[i];
    }

    // Initialize every variable with the first value of the array
    int curr_max = a[0], max_so_far = a[0],
        curr_min = a[0], min_so_far = a[0];

    // Concept of Kadane's algorithm
    for (int i = 1; i < n; i++) {
        // Kadane's algorithm to find the maximum subarray sum
        curr_max = max(curr_max + a[i], a[i]);
        max_so_far = max(max_so_far, curr_max);

        // Kadane's algorithm to find the minimum subarray sum
        curr_min = min(curr_min + a[i], a[i]);
        min_so_far = min(min_so_far, curr_min);
    }

    if (min_so_far == sum)
        return max_so_far;

    // returning the maximum value
    return max(max_so_far, sum - min_so_far);
}

/* Driver program to test maxCircularSum() */
int main()
{
    int a[] = { 11, 10, -20, 5, -3, -5, 8, -13, 10 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Maximum circular sum is "
         << maxCircularSum(a, n) << endl;
    return 0;
}

Java

// Java program for maximum contiguous circular sum problem
import java.io.*;

class GFG {
    public static int maxCircularSum(int a[], int n)
    {
        // Corner case
        if (n == 1)
            return a[0];

        // Initialize sum variable which stores the total sum of the array
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += a[i];
        }

        // Initialize every variable with the first value of the array
        int curr_max = a[0], max_so_far = a[0],
            curr_min = a[0], min_so_far = a[0];

        // Concept of Kadane's algorithm
        for (int i = 1; i < n; i++) {
            // Kadane's algorithm to find the maximum subarray sum
            curr_max = Math.max(curr_max + a[i], a[i]);
            max_so_far = Math.max(max_so_far, curr_max);

            // Kadane's algorithm to find the minimum subarray sum
            curr_min = Math.min(curr_min + a[i], a[i]);
            min_so_far = Math.min(min_so_far, curr_min);
        }

        if (min_so_far == sum) {
            return max_so_far;
        }

        // returning the maximum value
        return Math.max(max_so_far, sum - min_so_far);
    }

    // Driver code
    public static void main(String[] args)
    {
        int a[] = { 11, 10, -20, 5, -3, -5, 8, -13, 10 };
        int n = 9;
        System.out.println("Maximum circular sum is "
                           + maxCircularSum(a, n));
    }
}
// This code is contributed by aditya7409

Python3

# Python program for maximum contiguous circular sum problem

# The function returns maximum
# circular contiguous sum in a[]
def maxCircularSum(a, n):

    # Corner case
    if (n == 1):
        return a[0]

    # Initialize sum variable which
    # stores the total sum of the array
    sum = 0
    for i in range(n):
        sum += a[i]

    # Initialize every variable
    # with the first value of the array
    curr_max = a[0]
    max_so_far = a[0]
    curr_min = a[0]
    min_so_far = a[0]

    # Concept of Kadane's algorithm
    for i in range(1, n):

        # Kadane's algorithm to find the maximum subarray sum
        curr_max = max(curr_max + a[i], a[i])
        max_so_far = max(max_so_far, curr_max)

        # Kadane's algorithm to find the minimum subarray sum
        curr_min = min(curr_min + a[i], a[i])
        min_so_far = min(min_so_far, curr_min)

    if (min_so_far == sum):
        return max_so_far

    # returning the maximum value
    return max(max_so_far, sum - min_so_far)

# Driver code
a = [11, 10, -20, 5, -3, -5, 8, -13, 10]
n = len(a)
print("Maximum circular sum is", maxCircularSum(a, n))

# This code is contributed by subhammahato348

C#

// C# program for maximum contiguous circular sum problem
using System;

class GFG {
    public static int maxCircularSum(int[] a, int n)
    {
        // Corner case
        if (n == 1)
            return a[0];

        // Initialize sum variable which stores the total sum of the array
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += a[i];
        }

        // Initialize every variable with the first value of the array
        int curr_max = a[0], max_so_far = a[0],
            curr_min = a[0], min_so_far = a[0];

        // Concept of Kadane's algorithm
        for (int i = 1; i < n; i++) {
            // Kadane's algorithm to find the maximum subarray sum
            curr_max = Math.Max(curr_max + a[i], a[i]);
            max_so_far = Math.Max(max_so_far, curr_max);

            // Kadane's algorithm to find the minimum subarray sum
            curr_min = Math.Min(curr_min + a[i], a[i]);
            min_so_far = Math.Min(min_so_far, curr_min);
        }

        if (min_so_far == sum) {
            return max_so_far;
        }

        // returning the maximum value
        return Math.Max(max_so_far, sum - min_so_far);
    }

    // Driver code
    public static void Main()
    {
        int[] a = { 11, 10, -20, 5, -3, -5, 8, -13, 10 };
        int n = 9;
        Console.WriteLine("Maximum circular sum is "
                          + maxCircularSum(a, n));
    }
}
// This code is contributed by subhammahato348

Javascript

<script>
// JavaScript program for the above approach

// The function returns maximum
// circular contiguous sum in a[]
function maxCircularSum(a, n)
{
    // Corner case
    if (n == 1)
        return a[0];

    // Initialize sum variable which stores the total sum of the array
    let sum = 0;
    for (let i = 0; i < n; i++) {
        sum += a[i];
    }

    // Initialize every variable with the first value of the array
    let curr_max = a[0], max_so_far = a[0],
        curr_min = a[0], min_so_far = a[0];

    // Concept of Kadane's algorithm
    for (let i = 1; i < n; i++) {
        // Kadane's algorithm to find the maximum subarray sum
        curr_max = Math.max(curr_max + a[i], a[i]);
        max_so_far = Math.max(max_so_far, curr_max);

        // Kadane's algorithm to find the minimum subarray sum
        curr_min = Math.min(curr_min + a[i], a[i]);
        min_so_far = Math.min(min_so_far, curr_min);
    }

    if (min_so_far == sum)
        return max_so_far;

    // returning the maximum value
    return Math.max(max_so_far, sum - min_so_far);
}

// Driver program to test maxCircularSum()
let a = [11, 10, -20, 5, -3, -5, 8, -13, 10];
let n = a.length;
document.write("Maximum circular sum is " + maxCircularSum(a, n));

// This code is contributed by Potta Lokesh
</script>

Output:

Maximum circular sum is 31

Complexity Analysis:

Time Complexity: O(n), where n is the number of elements in the input array, as only a linear traversal of the array is needed.
Auxiliary Space: O(1), as no extra space is required.
PyQt5 - Indicator border of Check Box - GeeksforGeeks
22 Apr, 2020

In this article we will see how to set the border of the indicator of a check box. By default the indicator has its own border, but we can edit it: we can change the size and color of the border. In order to do so, we have to change the style sheet code with the help of the setStyleSheet method; below is the style sheet code.

QCheckBox::indicator
{
    border : 3px solid red;
}

Below is the implementation.

# importing libraries
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import sys

class Window(QMainWindow):

    def __init__(self):
        super().__init__()

        # setting title
        self.setWindowTitle("Python ")

        # setting geometry
        self.setGeometry(100, 100, 600, 400)

        # calling method
        self.UiComponents()

        # showing all the widgets
        self.show()

    # method for widgets
    def UiComponents(self):

        # creating the check-box
        checkbox = QCheckBox('Geek ?', self)

        # setting geometry of check box
        checkbox.setGeometry(200, 150, 100, 30)

        # changing border of indicator in checkbox
        checkbox.setStyleSheet("QCheckBox::indicator"
                               "{"
                               "border : 3px solid red;"
                               "}")

# create pyqt5 app
App = QApplication(sys.argv)

# create the instance of our Window
window = Window()

# start the app
sys.exit(App.exec())

Output :
KeyStore getProvider() method in Java with Examples - GeeksforGeeks
09 Jun, 2020

The getProvider() method of the java.security.KeyStore class is used to get the Provider associated with this KeyStore instance.

Syntax:

public final Provider getProvider()

Parameter: This method accepts nothing as a parameter.

Return Value: This method returns the Provider associated with this KeyStore.

Note: The programs in this article won't run on an online IDE, as no 'privatekey' keystore exists there. You can check this code with a Java compiler on your system. To check this code, create a keystore 'privatekey' on your system and set your own keystore password to access that keystore.

Below are the examples to illustrate the getProvider() method:

Example 1:

// Java program to demonstrate getProvider() method
import java.security.*;
import java.security.cert.*;
import java.util.*;
import java.io.*;

public class GFG {
    public static void main(String[] argv)
    {
        try {
            // creating the object of KeyStore
            // and getting an instance
            // by using getInstance() method
            KeyStore sr = KeyStore.getInstance("JKS");

            // keystore password is required to access keystore
            char[] pass = ("123456").toCharArray();

            // creating and initializing object of InputStream
            InputStream is = new FileInputStream(
                "f:/java/private key.store");

            // initializing keystore object
            sr.load(is, pass);

            // getting the provider
            // using getProvider() method
            Provider provider = sr.getProvider();

            // display the result
            System.out.println("Provider : " + provider);
        }
        catch (NoSuchAlgorithmException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (NullPointerException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (KeyStoreException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (FileNotFoundException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (IOException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (CertificateException e) {
            System.out.println("Exception thrown : " + e);
        }
    }
}

Example 2: without loading the keystore

// Java program to demonstrate getProvider() method
import java.security.*;
import java.security.cert.*;
import java.util.*;
import java.io.*;

public class GFG {
    public static void main(String[] argv)
    {
        try {
            // creating the object of KeyStore
            // and getting an instance
            // by using getInstance() method
            KeyStore sr = KeyStore.getInstance("JKS");

            // keystore password is required to access keystore
            char[] pass = ("123456").toCharArray();

            // creating and initializing object of InputStream
            InputStream is = new FileInputStream(
                "f:/java/private key.store");

            // getting the provider
            // using getProvider() method
            Provider provider = sr.getProvider();

            // display the result
            System.out.println("Provider : " + provider);
        }
        catch (NullPointerException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (KeyStoreException e) {
            System.out.println("Exception thrown : " + e);
        }
        catch (FileNotFoundException e) {
            System.out.println("Exception thrown : " + e);
        }
    }
}

Reference: https://docs.oracle.com/javase/9/docs/api/java/security/KeyStore.html#getProvider–
iotop Command in Linux with Examples - GeeksforGeeks
26 May, 2020

iotop (Input/Output top) is a command in Linux which is used to display and monitor disk IO usage details, and even gets a table of existing IO utilization by process. It is written in Python and needs kernel modules for its execution, so it requires a Python interpreter to run. It is used by system administrators to trace the specific process that may be causing high disk I/O reads/writes. It produces output similar to that of the top command and generally requires root privileges for its execution.

CentOS/RHEL: sudo yum install iotop

Ubuntu: sudo apt install iotop

1. To get the list of processes and their current disk IO usage:

sudo iotop

This command will display the list of processes and their current disk usage, and will keep updating it.

2. To show processes that are actually doing IO:

sudo iotop -o

This will display all the processes which are currently and actually doing IO.

3. To get the version of iotop:

sudo iotop --version

This will display the currently installed version of the iotop tool.

4. To display the help section:

sudo iotop -h

This command will display the help section of the iotop tool.

5. To display output in non-interactive mode:

sudo iotop -b

This will display the output in non-interactive, batch mode.

6. To change the number of iterations or updates:

sudo iotop -n 3

This command will update the output 3 times, instead of the default, which is infinite.

7. To display a specific process:

sudo iotop -p 10989

This will display the IO usage of the process with the mentioned PID, instead of all processes.

8. To show accumulated output:

sudo iotop -a

This will display the accumulated IO instead of bandwidth.

9. To add a timestamp to each line:

sudo iotop -t

This will add a timestamp to each line of the output.

10. To suppress some lines of the header:

sudo iotop -q

This will suppress some lines of the header in the output.
C Program to Detect Cycle in a Directed Graph - GeeksforGeeks
02 Jan, 2019

Given a directed graph, check whether the graph contains a cycle or not. Your function should return true if the given graph contains at least one cycle, else return false. For example, the following graph contains three cycles 0->2->0, 0->1->2->0 and 3->3, so your function must return true.

C++

// A C++ program to detect a cycle in a graph
#include <iostream>
#include <limits.h>
#include <list>

using namespace std;

class Graph {
    int V; // No. of vertices
    list<int>* adj; // Pointer to an array containing adjacency lists
    bool isCyclicUtil(int v, bool visited[], bool* rs); // used by isCyclic()

public:
    Graph(int V); // Constructor
    void addEdge(int v, int w); // to add an edge to the graph
    bool isCyclic(); // returns true if there is a cycle in this graph
};

Graph::Graph(int V)
{
    this->V = V;
    adj = new list<int>[V];
}

void Graph::addEdge(int v, int w)
{
    adj[v].push_back(w); // Add w to v's list.
}

// This function is a variation of DFSUtil()
// in https://www.geeksforgeeks.org/archives/18212
bool Graph::isCyclicUtil(int v, bool visited[], bool* recStack)
{
    if (visited[v] == false) {
        // Mark the current node as visited and part of the recursion stack
        visited[v] = true;
        recStack[v] = true;

        // Recur for all the vertices adjacent to this vertex
        list<int>::iterator i;
        for (i = adj[v].begin(); i != adj[v].end(); ++i) {
            if (!visited[*i] && isCyclicUtil(*i, visited, recStack))
                return true;
            else if (recStack[*i])
                return true;
        }
    }
    recStack[v] = false; // remove the vertex from the recursion stack
    return false;
}

// Returns true if the graph contains a cycle, else false.
// This function is a variation of DFS()
// in https://www.geeksforgeeks.org/archives/18212
bool Graph::isCyclic()
{
    // Mark all the vertices as not visited and not part
    // of the recursion stack
    bool* visited = new bool[V];
    bool* recStack = new bool[V];
    for (int i = 0; i < V; i++) {
        visited[i] = false;
        recStack[i] = false;
    }

    // Call the recursive helper function to detect cycles
    // in different DFS trees
    for (int i = 0; i < V; i++)
        if (isCyclicUtil(i, visited, recStack))
            return true;

    return false;
}

int main()
{
    // Create a graph given in the above diagram
    Graph g(4);
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 2);
    g.addEdge(2, 0);
    g.addEdge(2, 3);
    g.addEdge(3, 3);

    if (g.isCyclic())
        cout << "Graph contains cycle";
    else
        cout << "Graph doesn't contain cycle";
    return 0;
}

Output:

Graph contains cycle

Please refer to the complete article on Detect Cycle in a Directed Graph for more details!
ReactJS Semantic UI Dropdown Module - GeeksforGeeks
28 Jun, 2021

Semantic UI is a modern framework used for developing seamless designs for websites; it gives the user a lightweight experience with its components. It uses predefined CSS and jQuery to integrate with different frameworks. In this article we will see how to use the Dropdown Module in ReactJS Semantic UI. The Dropdown Module allows a user to select a value from a series of options.

Properties:

Selection: We can select a value from a range of options.
Search Selection: We can select a value by searching through the options.
Multiple Selection: We can select multiple values.
Multiple Search Selection: We can select multiple values by searching.
Clearable: We can make a dropdown whose selection can be cleared.
Search Dropdown: We can make a dropdown which can be searched.
Search In-Menu: We can make a dropdown which can be searched through its menu.
Inline: We can make a dropdown which appears inline.
Pointing: We can make a dropdown which appears with a pointing arrow.
Floating: We can make a dropdown which appears floating.
Simple: We can make a simple dropdown.

States:

Loading: It is used for making a loading dropdown.
Error: It is used for making a dropdown with an error.
Active: It is used for making an active dropdown.
Disabled: It is used for making a disabled dropdown.

Syntax:

<Dropdown text='content'/>

Creating React Application And Installing Module:

Step 1: Create a React application using the following command.

npx create-react-app foldername

Step 2: After creating your project folder, i.e. foldername, move to it using the following command.

cd foldername

Step 3: Install Semantic UI in your given directory.

npm install semantic-ui-react semantic-ui-css

Project Structure: It will look like the following.

Step to Run Application: Run the application from the root directory of the project, using the following command.
npm start

Example 1: This is a basic example showing how to use the ReactJS Semantic UI Dropdown Module.

App.js

import React from 'react'
import { Dropdown, Icon } from 'semantic-ui-react'

const styleLink = document.createElement("link");
styleLink.rel = "stylesheet";
styleLink.href = "https://cdn.jsdelivr.net/npm/semantic-ui/dist/semantic.min.css";
document.head.appendChild(styleLink);

const btt = () => (
  <div>
    <div style={{ display: 'block', width: 700, padding: 30 }}>
      <br/>
      <Dropdown text='GeeksforGeeks'>
        <Dropdown.Menu>
          <Dropdown.Item text='ReactJS' icon='react' />
          <Dropdown.Item text='AngularJS' icon='angular'/>
          <Dropdown.Item text='HTML5' icon='html5' />
          <Dropdown.Item text='JavaScript' icon='js' />
          <Dropdown.Item text='NodeJS' icon='node'/>
        </Dropdown.Menu>
      </Dropdown>
    </div>
  </div>
)

export default btt

Output:

Example 2: In this example, we show the disabled state of a dropdown using the ReactJS Semantic UI Dropdown Module.

App.js

import React from 'react'
import { Dropdown, Icon } from 'semantic-ui-react'

const styleLink = document.createElement("link");
styleLink.rel = "stylesheet";
styleLink.href = "https://cdn.jsdelivr.net/npm/semantic-ui/dist/semantic.min.css";
document.head.appendChild(styleLink);

const btt = () => (
  <div>
    <div style={{ display: 'block', width: 700, padding: 30 }}>
      <br/>
      <Dropdown text='GeeksforGeeks' disabled>
        <Dropdown.Menu>
          <Dropdown.Item text='ReactJS' icon='react' />
          <Dropdown.Item text='AngularJS' icon='angular'/>
          <Dropdown.Item text='HTML5' icon='html5' />
          <Dropdown.Item text='JavaScript' icon='js' />
          <Dropdown.Item text='NodeJS' icon='node'/>
        </Dropdown.Menu>
      </Dropdown>
    </div>
  </div>
)

export default btt

Output:

Reference: https://react.semantic-ui.com/modules/dropdown

Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Program for Priority CPU Scheduling | Set 1 - GeeksforGeeks
28 Apr, 2021 Priority scheduling is one of the most common scheduling algorithms in batch systems. Each process is assigned a priority. Process with the highest priority is to be executed first and so on. Processes with the same priority are executed on first come first served basis. Priority can be decided based on memory requirements, time requirements or any other resource requirement.Implementation : 1- First input the processes with their burst time and priority. 2- Sort the processes, burst time and priority according to the priority. 3- Now simply apply FCFS algorithm. Note: A major problem with priority scheduling is indefinite blocking or starvation. A solution to the problem of indefinite blockage of the low-priority process is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period of time. C++ Java Python3 // C++ program for implementation of FCFS// scheduling#include<bits/stdc++.h>using namespace std; struct Process{ int pid; // Process ID int bt; // CPU Burst time required int priority; // Priority of this process}; // Function to sort the Process acc. 
to prioritybool comparison(Process a, Process b){ return (a.priority > b.priority);} // Function to find the waiting time for all// processesvoid findWaitingTime(Process proc[], int n, int wt[]){ // waiting time for first process is 0 wt[0] = 0; // calculating waiting time for (int i = 1; i < n ; i++ ) wt[i] = proc[i-1].bt + wt[i-1] ;} // Function to calculate turn around timevoid findTurnAroundTime( Process proc[], int n, int wt[], int tat[]){ // calculating turnaround time by adding // bt[i] + wt[i] for (int i = 0; i < n ; i++) tat[i] = proc[i].bt + wt[i];} //Function to calculate average timevoid findavgTime(Process proc[], int n){ int wt[n], tat[n], total_wt = 0, total_tat = 0; //Function to find waiting time of all processes findWaitingTime(proc, n, wt); //Function to find turn around time for all processes findTurnAroundTime(proc, n, wt, tat); //Display processes along with all details cout << "\nProcesses "<< " Burst time " << " Waiting time " << " Turn around time\n"; // Calculate total waiting time and total turn // around time for (int i=0; i<n; i++) { total_wt = total_wt + wt[i]; total_tat = total_tat + tat[i]; cout << " " << proc[i].pid << "\t\t" << proc[i].bt << "\t " << wt[i] << "\t\t " << tat[i] <<endl; } cout << "\nAverage waiting time = " << (float)total_wt / (float)n; cout << "\nAverage turn around time = " << (float)total_tat / (float)n;} void priorityScheduling(Process proc[], int n){ // Sort processes by priority sort(proc, proc + n, comparison); cout<< "Order in which processes gets executed \n"; for (int i = 0 ; i < n; i++) cout << proc[i].pid <<" " ; findavgTime(proc, n);} // Driver codeint main(){ Process proc[] = {{1, 10, 2}, {2, 5, 0}, {3, 8, 1}}; int n = sizeof proc / sizeof proc[0]; priorityScheduling(proc, n); return 0;} // Java program for implementation of FCFS// schedulingimport java.util.*; class Process{ int pid; // Process ID int bt; // CPU Burst time required int priority; // Priority of this process Process(int pid, int bt, int 
priority) { this.pid = pid; this.bt = bt; this.priority = priority; } public int prior() { return priority; }} public class GFG{ // Function to find the waiting time for all// processespublic void findWaitingTime(Process proc[], int n, int wt[]){ // waiting time for first process is 0 wt[0] = 0; // calculating waiting time for (int i = 1; i < n ; i++ ) wt[i] = proc[i - 1].bt + wt[i - 1] ;} // Function to calculate turn around timepublic void findTurnAroundTime( Process proc[], int n, int wt[], int tat[]){ // calculating turnaround time by adding // bt[i] + wt[i] for (int i = 0; i < n ; i++) tat[i] = proc[i].bt + wt[i];} // Function to calculate average timepublic void findavgTime(Process proc[], int n){ int wt[] = new int[n], tat[] = new int[n], total_wt = 0, total_tat = 0; // Function to find waiting time of all processes findWaitingTime(proc, n, wt); // Function to find turn around time for all processes findTurnAroundTime(proc, n, wt, tat); // Display processes along with all details System.out.print("\nProcesses Burst time Waiting time Turn around time\n"); // Calculate total waiting time and total turn // around time for (int i = 0; i < n; i++) { total_wt = total_wt + wt[i]; total_tat = total_tat + tat[i]; System.out.print(" " + proc[i].pid + "\t\t" + proc[i].bt + "\t " + wt[i] + "\t\t " + tat[i] + "\n"); } System.out.print("\nAverage waiting time = " +(float)total_wt / (float)n); System.out.print("\nAverage turn around time = "+(float)total_tat / (float)n);} public void priorityScheduling(Process proc[], int n){ // Sort processes by priority Arrays.sort(proc, new Comparator<Process>() { @Override public int compare(Process a, Process b) { return b.prior() - a.prior(); } }); System.out.print("Order in which processes gets executed \n"); for (int i = 0 ; i < n; i++) System.out.print(proc[i].pid + " ") ; findavgTime(proc, n);} // Driver codepublic static void main(String[] args){ GFG ob=new GFG(); int n = 3; Process proc[] = new Process[n]; proc[0] = new 
Process(1, 10, 2); proc[1] = new Process(2, 5, 0); proc[2] = new Process(3, 8, 1); ob.priorityScheduling(proc, n);}} // This code is contributed by rahulpatil07109. # Python3 program for implementation of# Priority Scheduling # Function to find the waiting time # for all processesdef findWaitingTime(processes, n, wt): wt[0] = 0 # calculating waiting time for i in range(1, n): wt[i] = processes[i - 1][1] + wt[i - 1] # Function to calculate turn around timedef findTurnAroundTime(processes, n, wt, tat): # Calculating turnaround time by # adding bt[i] + wt[i] for i in range(n): tat[i] = processes[i][1] + wt[i] # Function to calculate average waiting# and turn-around times.def findavgTime(processes, n): wt = [0] * n tat = [0] * n # Function to find waiting time # of all processes findWaitingTime(processes, n, wt) # Function to find turn around time # for all processes findTurnAroundTime(processes, n, wt, tat) # Display processes along with all details print("\nProcesses Burst Time Waiting", "Time Turn-Around Time") total_wt = 0 total_tat = 0 for i in range(n): total_wt = total_wt + wt[i] total_tat = total_tat + tat[i] print(" ", processes[i][0], "\t\t", processes[i][1], "\t\t", wt[i], "\t\t", tat[i]) print("\nAverage waiting time = %.5f "%(total_wt /n)) print("Average turn around time = ", total_tat / n) def priorityScheduling(proc, n): # Sort processes by priority proc = sorted(proc, key = lambda proc:proc[2], reverse = True); print("Order in which processes gets executed") for i in proc: print(i[0], end = " ") findavgTime(proc, n) # Driver codeif __name__ =="__main__": # Process id's proc = [[1, 10, 1], [2, 5, 0], [3, 8, 1]] n = 3 priorityScheduling(proc, n) # This code is contributed# Shubham Singh(SHUBHAMSINGH10) Output: Order in which processes gets executed 1 3 2 Processes Burst time Waiting time Turn around time 1 10 0 10 3 8 10 18 2 5 18 23 Average waiting time = 9.33333 Average turn around time = 17 In this post, the processes with arrival time 0 are discussed. 
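The note above mentions aging as the standard fix for starvation under priority scheduling. A minimal sketch of the idea in Python (a toy helper of my own, not part of the article's programs): on every scheduling tick, each still-waiting process gets its priority bumped, so a low-priority job eventually becomes the highest-priority one and gets the CPU.

```python
# Toy illustration of "aging": each tick, every process still waiting
# gets its priority increased, preventing indefinite starvation.
def age_priorities(waiting, boost=1):
    """waiting: list of [pid, burst_time, priority]; returns an aged copy."""
    return [[pid, bt, prio + boost] for pid, bt, prio in waiting]

waiting = [[1, 10, 2], [2, 5, 0], [3, 8, 1]]
for _ in range(3):              # three ticks pass without these jobs running
    waiting = age_priorities(waiting)

print(waiting)  # [[1, 10, 5], [2, 5, 3], [3, 8, 4]]
```

After enough ticks, even process 2 (which started at priority 0) overtakes newly arrived medium-priority jobs.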
In the next set, we will consider different arrival times to evaluate waiting times. This article is contributed by Sahil Chhabra (akku). If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
Python | Splitting string to list of characters - GeeksforGeeks
11 May, 2020 Sometimes we need to work with lists, so strings might need to be converted into lists. A string has to be converted into a list of characters for certain tasks to be performed; this is commonly required in machine learning to preprocess data for text classification. Let's discuss certain ways in which this task can be performed.

Method #1: Using list slicing

List slicing can be used for this purpose: we assign the characters of the string to the list via the slice operation.

# Python3 code to demonstrate
# splitting string to list of characters
# using list slicing

# initializing string
test_string = "GeeksforGeeks"

# printing original string
print("The original string is : " + str(test_string))

# using list slicing
# for splitting string to list of characters
res = []
res[:] = test_string

# printing result
print("The resultant list of characters : " + str(res))

Output:
The original string is : GeeksforGeeks
The resultant list of characters : ['G', 'e', 'e', 'k', 's', 'f', 'o', 'r', 'G', 'e', 'e', 'k', 's']

Method #2: Using list()

The most concise and readable way to perform the splitting is to cast the string to a list; the splitting is handled internally. This is the recommended method for this task.

# Python3 code to demonstrate
# splitting string to list of characters
# using list()

# initializing string
test_string = "GeeksforGeeks"

# printing original string
print("The original string is : " + str(test_string))

# using list()
# for splitting string to list of characters
res = list(test_string)

# printing result
print("The resultant list of characters : " + str(res))

Output:
The original string is : GeeksforGeeks
The resultant list of characters : ['G', 'e', 'e', 'k', 's', 'f', 'o', 'r', 'G', 'e', 'e', 'k', 's']

Method #3: Using map() + lambda

This is yet another way to perform this task. It is not recommended, but it can be used in certain situations.
The drawback is that code readability is sacrificed.

# Python3 code to demonstrate
# splitting string to list of characters
# using map() + lambda

# initializing string
test_string = "GeeksforGeeks"

# printing original string
print("The original string is : " + str(test_string))

# using map() + lambda
# for splitting string to list of characters
res = list(map(lambda i: i, test_string))

# printing result
print("The resultant list of characters : " + str(res))

Output:
The original string is : GeeksforGeeks
The resultant list of characters : ['G', 'e', 'e', 'k', 's', 'f', 'o', 'r', 'G', 'e', 'e', 'k', 's']
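A fourth approach, not covered in the article but worth knowing, uses iterable unpacking inside a list display (available since Python 3.5); it is as concise as list() and arguably as readable.

```python
# Python3 code to demonstrate
# splitting string to list of characters
# using iterable unpacking

test_string = "GeeksforGeeks"

# the * operator unpacks the string's characters into a new list
res = [*test_string]

print("The resultant list of characters : " + str(res))
```

This produces the same result as Method #2, since unpacking iterates over the string character by character.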
PHP | filesize( ) Function - GeeksforGeeks
05 May, 2018 The filesize() function in PHP is an inbuilt function which is used to return the size of a specified file. It accepts the filename as a parameter and returns the size of the file in bytes on success, or False on failure. The result of the filesize() function is cached; the clearstatcache() function is used to clear the cache.

Syntax:

filesize($filename)

Parameters: The filesize() function accepts only one parameter, $filename. It specifies the name of the file whose size you want to check.

Return Value: It returns the size of the file in bytes on success, or False on failure.

Errors And Exceptions:

For files which are larger than 2GB, some filesystem functions may return unexpected results, since PHP's integer type is signed and many platforms use 32-bit integers.
The buffer must be cleared if the filesize() function is used multiple times.
The filesize() function emits an E_WARNING in case of a failure.

Examples:

Input : echo filesize("gfg.txt");
Output : 256

Input : $myfile = 'gfg.txt';
        echo $myfile . ': ' . filesize($myfile) . ' bytes';
Output : gfg.txt : 256 bytes

Below programs illustrate the filesize() function.

Program 1:

<?php

// displaying file size using
// the filesize() function
echo filesize("gfg.txt");

?>

Output:
256

Program 2:

<?php

// displaying file size using
// the filesize() function
$myfile = 'gfg.txt';
echo $myfile . ': ' . filesize($myfile) . ' bytes';

?>

Output:
gfg.txt : 256 bytes

Reference: http://php.net/manual/en/function.filesize.php
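For readers coming from other languages, the equivalent check in Python (a sketch for comparison only, not part of the PHP article) is os.path.getsize, which likewise returns the size in bytes. The temporary 256-byte file here stands in for the article's gfg.txt.

```python
import os
import tempfile

# Create a 256-byte file, then read its size back in bytes,
# analogous to PHP's filesize("gfg.txt").
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
    f.write("x" * 256)
    path = f.name

size = os.path.getsize(path)
print(size)  # 256
os.remove(path)
```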
Understanding NLP Word Embeddings — Text Vectorization | by Prabhu | Towards Data Science
Processing natural language text and extracting useful information from a given word or sentence using machine learning and deep learning techniques requires the string/text to be converted into a set of real numbers (a vector) — word embeddings. Word embeddings, or word vectorization, is a methodology in NLP for mapping words or phrases from a vocabulary to corresponding vectors of real numbers, which are used for word prediction and word similarity/semantics. The process of converting words into numbers is called vectorization. Word embeddings help in the following use cases:

Computing similar words
Text classification
Document clustering/grouping
Feature extraction for text classification
Natural language processing

After the words are converted to vectors, we need techniques such as Euclidean distance or cosine similarity to identify similar words. Counting the common words (or Euclidean distance) is the general approach used to match similar documents; it is based on counting the number of common words between the documents. This approach fails when the number of common words is high but the documents talk about different topics. To overcome this flaw, the “Cosine Similarity” approach is used to find the similarity between documents. Mathematically, it measures the cosine of the angle between two vectors (item1, item2) projected in an N-dimensional vector space. The advantage of cosine similarity is that it can still capture document similarity even when the Euclidean distance suggests otherwise. “The smaller the angle, the higher the similarity” — Cosine Similarity. Let’s see an example.
Consider the two sentences:

Julie loves John more than Linda loves John
Jane likes John more than Julie loves John

(The second sentence uses “likes”, which is why it appears in the word counts below.) The word counts for the two sentences are:

Word    Sentence 1  Sentence 2
John        2           2
Jane        0           1
Julie       1           1
Linda       1           0
likes       0           1
loves       2           1
more        1           1
than        1           1

So the two vectors are:

Item 1: [2, 0, 1, 1, 0, 2, 1, 1]
Item 2: [2, 1, 1, 0, 1, 1, 1, 1]

The cosine of the angle between the two vectors is 0.822, which is close to 1 (the smaller the angle, the higher the similarity). Now let’s see the ways to convert sentences into vectors. Word embeddings come from pre-trained methods such as:

Word2Vec — from Google
FastText — from Facebook
GloVe — from Stanford

In this blog, we will see the most popular embedding architecture, called Word2Vec.

Word2Vec

Word2Vec (word representations in vector space) was developed by Tomas Mikolov and a research team at Google in 2013.

Why the Word2Vec technique was created: Most NLP systems treat words as atomic units, and a limitation of those existing systems is that they have no notion of similarity between words. Also, such systems work only for small, simple models and perform well only on limited data, a few billion words or less. To train larger datasets with complex models, modern techniques use a neural network architecture, which outperforms the earlier systems on huge datasets with billions of words and vocabularies of millions of words. The technique helps to measure the quality of the resulting vector representations: similar words tend to be close to each other, and words can have multiple degrees of similarity.

Syntactic Regularities: refer to grammatical sentence correctness.
Semantic Regularities: refer to the meaning of the vocabulary symbols arranged in that structure.

The proposed technique found that the similarity of word representations goes beyond syntactic regularities and works surprisingly well for algebraic operations on word vectors.
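Returning to the worked example above: the 0.822 figure is easy to verify with a few lines of standard-library Python (my own snippet, not from the original post).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

item1 = [2, 0, 1, 1, 0, 2, 1, 1]  # term counts for sentence 1
item2 = [2, 1, 1, 0, 1, 1, 1, 1]  # term counts for sentence 2

print(round(cosine_similarity(item1, item2), 3))  # 0.822
```

The dot product is 9 and the norms are √12 and √10, giving 9/√120 ≈ 0.822, matching the value quoted in the text.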
For example, Vector(“King”) - Vector(“Man”) + Vector(“Woman”) = Vector(“Queen”), where “Queen” is the closest resulting word-vector representation.

The objectives of the following model architectures for word representations are to maximize accuracy and minimize computation complexity. The models are:

FeedForward Neural Net Language Model (NNLM)
Recurrent Neural Net Language Model (RNNLM)

All the above-mentioned models are trained using stochastic gradient descent and backpropagation.

FeedForward Neural Net Language Model (NNLM)

The NNLM model consists of input, projection, hidden and output layers. This architecture makes the computation between the projection and the hidden layer expensive, as the values in the projection layer are dense.

Recurrent Neural Net Language Model (RNNLM)

The RNN model can efficiently represent more complex patterns than a shallow neural network. The RNN model does not have a projection layer; only input, hidden and output layers. Models should be trained on huge datasets using a large-scale distributed framework called DistBelief, which gives better results.

The two new models proposed in Word2Vec,

Continuous Bag-of-Words Model
Continuous Skip-gram Model

use a distributed architecture that tries to minimize computation complexity.

Continuous Bag-of-Words Model

We denote this model as CBOW. The CBOW architecture is similar to the feedforward NNLM, but the non-linear hidden layer is removed and the projection layer is shared for all the words; thus all words get projected into the same position. The CBOW architecture predicts the current word based on its context.

Continuous Skip-gram Model

The skip-gram model is similar to CBOW. The only difference is that instead of predicting the current word based on the context, it tries to maximize the classification of a word based on another word in the same sentence. The skip-gram architecture predicts the surrounding words given the current word.
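To make the CBOW/skip-gram distinction concrete, here is a toy sketch of my own (not from the paper) that generates the (center, context) training pairs a skip-gram model is fed, using a window size of 1.

```python
def skipgram_pairs(tokens, window=1):
    """Generate (center, context) pairs for skip-gram training."""
    pairs = []
    for i, center in enumerate(tokens):
        # every token within `window` positions of the center is a context word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs("julie loves john".split()))
# [('julie', 'loves'), ('loves', 'julie'), ('loves', 'john'), ('john', 'loves')]
```

Reversing the roles, so the surrounding context predicts the center word, gives the CBOW training setup instead.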
Word2Vec Architecture Implementation — Gensim

The Gensim library enables us to develop word embeddings by training our own word2vec models on a custom corpus, with either the CBOW or the skip-gram algorithm. The implementation can be found here — https://bit.ly/33ywiaW.

Conclusion

Natural language processing requires converting texts/strings to real numbers, called word embeddings or word vectorization
Once words are converted to vectors, cosine similarity is the approach used in most use cases: document clustering, text classification, and predicting words based on sentence context
Cosine Similarity — “the smaller the angle, the higher the similarity”
Famous architectures such as Word2Vec, FastText, and GloVe convert words to vectors and leverage cosine similarity for word-similarity features
NNLM and RNNLM perform well on huge word datasets, but computation complexity is a big overhead
To overcome the computation complexity, Word2Vec uses the CBOW and skip-gram architectures, in order to maximize accuracy and minimize computation complexity
The CBOW architecture predicts the current word based on the context
The skip-gram architecture predicts surrounding words given the current word

More details are explained in the Word2Vec architecture paper.
Building a Conda environment for Horovod | by David R. Pugh | Towards Data Science
Horovod is an open-source distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. Originally developed by Uber for in-house use, Horovod was open sourced a couple of years ago and is now an official Linux Foundation AI (LFAI) project. In this post I describe how I build Conda environments for my deep learning projects when I am using Horovod to enable distributed training across multiple GPUs (either on the same node or spread across multiple nodes). If you like my approach then you can make use of the template repository on GitHub to get started with your next Horovod data science project! The first thing you need to do is install the appropriate version of the NVIDIA CUDA Toolkit on your workstation. I am using NVIDIA CUDA Toolkit 10.1 (documentation), which works with all three deep learning frameworks that are currently supported by Horovod. Typically, when installing PyTorch, TensorFlow, or Apache MXNet with GPU support using Conda, you simply add the appropriate version of the cudatoolkit package to your environment.yml file. Unfortunately, for the moment at least, the cudatoolkit package available from conda-forge does not include NVCC, which is required in order to use Horovod with either PyTorch, TensorFlow, or MXNet, as you need to compile extensions. While there are cudatoolkit-dev packages available from conda-forge that do include NVCC, I have had difficulty getting these packages to consistently install properly. Some of the available builds require manual intervention to accept license agreements, making these builds unsuitable for installing on remote systems (which is critical functionality). Other builds seem to work on Ubuntu but not on other flavors of Linux. I would encourage you to try adding cudatoolkit-dev to your environment.yml file and see what happens! The package is well maintained so perhaps it will become more stable in the future.
The most robust approach to obtaining NVCC while still using Conda to manage all the other dependencies is to install the NVIDIA CUDA Toolkit on your system and then install the meta-package nvcc_linux-64 from conda-forge, which configures your Conda environment to use the NVCC installed on the system together with the other CUDA Toolkit components installed inside the Conda environment. For more details on this package I recommend reading through the issue threads on GitHub. I prefer to specify as many dependencies as possible in the Conda environment.yml file and only specify dependencies in requirements.txt that are not available via Conda channels. Check the official Horovod installation guide for details of the required dependencies. I use the recommended channel priorities; note that conda-forge has priority over defaults.

name: null
channels:
  - pytorch
  - conda-forge
  - defaults

There are a few things worth noting about the dependencies.
Even though I have installed the NVIDIA CUDA Toolkit manually, I still use Conda to manage the other required CUDA components such as cudnn and nccl (and the optional cupti).

I use two meta-packages, cxx-compiler and nvcc_linux-64, to make sure that suitable C and C++ compilers are installed and that the resulting Conda environment is aware of the manually installed CUDA Toolkit.

Horovod requires some controller library to coordinate work between the various Horovod processes. Typically this will be some MPI implementation such as OpenMPI. However, rather than specifying the openmpi package directly, I instead opt for the mpi4py Conda package, which provides a CUDA-aware build of OpenMPI (assuming it is supported by your hardware).

Horovod also supports the Gloo collective communications library, which can be used in place of MPI. I include cmake to ensure that the Horovod extensions for Gloo are built.

Below are the core required dependencies. The complete environment.yml file is available on GitHub.

dependencies:
  - bokeh=1.4
  - cmake=3.16  # ensures that Gloo library extensions will be built
  - cudnn=7.6
  - cupti=10.1
  - cxx-compiler=1.0  # ensures C and C++ compilers are available
  - jupyterlab=1.2
  - mpi4py=3.0  # installs cuda-aware openmpi
  - nccl=2.5
  - nodejs=13
  - nvcc_linux-64=10.1  # configures environment to be "cuda-aware"
  - pip=20.0
  - pip:
    - mxnet-cu101mkl==1.6.*  # MXNet is installed prior to horovod
    - -r file:requirements.txt
  - python=3.7
  - pytorch=1.4
  - tensorboard=2.1
  - tensorflow-gpu=2.1
  - torchvision=0.5

The requirements.txt file is where all of the pip dependencies, including Horovod itself, are listed for installation. In addition to Horovod, I typically also use pip to install JupyterLab extensions to enable GPU and CPU resource monitoring via jupyterlab-nvdashboard and TensorBoard support via jupyter-tensorboard.
horovod==0.19.*
jupyterlab-nvdashboard==0.2.*
jupyter-tensorboard==0.2.*

# make sure horovod is re-compiled if environment is re-built
--no-binary=horovod

Note the use of the --no-binary option at the end of the file. Including this option ensures that Horovod will be re-built whenever the Conda environment is re-built. The complete requirements.txt file is available on GitHub. After adding any dependencies that should be downloaded via conda to the environment.yml file, and any dependencies that should be downloaded via pip to the requirements.txt file, you create the Conda environment in a sub-directory env of your project directory by running the following commands.

export ENV_PREFIX=$PWD/env
export HOROVOD_CUDA_HOME=$CUDA_HOME
export HOROVOD_NCCL_HOME=$ENV_PREFIX
export HOROVOD_GPU_OPERATIONS=NCCL
conda env create --prefix $ENV_PREFIX --file environment.yml --force

By default Horovod will try to build extensions for all detected frameworks. See the Horovod documentation on environment variables for details on additional environment variables that can be set prior to building Horovod. Once the new environment has been created, you can activate it with the following command.

conda activate $ENV_PREFIX

If you wish to use any JupyterLab extensions included in the environment.yml and requirements.txt files, then you may need to rebuild the JupyterLab application. For simplicity, I typically include the instructions for re-building JupyterLab in a postBuild script. Here is what this script looks like for my Horovod environments.

jupyter labextension install --no-build @pyviz/jupyterlab_pyviz
jupyter labextension install --no-build jupyterlab-nvdashboard
jupyter labextension install --no-build jupyterlab_tensorboard
jupyter serverextension enable jupyterlab_sql --py --sys-prefix
jupyter lab build

Use the following commands to source the postBuild script.

conda activate $ENV_PREFIX # optional if environment already active.
postBuild I typically wrap these commands into a shell script create-conda-env.sh. Running the shell script will set the Horovod build variables, create the Conda environment, activate the Conda environment, and built JupyterLab with any additional extensions. #!/bin/bash --loginset -eexport ENV_PREFIX=$PWD/envexport HOROVOD_CUDA_HOME=$CUDA_HOMEexport HOROVOD_NCCL_HOME=$ENV_PREFIXexport HOROVOD_GPU_OPERATIONS=NCCLconda env create --prefix $ENV_PREFIX --file environment.yml --forceconda activate $ENV_PREFIX. postBuild I typically put scripts inside a bin directory in my project root directory. The script should be run from the project root directory as follows. ./bin/create-conda-env.sh # assumes that $CUDA_HOME is set properly After building the Conda environment you can check that Horovod has been built with support for the deep learning frameworks TensorFlow, PyTorch, Apache MXNet, and the controllers MPI and Gloo with the following command. conda activate $ENV_PREFIX # optional if environment already activehorovodrun --check-build You should see output similar to the following. Horovod v0.19.4:Available Frameworks: [X] TensorFlow [X] PyTorch [X] MXNetAvailable Controllers: [X] MPI [X] GlooAvailable Tensor Operations: [X] NCCL [ ] DDL [ ] CCL [X] MPI [X] Gloo To see the full list of packages installed into the environment run the following command. conda activate $ENV_PREFIX # optional if environment already activeconda list If you add (remove) dependencies to (from) the environment.yml file or the requirements.txt file after the environment has already been created, then you can re-create the environment with the following command. conda env create --prefix $ENV_PREFIX --file environment.yml --force However, whenever I add new dependencies I prefer to re-run the Bash script which will re-build both the Conda environment and JupyterLab. 
./bin/create-conda-env.sh

Finding a reproducible process for building Horovod extensions for my deep learning projects was tricky. Key to my solution is the use of meta-packages from conda-forge to ensure that the appropriate compilers are installed and that the resulting Conda environment is aware of the system-installed NVIDIA CUDA Toolkit. The second key is the use of the --no-binary flag in the requirements.txt file to ensure that Horovod is re-built whenever the Conda environment is re-built. If you like my approach then you can make use of the template repository on GitHub to get started with your next Horovod data science project!
Shared Preferences - Save, edit, retrieve, delete in Kotlin?
This example demonstrates how Save, edit, retrieve, delete shared preference data in Kotlin. Step 1 − Create a new project in Android Studio, go to File ⇉ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:padding="8dp" tools:context=".MainActivity"> <Button android:id="@+id/btnSave" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentStart="true" android:layout_centerInParent="true" android:onClick="saveData" android:text="Save" /> <Button android:id="@+id/btnRetrieve" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerInParent="true" android:layout_centerHorizontal="true" android:onClick="readData" android:text="Read" /> <Button android:id="@+id/btnClear" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentEnd="true" android:layout_centerInParent="true" android:onClick="clearData" android:text="Clear" /> <EditText android:id="@+id/etEmail" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/etName" android:layout_alignParentEnd="true" android:layout_marginTop="10dp" android:ems="10" android:hint="Email" android:inputType="textEmailAddress" /> <EditText android:id="@+id/etName" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_alignStart="@+id/etEmail" android:layout_marginTop="40dp" android:ems="10" android:hint="Name" android:inputType="text" /> <LinearLayout android:layout_width="match_parent" android:layout_height="400dp" android:layout_below="@+id/btnSave" android:layout_marginTop="10dp" android:orientation="vertical" 
android:padding="8dp">
   <TextView
      android:id="@+id/textViewName"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:layout_marginTop="20dp"
      android:textColor="@android:color/holo_blue_light"
      android:textSize="24sp"
      android:textStyle="bold" />
   <TextView
      android:id="@+id/textViewEmail"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:layout_marginTop="10dp"
      android:textColor="@android:color/holo_blue_light"
      android:textSize="24sp"
      android:textStyle="bold" />
</LinearLayout>
</RelativeLayout>

Step 3 − Add the following code to src/MainActivity.kt

import android.content.Context
import android.content.SharedPreferences
import android.os.Bundle
import android.view.View
import android.widget.EditText
import android.widget.TextView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
@Suppress("NULLABILITY_MISMATCH_BASED_ON_JAVA_ANNOTATIONS")
class MainActivity : AppCompatActivity() {
   var editTextName: EditText? = null
   var editTextEmail: EditText? = null
   lateinit var textViewName: TextView
   lateinit var textViewEmail: TextView
   private val myPreference = "myPref"
   private val name = "nameKey"
   private val email = "emailKey"
   var sharedPreferences: SharedPreferences? = null
   override fun onCreate(savedInstanceState: Bundle?)
   {
      super.onCreate(savedInstanceState)
      setContentView(R.layout.activity_main)
      title = "KotlinApp"
      editTextEmail = findViewById(R.id.etEmail)
      editTextName = findViewById(R.id.etName)
      sharedPreferences = getSharedPreferences(myPreference, Context.MODE_PRIVATE)
      if (sharedPreferences!!.contains(name)) {
         editTextName?.setText(sharedPreferences!!.getString(name, ""))
      }
      if (sharedPreferences!!.contains(email)) {
         editTextEmail?.setText(sharedPreferences!!.getString(email, ""))
      }
   }
   fun readData(view: View) {
      textViewEmail = findViewById(R.id.textViewEmail)
      textViewName = findViewById(R.id.textViewName)
      var strName: String = editTextName?.text.toString().trim()
      var strEmail: String = editTextEmail?.text.toString().trim()
      strName = sharedPreferences!!.getString(name, "")
      strEmail = sharedPreferences!!.getString(email, "")
      sharedPreferences = getSharedPreferences(myPreference, Context.MODE_PRIVATE)
      if (sharedPreferences!!.contains(name)) {
         textViewName.text = strName
      }
      if (sharedPreferences!!.contains(email)) {
         textViewEmail.text = strEmail
      }
      Toast.makeText(baseContext, "Data retrieved", Toast.LENGTH_SHORT).show()
   }
   fun saveData(view: View) {
      val strName: String = editTextName?.text.toString().trim()
      val strEmail: String = editTextEmail?.text.toString().trim()
      val editor: SharedPreferences.Editor = sharedPreferences!!.edit()
      editor.putString(name, strName)
      editor.putString(email, strEmail)
      editor.apply()
      Toast.makeText(baseContext, "Saved", Toast.LENGTH_SHORT).show()
   }
   fun clearData(view: View) {
      editTextName!!.text.clear()
      editTextEmail!!.text.clear()
      textViewName.text = ""
      textViewEmail.text = ""
      Toast.makeText(baseContext, "Cleared data", Toast.LENGTH_SHORT).show()
   }
}

Step 4 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="app.com.kotlipapp">
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen − Click here to download the project code.
Training models with a progress bar | by Adam Oudad | Towards Data Science
tqdm is a Python library for adding progress bars. It lets you configure and display a progress bar with the metrics you want to track. Its ease of use and versatility makes it the perfect choice for tracking machine learning experiments. I organize this tutorial in two parts. I will first introduce tqdm, then show an example for machine learning. For each code fragment in this article, we will import the sleep function from Python's time library, as it will let us slow down the program to see the progress bar update.

from time import sleep

You can install tqdm with pip install tqdm. The library comes with various iterators, each dedicated to a specific use, which I am going to present. tqdm is the default iterator. It takes an iterator object as argument and displays a progress bar as it iterates over it. Output is

100%|█████████████████████████████████| 5/5 [00:00<00:00, 9.90it/s]

You can see the nice output with 9.90it/s, meaning an average speed of 9.90 iterations per second. "it" for iterations can be configured to something else, and this is what we will see in the next example. trange follows the same template as range in Python. For example, give trange the number of iterations.

Proving P=NP: 100%|████████████| 20/20 [00:02<00:00, 9.91carrots/s]

You can see in this example we added a (joke) description of what we are doing and the unit for each iteration. tqdm has two methods that can update what is displayed in the progress bar. To use these methods, we need to assign the tqdm iterator instance to a variable. This can be done either with the = operator or the with keyword in Python. We can, for example, update the postfix with the list of divisors of the number i. Let's use this function to get the list of divisors. And here is our code with the progress bar. If you feel like a distinguished Python programmer, you may use the with keyword like so. Using with will automatically call pbar.close() at the end of the block. Here is the status displayed at i=6.
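The code snippets the article refers to are not included in the text above. Here is a minimal reconstruction of the divisors example; the helper name get_divisors, the loop bounds, and the definition of "divisors" as proper divisors (excluding the number itself) are my own assumptions, chosen to be consistent with the output shown below.

```python
from time import sleep
from tqdm import tqdm

def get_divisors(n):
    # Proper divisors of n (excluding n itself), e.g. [1, 2, 3] for 6
    return [i for i in range(1, n) if n % i == 0]

with tqdm(range(10), unit="carrots") as pbar:
    for i in pbar:
        parity = "even" if i % 2 == 0 else "odd"
        pbar.set_description(f"Testing {parity} number {i}")
        pbar.set_postfix(divisors=get_divisors(i))
        sleep(0.1)
```

Because the instance is assigned to pbar, set_description and set_postfix can update the prefix and suffix of the bar on every iteration.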
Testing even number 6: 70%|██████████████▋ | 7/10 [00:03<00:01, 1.76carrots/s, divisors=[1, 2, 3]]

In this section, we use a neural network written in PyTorch and train it using tqdm to display the loss and accuracy. Here is the model. This is a simple perceptron model that we can use for processing and classifying images of digits from the MNIST dataset. The following code for loading the MNIST dataset is inspired by a PyTorch example. We just loaded our data and defined our model and settings, so we can now run a training experiment. I am once again using the sleep function to pause the program so that we can see the update of the progress bar. As you can see, we just applied what we learned previously, in particular with tepoch.set_postfix and tepoch.set_description, which let you update the information displayed by the progress bar. Here is a capture of the output while the program was running.

Epoch 1: 15%|▉ | 142/937 [00:16<01:32, 8.56batch/s, accuracy=89.1, loss=0.341]

This gives us an idea of how tqdm can be used in practice. You can achieve much more with tqdm, like adapting it to Jupyter notebooks, finely configuring the progress bar updates or nesting progress bars, so I recommend you read the documentation for more: https://github.com/tqdm/tqdm

Thank you for reading!

Originally published at https://adamoudad.github.io on October 12, 2020.
C++ Program for sum of arithmetic series
Given ‘a’ (first term), ‘d’ (common difference) and ‘n’ (the number of terms in the series), the task is to generate the series and calculate its sum.

An arithmetic series is a sequence of numbers with a common difference, where the first term of the series is fixed at ‘a’ and the common difference between consecutive terms is ‘d’. It is represented as −

a, a + d, a + 2d, a + 3d, . . .

Input-: a = 1.5, d = 0.5, n=10
Output-: sum of series A.P is : 37.5

Input : a = 2.5, d = 1.5, n = 20
Output : sum of series A.P is : 335

Approach used below is as follows −

Input the data as the first term (a), common difference (d) and the number of terms in the series (n)
Traverse the loop till n and keep adding the first term to a temporary variable with the difference
Print the resultant output

Start
Step 1-> declare Function to find sum of series
   float sum(float a, float d, int n)
      set float sum = 0
      Loop For int i=0 and i<n and i++
         Set sum = sum + a
         Set a = a + d
      End
      return sum
Step 2-> In main()
   Set int n = 10
   Set float a = 1.5, d = 0.5
   Call sum(a, d, n)
Stop

#include<bits/stdc++.h>
using namespace std;
// Function to find sum of series.
float sum(float a, float d, int n) {
   float sum = 0;
   for (int i=0;i<n;i++) {
      sum = sum + a;
      a = a + d;
   }
   return sum;
}
int main() {
   int n = 10;
   float a = 1.5, d = 0.5;
   cout<<"sum of series A.P is : "<<sum(a, d, n);
   return 0;
}

sum of series A.P is : 37.5
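As a cross-check (an addition for illustration, not part of the original article), the same sum follows from the standard closed-form formula S = n/2 * (2a + (n-1)d), which avoids the loop entirely. A quick sketch in Python:

```python
def ap_sum(a, d, n):
    # Closed-form sum of an arithmetic progression: n/2 * (2a + (n-1)d)
    return n / 2 * (2 * a + (n - 1) * d)

print(ap_sum(1.5, 0.5, 10))  # 37.5, matching the loop-based result above
print(ap_sum(2.5, 1.5, 20))  # 335.0
```

The closed form gives the same values as the iterative version for both sample inputs.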
How to Create a Savepoint in JDBC? - GeeksforGeeks
28 Dec, 2020

A Savepoint object is used to save the current state of the database, which can be rolled back to afterwards. Savepoints are similar to SQL transactions and are generally used to roll back if something goes wrong within the current transaction. The connection.setSavepoint() method of the Connection interface in Java is used to create an object which references the current state of the database within the transaction. The following example shows the usage of Savepoint and rollback in a JDBC application.

Syntax

connection.setSavepoint()

Returns: It returns a new Savepoint object.

Exceptions: SQLException is thrown if a database access error occurs, this method is called while participating in a distributed transaction, this method is called on a closed connection, or this Connection object is currently in auto-commit mode. SQLFeatureNotSupportedException is thrown if the JDBC driver does not support this method.

Example

Java

// Java program to demonstrate how to make a save point

import java.io.*;
import java.sql.*;

class GFG {
    public static void main(String[] args) throws SQLException
    {
        // db credentials
        String jdbcEndpoint = "jdbc:mysql://localhost:3000/GEEKSFORGEEKS";
        String userid = "GFG";
        String password = "GEEKSFORGEEKS";

        // create a connection to db
        Connection connection = DriverManager.getConnection(
            jdbcEndpoint, userid, password);

        // construct a query
        Statement deleteStmt = connection.createStatement();
        String deleteQuery = "DELETE FROM USER WHERE AGE > 15";

        // Disable auto commit to connection
        connection.setAutoCommit(false);

        /* Table USER
        +--------+---------+------------+
        | USR_ID | NAME    | AGE        |
        +--------+---------+------------+
        | 1      | GFG_1   | 10         |
        | 2      | GFG_2   | 20         |
        | 3      | GFG_3   | 25         |
        +--------+---------+------------+
        */

        // Create a savepoint object before executing the
        // deleteQuery
        Savepoint beforeDeleteSavepoint
            = connection.setSavepoint();

        // Executing the deleteQuery
        int rowsDeleted
            = deleteStmt.executeUpdate(deleteQuery);

        /* Table USER after executing deleteQuery
        +--------+---------+------------+
        | USR_ID | NAME    | AGE        |
        +--------+---------+------------+
        | 1      | GFG_1   | 10         |
        +--------+---------+------------+
        */

        // Rollback to our beforeDeleteSavepoint
        connection.rollback(beforeDeleteSavepoint);
        connection.commit();

        /* Table USER after rollback
        +--------+---------+------------+
        | USR_ID | NAME    | AGE        |
        +--------+---------+------------+
        | 1      | GFG_1   | 10         |
        | 2      | GFG_2   | 20         |
        | 3      | GFG_3   | 25         |
        +--------+---------+------------+
        */
    }
}
Node.js - Streams
Streams are objects that let you read data from a source or write data to a destination in continuous fashion. In Node.js, there are four types of streams −

Readable − Stream which is used for read operation.

Writable − Stream which is used for write operation.

Duplex − Stream which can be used for both read and write operation.

Transform − A type of duplex stream where the output is computed based on input.

Each type of Stream is an EventEmitter instance and throws several events at different instances of time. For example, some of the commonly used events are −

data − This event is fired when there is data available to read.

end − This event is fired when there is no more data to read.

error − This event is fired when there is any error receiving or writing data.

finish − This event is fired when all the data has been flushed to the underlying system.

This tutorial provides a basic understanding of the commonly used operations on Streams.

Create a text file named input.txt having the following content −

Tutorials Point is giving self learning content to teach the world in simple and easy way!!!!!

Create a js file named main.js with the following code −

var fs = require("fs");
var data = '';

// Create a readable stream
var readerStream = fs.createReadStream('input.txt');

// Set the encoding to be utf8.
readerStream.setEncoding('UTF8'); // Handle stream events --> data, end, and error readerStream.on('data', function(chunk) { data += chunk; }); readerStream.on('end',function() { console.log(data); }); readerStream.on('error', function(err) { console.log(err.stack); }); console.log("Program Ended"); Now run the main.js to see the result − $ node main.js Verify the Output. Program Ended Tutorials Point is giving self learning content to teach the world in simple and easy way!!!!! Create a js file named main.js with the following code − var fs = require("fs"); var data = 'Simply Easy Learning'; // Create a writable stream var writerStream = fs.createWriteStream('output.txt'); // Write the data to stream with encoding to be utf8 writerStream.write(data,'UTF8'); // Mark the end of file writerStream.end(); // Handle stream events --> finish, and error writerStream.on('finish', function() { console.log("Write completed."); }); writerStream.on('error', function(err) { console.log(err.stack); }); console.log("Program Ended"); Now run the main.js to see the result − $ node main.js Verify the Output. Program Ended Write completed. Now open output.txt created in your current directory; it should contain the following − Simply Easy Learning Piping is a mechanism where we provide the output of one stream as the input to another stream. It is normally used to get data from one stream and to pass the output of that stream to another stream. There is no limit on piping operations. Now we'll show a piping example for reading from one file and writing it to another file. 
Create a js file named main.js with the following code − var fs = require("fs"); // Create a readable stream var readerStream = fs.createReadStream('input.txt'); // Create a writable stream var writerStream = fs.createWriteStream('output.txt'); // Pipe the read and write operations // read input.txt and write data to output.txt readerStream.pipe(writerStream); console.log("Program Ended"); Now run the main.js to see the result − $ node main.js Verify the Output. Program Ended Open output.txt created in your current directory; it should contain the following − Tutorials Point is giving self learning content to teach the world in simple and easy way!!!!! Chaining is a mechanism to connect the output of one stream to another stream and create a chain of multiple stream operations. It is normally used with piping operations. Now we'll use piping and chaining to first compress a file and then decompress the same. Create a js file named main.js with the following code − var fs = require("fs"); var zlib = require('zlib'); // Compress the file input.txt to input.txt.gz fs.createReadStream('input.txt') .pipe(zlib.createGzip()) .pipe(fs.createWriteStream('input.txt.gz')); console.log("File Compressed."); Now run the main.js to see the result − $ node main.js Verify the Output. File Compressed. You will find that input.txt has been compressed and it created a file input.txt.gz in the current directory. Now let's try to decompress the same file using the following code − var fs = require("fs"); var zlib = require('zlib'); // Decompress the file input.txt.gz to input.txt fs.createReadStream('input.txt.gz') .pipe(zlib.createGunzip()) .pipe(fs.createWriteStream('input.txt')); console.log("File Decompressed."); Now run the main.js to see the result − $ node main.js Verify the Output. File Decompressed. 
Java Examples - print stack trace
How to print the stack trace of an Exception?

This example shows how to print the stack trace of an exception using the printStackTrace() method of the exception class.

public class Main {
   public static void main (String args[]) {
      int array[] = {20,20,40};
      int num1 = 15, num2 = 10;
      int result = 10;
      try {
         result = num1/num2;
         System.out.println("The result is" +result);

         for(int i = 5; i >= 0; i--) {
            System.out.println("The value of array is" +array[i]);
         }
      } catch (Exception e) {
         e.printStackTrace();
      }
   }
}

The above code sample will produce the following result.

The result is1
java.lang.ArrayIndexOutOfBoundsException: 5
at Main.main(Main.java:11)

The following is another example of printing the stack trace of an Exception in Java.

public class Demo {
   public static void main(String[] args) {
      try {
         ExceptionFunc();
      } catch(Throwable e) {
         e.printStackTrace();
      }
   }
   public static void ExceptionFunc() throws Throwable {
      Throwable t = new Throwable("This is new Exception in Java...");
      StackTraceElement[] trace = new StackTraceElement[] {
         new StackTraceElement("ClassName","methodName","fileName",5)
      };
      t.setStackTrace(trace);
      throw t;
   }
}

The above code sample will produce the following result.

java.lang.Throwable: This is new Exception in Java...
at ClassName.methodName(fileName:5)
DateTime.ToLongTimeString() Method in C#
The DateTime.ToLongTimeString() method in C# is used to convert the value of the current DateTime object to its equivalent long time string representation. Following is the syntax − public string ToLongTimeString (); Let us now see an example to implement the DateTime.ToLongTimeString() method − using System; using System.Globalization; public class Demo { public static void Main() { DateTime d = DateTime.Now; Console.WriteLine("Date = {0}", d); Console.WriteLine("Current culture = "+CultureInfo.CurrentCulture.Name); var pattern = CultureInfo.CurrentCulture.DateTimeFormat; string str = d.ToLongTimeString(); Console.WriteLine("Long time string = {0}", pattern.LongTimePattern); Console.WriteLine("Long time string representation = {0}", str); } } This will produce the following output − Date = 10/16/2019 8:41:03 AM Current culture = en-US Long time string = h:mm:ss tt Long time string representation = 8:41:03 AM Let us now see another example to implement the DateTime.ToLongTimeString() method − using System; public class Demo { public static void Main() { DateTime d = new DateTime(2019, 11, 11, 7, 11, 25); Console.WriteLine("Date = {0}", d); string str = d.ToLongTimeString(); Console.WriteLine("Long time string representation = {0}", str); } } This will produce the following output − Date = 11/11/2019 7:11:25 AM Long time string representation = 7:11:25 AM
Check if Node.js MySQL Server is Active or not - GeeksforGeeks
07 Oct, 2021

We will see how to check if the server where our MySQL database is hosted is active or not.

Syntax:

database_connection.ping(callback);

Modules:

NodeJS
ExpressJS
MySQL

Setting environment and Execution:

Create Project

npm init

Install Modules

npm install express
npm install mysql

File Structure:

Create Server

index.js

const express = require("express");
const database = require('./sqlConnection');

const app = express();

app.listen(5000, () => {
    console.log(`Server is up and running on 5000 ...`);
});

Create and export the database connection object

sqlConnection.js

const mysql = require("mysql");

let db_con = mysql.createConnection({
    host: "localhost",
    user: "root",
    password: ''
});

db_con.connect((err) => {
    if (err) {
        console.log("Database Connection Failed !!!", err);
    } else {
        console.log("connected to Database");
    }
});

module.exports = db_con;

Create a route to check whether the MySQL server is active or not.

app.get("/getMysqlStatus", (req, res) => {
    database.ping((err) => {
        if (err) return res.status(500).send("MySQL Server is Down");
        res.send("MySQL Server is Active");
    });
});

Full index.js file:

Javascript

const express = require("express");
const database = require('./sqlConnection');

const app = express();

app.listen(5000, () => {
    console.log(`Server is up and running on 5000 ...`);
});

app.get("/getMysqlStatus", (req, res) => {
    database.ping((err) => {
        if (err) return res.status(500).send("MySQL Server is Down");
        res.send("MySQL Server is Active");
    });
});

Run Server

node index.js

Output: Put this link in your browser: http://localhost:5000/getMysqlStatus

If the server is not active you will see the following output in your browser:

MySQL Server is Down

If the server is active you will see the following output in your browser:

MySQL Server is Active
COVID-19 Data Visualization using Python | by Jaskeerat Singh Bhatia | Towards Data Science
Data Visualization is the first step towards getting an insight into a large data set in every data science project. Once the data has been acquired and preprocessed (cleaned and deduplicated), the next step in the Data Science Life Cycle is Exploratory Data Analysis, which kicks off with visualization of the data. The aim here is to extract useful information from the data. I have used Python and a few of its powerful libraries to achieve the task. Also, I have used Google Colab notebooks to write the code, so as to avoid the hassle of installing any IDE or packages in case you wish to follow along.

The first step is to open a new Google Colab ipython notebook and import the libraries we require.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px  ### for plotting the data on world map

These visualizations are based on data as of May 25, 2020. I have used the daily report data published by Johns Hopkins University for May 25, 2020. The next part of the code deals with loading the .csv data into our project.

path = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/05-25-2020.csv'
df = pd.read_csv(path)
df.info()
df.head()

In just two lines of code we have our data loaded and ready for use as a Pandas DataFrame. The next two lines display information about the data (metadata), i.e., a total of 3409 rows of data and 11 columns. It also gives us a preview of the first five rows. Now that our data has loaded successfully, the next step is to preprocess the data before using it for plotting. It will include:

Removing superfluous columns like 'FIPS', 'Admin2', 'Last_Update' (since all the data is for a single day, 25th May).
Removing columns 'Province_State' and 'Combined_Key' since statewide data is not available for all the countries.
Grouping together data by 'Country_Region' and renaming the column to 'Country'.

df.drop(['FIPS', 'Admin2', 'Last_Update', 'Province_State', 'Combined_Key'], axis=1, inplace=True)
df.rename(columns={'Country_Region': "Country"}, inplace=True)
df.head()

The data can be grouped together by the 'groupby' function of the dataframe. It is similar to the GROUP BY statement in SQL.

world = df.groupby("Country")['Confirmed','Active','Recovered','Deaths'].sum().reset_index()
world.head()

Finally our data is cleaned and ready to use.

### Find top 20 countries with maximum number of confirmed cases
top_20 = world.sort_values(by=['Confirmed'], ascending=False).head(20)
### Generate a Barplot
plt.figure(figsize=(12,10))
plot = sns.barplot(top_20['Confirmed'], top_20['Country'])
for i,(value,name) in enumerate(zip(top_20['Confirmed'],top_20['Country'])):
    plot.text(value,i-0.05,f'{value:,.0f}',size=10)
plt.show()

top_5 = world.sort_values(by=['Confirmed'], ascending=False).head()
### Generate a Barplot
plt.figure(figsize=(15,5))
confirmed = sns.barplot(top_5['Confirmed'], top_5['Country'], color = 'red', label='Confirmed')
recovered = sns.barplot(top_5['Recovered'], top_5['Country'], color = 'green', label='Recovered')
### Add Texts for Barplots
for i,(value,name) in enumerate(zip(top_5['Confirmed'],top_5['Country'])):
    confirmed.text(value,i-0.05,f'{value:,.0f}',size=9)
for i,(value,name) in enumerate(zip(top_5['Recovered'],top_5['Country'])):
    recovered.text(value,i-0.05,f'{value:,.0f}',size=9)
plt.legend(loc=4)
plt.show()

A choropleth map is a type of thematic map in which areas are shaded or patterned in proportion to a statistical variable that represents an aggregate summary of a geographic characteristic within each area, such as population density or per-capita income.
Choropleth maps provide an easy way to visualize how a measurement varies across a geographic area or show the level of variability within a region.

figure = px.choropleth(world, locations='Country', locationmode='country names',
                       color='Confirmed', hover_name='Country',
                       color_continuous_scale='tealgrn', range_color=[1,1000000],
                       title='Countries with Confirmed cases')
figure.show()

We can zoom into the map and hover over a particular region to see the confirmed number of cases for that country.

Complete code is available at my GitHub repo: https://github.com/jaskeeratbhatia/covid-19-data-visulaization/blob/master/covid-19-data-25-may-2020-revised.ipynb

Johns Hopkins GitHub repo for data: https://github.com/CSSEGISandData/COVID-19
Wikipedia: https://en.wikipedia.org/wiki/Choropleth_map
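The country-level aggregation done earlier with df.groupby can be mimicked in plain Python, which makes clear what pandas is doing under the hood. The sample rows below are invented for illustration; only the column names follow the article:

```python
from collections import defaultdict

# Tiny stand-in for df.groupby("Country")[cols].sum():
# sum each metric per country (rows are made-up sample data).
rows = [
    {"Country": "US", "Confirmed": 100, "Deaths": 5},
    {"Country": "US", "Confirmed": 50,  "Deaths": 2},
    {"Country": "India", "Confirmed": 70, "Deaths": 1},
]

totals = defaultdict(lambda: {"Confirmed": 0, "Deaths": 0})
for row in rows:
    for col in ("Confirmed", "Deaths"):
        totals[row["Country"]][col] += row[col]

print(dict(totals))
# {'US': {'Confirmed': 150, 'Deaths': 7}, 'India': {'Confirmed': 70, 'Deaths': 1}}
```

pandas does the same thing vectorized, which is why the one-liner in the article suffices on the real data set.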
Why C++ is best for Competitive Programming? - GeeksforGeeks
22 Jun, 2021

C++ is the most preferred language for competitive programming. In this article, some features of C++ are discussed that make it best for competitive programming.

STL (Standard Template Library): C++ has a vast library called STL, which is a collection of C++ templates providing common programming data structures and functions such as lists, stacks, arrays, etc. It makes the code very short and increases the speed of coding. It is a library of container classes, algorithms, and iterators. For example, std::min is used to find the smallest of the numbers passed to it. It returns the first of them if there is more than one.

Program 1:

// C++ program to demonstrate the
// use of min() function

#include <iostream>
using namespace std;

// Driver Code
int main()
{
    double a = 12.123;
    double b = 12.456;

    // Print the minimum of the
    // two numbers
    cout << min(a, b);

    return 0;
}

Output:

12.123

Faster: C/C++ is faster than any other programming language in terms of speed. C++ source code is compiled to machine code, whereas Python follows a different tactic, as it is interpreted. Compiled code generally runs faster than interpreted code.

Program 2: Below is a program to demonstrate how to measure execution time using the clock() function:

// C++ program to measure execution
// time using clock() function

#include <bits/stdc++.h>
using namespace std;

// Function whose time taken is
// to be measured
void fun()
{
    for (int i = 0; i < 10; i++) {
    }
}

// Driver Code
int main()
{
    // clock_t clock(void) returns the
    // number of clock ticks elapsed
    // after program was launched.
    clock_t start, end;

    // Recording the starting
    // clock tick
    start = clock();

    fun();

    // Recording the end clock tick
    end = clock();

    // Calculating total time taken
    // by the program
    double time_taken = double(end - start) / double(CLOCKS_PER_SEC);
    cout << "Time taken by program is: " << fixed << time_taken << setprecision(5);
    cout << " sec " << endl;

    return 0;
}

Output:

Time taken by program is: 0.000001 sec

Simple Constructs: C++ is a simple language, i.e., much closer to a low-level language; therefore it's much easier to write code in C++ than in Java. This also makes the code-generation process simpler, optimized, and fast in C++ (i.e., unlike Java there is no conversion of code to bytecode first and then to machine code).

Widely used: C++ is considered to be the best choice for competitive programming by 75% of programmers across the world, as it is usually faster than Java and Python and most of the resources are available in C++.

Templates: A template is a simple and yet very powerful tool in C++. The simple idea is to pass the data type as a parameter so that we don't need to write the same code for different data types.

Program 3: Below is a program to demonstrate templates:

// C++ program to demonstrate template
#include <iostream>
using namespace std;

// Generic function to find minimum
// of 2 data types
template <typename T>
T Min(T x, T y)
{
    return (x < y) ? x : y;
}

// Driver Code
int main()
{
    cout << Min(7, 3) << endl;
    cout << Min('z', 'a') << endl;
    return 0;
}

Output:

3
a

Snippets: Snippets provide an easy way to insert commonly used code or functions into a larger section of code. Instead of rewriting the same code over and over again, a programmer can save the code as a snippet and simply drag and drop the snippet wherever it is needed. By using snippets, programmers and web developers can also organize common code sections into categories, creating a cleaner development environment. Snippets also increase coding speed, which helps in coding contests.

Program 4: Below is an example of a sample snippet that can be used in competitive programming:

// C++ program to demonstrate snippets
#include <bits/stdc++.h>
using namespace std;

#define MOD 1000000007
#define endl "\n"
#define lli long long int
#define ll long long
#define mp make_pair
#define pb push_back

void solve()
{
    // Write down your desired
    // code here
    cout << "Write your code here";
}

// Driver Code
int main()
{
    // Handle t number of testcases
    int t = 1;
    while (t--) {
        solve();
    }
    return 0;
}

Output:

Write your code here
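For contrast with the clock() measurement in Program 2, the same timing pattern can be written in Python with time.perf_counter. This is my own sketch for comparison, not part of the original article:

```python
import time

# Rough Python counterpart of Program 2's clock() measurement:
# record a timestamp, run the function, record another timestamp.
def fun():
    for _ in range(10):
        pass

start = time.perf_counter()   # starting timestamp
fun()
end = time.perf_counter()     # end timestamp

time_taken = end - start
print(f"Time taken by program is: {time_taken:.6f} sec")
```

On the same tiny loop, the interpreted version will typically report a noticeably larger elapsed time than the compiled C++ program, which is the article's point about speed.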
How to create footer for Bootstrap 4 card
To create a footer in a Bootstrap 4 card, use the card-footer class.

Set the footer −

<div class="card-footer">
  Footer message
</div>

The footer class comes after the card-body and card-header classes, since the footer is always at the bottom, like the footer of a web page. Here is the complete code −

<div class="card">
  <div class="card-header">Venue: 811, KY Road, New York</div>
  <div class="card-body">Timings: 9AM-11AM</div>
  <div class="card-footer">Reach before 9AM</div>
</div>

Let us see how to create a card footer in Bootstrap 4 −

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Bootstrap Example</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/js/bootstrap.min.js"></script>
  </head>
  <body>
    <div class="container">
      <h2>Church Worship</h2>
      <div class="card">
        <div class="card-header">Venue: 811, KY Road, New York</div>
        <div class="card-body">Timings: 9AM-11AM</div>
        <div class="card-footer">Reach before 9AM</div>
      </div>
    </div>
  </body>
</html>
How to Convert Kilometres to Miles using Python?
The ratio of kilometres to miles is 1 km = 0.621371 mile.

>>> km = 5
>>> m = km * 0.621371
>>> m
3.106855
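The one-off calculation above can be wrapped in a reusable function. The function and constant names below are my own choice, not from the text:

```python
KM_TO_MILES_RATIO = 0.621371  # miles per kilometre, as stated above

def km_to_miles(km):
    """Convert kilometres to miles using the ratio from the text."""
    return km * KM_TO_MILES_RATIO

print(round(km_to_miles(5), 6))  # 3.106855
```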
Strategy Analysis: Pairs Trading. We are going to walk through the basics... | by Posey | Towards Data Science
Pairs trading is a widely used strategy in which a long position is "paired" with a short position of two highly correlated (or cointegrated) stocks. There are many reasons for taking such a position. The position can be market neutral. That is to say, you can establish a position that seeks to make money regardless of the performance of the broader market. In such a trade, as long as the long goes up more than the short goes up or the short goes down more than the long goes down, it is a profitable trade. That was quite the tongue twister. Let's explain... Typically, stocks from the same sector and stocks that are direct competitors to one another are heavily correlated and consequently great candidates for pairs trading. Cointegration is also a criterion for a pairs trade, and cointegration is oftentimes the more reliable basis for successful pairs trading. Cointegration describes the distance between the two assets in price over time, whereas correlation describes the tendency to move in similar directions. Perhaps the most obvious pairs trade would be Pepsi (PEP) and Coca-Cola (KO); this is the classical example. Let's say one is unsure of the direction of both stocks, but one is confident that no matter the direction, Coca-Cola will outperform. With this view, one could enter a market-neutral position with a long position in Coca-Cola and a short position in Pepsi. Even if both stocks go down in price, as long as Pepsi goes down more than Coca-Cola, it will be a profitable trade (provided they were equally weighted positions when the trade was opened). Let's see how this strategy would perform. Let's pretend the trader made this trade at the beginning of April (2019). Let's assume they started with $100,000 and put $50,000 towards the long position and $50,000 towards the short position, effectively a market-neutral position. Let's do some analysis...
import ffn
import numpy as np

prices = ffn.get('pep,ko', start='2019-04-01')
stats = prices.calc_stats()
stats.display()

As you can see, during this 3-month period KO outperformed PEP by about 1.5%. If you had simply gone long on either stock you would have far outperformed, with a much higher return. Your return for this pairs trade would be about 0.785%, a $785 return on the $100,000 trade. So why would one perform such a trade? If one had a market-neutral view but believed in KO outperforming Pepsi, this was the right trade to make. Such a trader protected their downside by not betting on market or sector direction. Let's say you did a 100% long position in KO but the entire market went down, dragging KO with it; you would have been in the red. This market-neutral strategy ensured profits as long as KO outperformed PEP. Many times you may not have a market-neutral view but want to perform a pairs trade. In such a case, you could adjust the trade accordingly by changing the percentages of your short and long positions (change from 50/50 to whatever your market view is). These stocks are heavily correlated, and we can view this correlation and their performance very easily. First we rebase to view the stocks on the same price scale.

# calculate correlation coefficient and then plot prices
correlation = np.corrcoef(prices)
ax = prices.rebase().plot()
You may notice one of the stocks in that basket is starting to drift above its peers and has a P/E ratio of 27x. You might be mostly market-neutral on that sector but believe that single stock is probably overvalued. You could open a pairs trade whereby you're long one (or more) of the stocks trading at about 24x and are short the stock trading at 27x. If the 27x stock reverts back towards the 24x stocks and the 24x stocks perform in lockstep, then you're likely to profit. There are many more complicated strategies and methods used for pairs trading. This idea was first popularized in the 80s at Morgan Stanley. So it's been around, been researched extensively, and continues to be widely used. I encourage you to dig even deeper into this strategy if it interests you. Check out Dataset Daily, a newsletter where we study companies, industries, and markets each week. This analysis was only a few lines of code, but if you're interested you can find it and other analysis on Github here. If you are interested in following up on the intricacies of pairs trading you can check out this paper or other articles and papers like it. poseidon01.ssrn.com www.investopedia.com Let's continue the conversation on Twitter. Note from Towards Data Science's editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author's contribution. You should not rely on an author's works without seeking professional advice. See our Reader Terms for details.
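The mean-reversion idea sketched above can be made concrete with a spread z-score: short the pair when the spread between the two stocks is unusually rich, go long when it is unusually cheap. Below is a minimal, self-contained illustration with synthetic prices; the entry threshold, series, and signal names are my own assumptions, not from the article:

```python
# Minimal sketch of a spread z-score signal for a pairs trade.
# Prices are synthetic and the 1.5-sigma entry threshold is illustrative.

def zscore_signal(prices_a, prices_b, entry=1.5):
    """Label each day 'short spread', 'long spread' or 'flat'
    based on how far the A-B spread sits from its mean."""
    spread = [a - b for a, b in zip(prices_a, prices_b)]
    mean = sum(spread) / len(spread)
    var = sum((s - mean) ** 2 for s in spread) / len(spread)
    std = var ** 0.5
    signals = []
    for s in spread:
        z = (s - mean) / std
        if z > entry:
            signals.append("short spread")   # spread rich: short A, long B
        elif z < -entry:
            signals.append("long spread")    # spread cheap: long A, short B
        else:
            signals.append("flat")
    return signals

a = [100, 101, 102, 101, 110, 100, 101]   # stock A drifts away at t=4
b = [100, 100, 101, 101, 101, 100, 100]
print(zscore_signal(a, b))
```

Real implementations would use a rolling window, a cointegration test to validate the pair, and exit rules; this only shows the core signal logic.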
DateTime.AddMonths() Method in C#
The DateTime.AddMonths() method in C# is used to add the specified number of months to the value of this instance. Following is the syntax −

public DateTime AddMonths (int val);

Above, val is the number of months. If you want to subtract months, then set it as negative.

Let us now see an example to implement the DateTime.AddMonths() method −

using System;
public class Demo {
   public static void Main() {
      DateTime d1 = new DateTime(2019, 05, 15, 5, 15, 25);
      DateTime d2 = d1.AddMonths(5);
      System.Console.WriteLine("Initial DateTime = {0:dd} {0:y}, {0:hh}:{0:mm}:{0:ss} ", d1);
      System.Console.WriteLine("\nNew DateTime (After adding months) = {0:dd} {0:y}, {0:hh}:{0:mm}:{0:ss} ", d2);
   }
}

This will produce the following output −

Initial DateTime = 15 May 2019, 05:15:25
New DateTime (After adding months) = 15 October 2019, 05:15:25

Let us now see another example to implement the DateTime.AddMonths() method −

using System;
public class Demo {
   public static void Main() {
      DateTime d1 = new DateTime(2019, 08, 20, 3, 30, 50);
      DateTime d2 = d1.AddMonths(-2);
      System.Console.WriteLine("Initial DateTime = {0:dd} {0:y}, {0:hh}:{0:mm}:{0:ss} ", d1);
      System.Console.WriteLine("\nNew DateTime (After subtracting months) = {0:dd} {0:y}, {0:hh}:{0:mm}:{0:ss} ", d2);
   }
}

This will produce the following output −

Initial DateTime = 20 August 2019, 03:30:50
New DateTime (After subtracting months) = 20 June 2019, 03:30:50
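For readers outside .NET: AddMonths clamps the day when the target month is shorter (e.g. 31 January plus one month lands on the last day of February). Since Python's standard library has no month arithmetic of its own, the behaviour can be sketched like this; the function name is my own:

```python
import calendar
from datetime import datetime

def add_months(dt, months):
    """Add (or, with a negative count, subtract) whole months,
    clamping the day to the target month's length, similar to
    .NET's DateTime.AddMonths."""
    total = dt.month - 1 + months
    year = dt.year + total // 12
    month = total % 12 + 1
    day = min(dt.day, calendar.monthrange(year, month)[1])
    return dt.replace(year=year, month=month, day=day)

print(add_months(datetime(2019, 5, 15, 5, 15, 25), 5))   # 2019-10-15 05:15:25
print(add_months(datetime(2019, 8, 20, 3, 30, 50), -2))  # 2019-06-20 03:30:50
print(add_months(datetime(2020, 1, 31), 1))              # clamped to 2020-02-29
```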
Check if a string contains a sub-string in C++
Here we will see how the string library functions can be used to match strings in C++. Here we are using the find() operation to get the occurrences of the substring in the main string. This find() method returns the first location where the string is found; we call find() repeatedly to get all of the matches. If the item is found, this function returns the position, but if it is not found, it will return string::npos. So for checking whether the substring is present in the main string, we have to check whether the return value of find() is string::npos or not. Here we simply print each position where the substring is present.

Input: The main string "aabbabababbbaabb" and substring "abb"
Output: The locations where the substrings are found. [1, 8, 13]

Input − The main string and the substring to check
Output − The positions of the substring in the main string

pos := 0
while index := first occurrence of sub_str in str in range pos to end of the string, do
   print the index as there is a match
   pos := index + 1
done

#include <bits/stdc++.h>
using namespace std;
int main() {
   string str1 = "aabbabababbbaabb";
   string str2 = "abb";
   int pos = 0;
   size_t index;
   while ((index = str1.find(str2, pos)) != string::npos) {
      cout << "Match found at position: " << index << endl;
      pos = index + 1; // new position is from next element of index
   }
}

Match found at position: 1
Match found at position: 8
Match found at position: 13
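The same scanning loop can be written in Python for comparison; str.find() returns -1 instead of string::npos. This sketch (with a function name of my own) mirrors the C++ logic, including the pos = index + 1 step that allows overlapping matches:

```python
def find_all(text, sub):
    """Collect every (possibly overlapping) start index of `sub` in `text`,
    mirroring the C++ find() loop above."""
    positions = []
    pos = 0
    while True:
        index = text.find(sub, pos)
        if index == -1:        # Python's find() returns -1 instead of string::npos
            break
        positions.append(index)
        pos = index + 1        # restart one past the last match to allow overlaps
    return positions

print(find_all("aabbabababbbaabb", "abb"))  # [1, 8, 13]
```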
TIQ Part 3 — Ultimate Guide to Date dimension creation | by Nikola Ilic | Towards Data Science
TIQ stands for Time Intelligence Quotient. As "regular" intelligence is usually measured as IQ, and Time Intelligence is one of the most important topics in data modeling, I decided to start a blog series which will introduce some basic concepts, pros and cons of specific solutions, and potential pitfalls to avoid, all in order to increase your overall TIQ and make your models more robust, scalable, and flexible in terms of time analysis.

After explaining why using the Auto Date/Time feature in Power BI is a legitimate but not desirable solution for handling dates in your data model in the first part of this series, and emphasizing the importance of the Date dimension within the Star schema in the second part, in this part you will find various solutions for creating a proper Date dimension in your data model.

In most cases, if you are connecting to a relational data source, such as SQL Server, Oracle, or MySQL, there is a big possibility that the data model which resides in the data warehouse already contains a Date dimension. In this scenario, you simply import the existing Date dimension into the Power BI data model and you're good to go. This approach brings the benefit of securing a single source of truth regarding time handling on the organizational level.

Let's say I'm connecting to a Contoso SQL Server database, which holds data about sales for an imaginary company called Contoso. Once I select Get Data within Power BI and connect to the Contoso database, the only thing I need to do is to select the DimDate table (among others that I need) and I have a proper Date dimension within my Power BI data model! That's the easiest and most frequently used way of handling time within your Power BI data model.

In case that, for any reason, your data warehouse doesn't have a date dimension (honestly, you have more chances of getting hit by a truck than of finding a data warehouse without a date dimension), you can create one on your own.
There are multiple ready-made solutions on the web, such as this one by Aaron Bertrand, so in case you need it for any reason, a calendar table using SQL can be created in a few minutes.

If you don't have an existing date dimension to import into your data model, you can quickly and easily create a brand new date dimension using Power Query and its M language. As I've already mentioned in one of the previous articles related to various Power Query tips, there are dozens of out-of-the-box solutions on the web. I've chosen this one from Reza Rad. Here are the step-by-step instructions on how to create a Date dimension using M language.

Open a new Power BI file and choose Blank query under Get data.

If you want to create a highly flexible and customized Date dimension, you should take advantage of parameters. Therefore, under Manage Parameters, select New Parameter and set it as follows. After you have defined the start year of your Date dimension, apply the same steps for the end year.

As soon as you are done with that, open Advanced Editor and paste the following script:

let
    StartDate = #date(StartYear,1,1),
    EndDate = #date(EndYear,12,31),
    NumberOfDays = Duration.Days( EndDate - StartDate ),
    Dates = List.Dates(StartDate, NumberOfDays+1, #duration(1,0,0,0)),
    #"Converted to Table" = Table.FromList(Dates, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Renamed Columns" = Table.RenameColumns(#"Converted to Table",{{"Column1", "FullDateAlternateKey"}}),
    #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns",{{"FullDateAlternateKey", type date}}),
    #"Inserted Year" = Table.AddColumn(#"Changed Type", "Year", each Date.Year([FullDateAlternateKey]), type number),
    #"Inserted Month" = Table.AddColumn(#"Inserted Year", "Month", each Date.Month([FullDateAlternateKey]), type number),
    #"Inserted Month Name" = Table.AddColumn(#"Inserted Month", "Month Name", each Date.MonthName([FullDateAlternateKey]), type text),
    #"Inserted Quarter" = Table.AddColumn(#"Inserted Month Name", "Quarter", each Date.QuarterOfYear([FullDateAlternateKey]), type number),
    #"Inserted Week of Year" = Table.AddColumn(#"Inserted Quarter", "Week of Year", each Date.WeekOfYear([FullDateAlternateKey]), type number),
    #"Inserted Week of Month" = Table.AddColumn(#"Inserted Week of Year", "Week of Month", each Date.WeekOfMonth([FullDateAlternateKey]), type number),
    #"Inserted Day" = Table.AddColumn(#"Inserted Week of Month", "Day", each Date.Day([FullDateAlternateKey]), type number),
    #"Inserted Day of Week" = Table.AddColumn(#"Inserted Day", "Day of Week", each Date.DayOfWeek([FullDateAlternateKey]), type number),
    #"Inserted Day of Year" = Table.AddColumn(#"Inserted Day of Week", "Day of Year", each Date.DayOfYear([FullDateAlternateKey]), type number),
    #"Inserted Day Name" = Table.AddColumn(#"Inserted Day of Year", "Day Name", each Date.DayOfWeekName([FullDateAlternateKey]), type text)
in
    #"Inserted Day Name"

Once you're done with that, hit Close & Apply and you have a fully functional Date dimension in your data model! Additionally, if you save your file as .pbit (Power BI template file), you can easily change the time period for which you want to generate your dates. As you may have noticed, you are just defining the start and end year of your desired date dimension, and M takes care of handling everything else. How cool is that!

Finally, if you prefer using DAX for your Power BI calculations, you can also use it to create a proper Date dimension. DAX has built-in functions CALENDAR() and CALENDARAUTO(), which will lay the ground for other necessary attributes, such as weeks, working days, etc. Before I point you to the best DAX script for creating the Date dimension, I just want to suggest avoiding the CALENDARAUTO() function in most cases, since it takes the earliest date value from your whole data model and expands until the latest date from your whole data model!
That could work fine in a limited number of cases, but in many real-life scenarios, such as when you import data about customers, there are records where we don't know the correct birth date of the customer. Then someone inserts the default value January 1st 1900, and all of a sudden your date dimension in the Power BI model would start from 1900 in case you opted to use the CALENDARAUTO() function! The key takeaway from here: handle those specific scenarios yourself and don't let Power BI mess around with your data model. Keep in mind that you are the boss of your data model!

As I promised above, if you decide to use DAX for creating the Date dimension, don't look any further and use this solution from DAX gurus Marco Russo and Alberto Ferrari.

Whatever solution you choose for creating the Date dimension (SQL, M or DAX), don't forget the most important thing: mark this table as a date table! This will enable you to use your Date dimension to its full capacity, performing all the powerful DAX time intelligence functions without worrying about the results and performance. In order to be marked as a Date table, a table, or better to say the column which will be marked, must satisfy a few conditions:

Contains unique values in every single row
NULL values are not allowed
Contains contiguous dates (no gaps are allowed)
Must be of Date data type

Let's see how we mark the table as a Date table in a few clicks. Right-click on your Date dimension, hover over the Mark as date table option and click Mark as date table. A dialog box should open and Power BI will ask you to select the column to which you want to apply the marking. Power BI will automatically recognize columns of the proper data type and, once you choose the column, Power BI will perform validation and inform you if the selected column was validated successfully. If everything went fine, you should see a small icon next to the Date column name. Congratulations!
You've just completed all the necessary steps for creating a proper and fully functional Date dimension in your data model. Now that we have laid the ground for manipulating data using various time periods, in the next part of this series I will write about the most important Time Intelligence functions in DAX and how you can use them to handle different real-life scenarios.
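For intuition, the row-per-day table that the M script builds can be sketched in ordinary Python. The column names follow the script; the function name and the subset of attributes are my own simplification:

```python
from datetime import date, timedelta

def build_date_table(start_year, end_year):
    """Generate one row per calendar day between Jan 1 of start_year and
    Dec 31 of end_year, with a few derived attributes like the M script."""
    rows = []
    d = date(start_year, 1, 1)
    end = date(end_year, 12, 31)
    while d <= end:
        rows.append({
            "FullDateAlternateKey": d,
            "Year": d.year,
            "Month": d.month,
            "Month Name": d.strftime("%B"),
            "Quarter": (d.month - 1) // 3 + 1,
            "Day": d.day,
            "Day Name": d.strftime("%A"),
        })
        d += timedelta(days=1)
    return rows

table = build_date_table(2020, 2021)
print(len(table))              # 731 days (2020 is a leap year)
print(table[0]["Day Name"])    # 2020-01-01 was a Wednesday
```

This also makes the marking conditions above concrete: every row's key is unique, never NULL, contiguous, and of date type by construction.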
Entity Framework - Views
A view is an object that contains data obtained by a predefined query. A view is a virtual object or table whose result set is derived from a query. It is very similar to a real table because it contains columns and rows of data. Following are some typical uses of views −

Filter data of underlying tables
Filter data for security purposes
Centralize data distributed across several servers
Create a reusable set of data

Views can be used in a similar way as tables. To use a view as an entity, first you will need to add the database view to the EDM. After adding views to your model, you can work with them the same way as normal entities, except for Create, Update, and Delete operations. Let's take a look at how to add views into the model from the database.

Step 1 − Create a new Console Application project.
Step 2 − Right-click on the project in Solution Explorer and select Add → New Item.
Step 3 − Select ADO.NET Entity Data Model from the middle pane and enter the name ViewModel in the Name field.
Step 4 − Click the Add button, which will launch the Entity Data Model Wizard dialog.
Step 5 − Select EF Designer from database and click the Next button.
Step 6 − Select the existing database and click Next.
Step 7 − Choose Entity Framework 6.x and click Next.
Step 8 − Select tables and views from your database and click Finish.

You can see in the designer window that a view is created, and you can use it in the program as an entity. In Solution Explorer, you can see that the MyView class is also generated from the database. Let's take an example in which all data is retrieved from the view.
Following is the code −

class Program {
   static void Main(string[] args) {
      using (var db = new UniContextEntities()) {
         var query = from b in db.MyViews
            orderby b.FirstMidName
            select b;

         Console.WriteLine("All student in the database:");

         foreach (var item in query) {
            Console.WriteLine(item.FirstMidName + " " + item.LastName);
         }

         Console.WriteLine("Press any key to exit...");
         Console.ReadKey();
      }
   }
}

When the above code is executed, you will receive the following output −

All student in the database:
Ali Khan
Arturo finand
Bill Gates
Carson Alexander
Gytis Barzdukas
Laura Norman
Meredith Alonso
Nino Olivetto
Peggy Justice
Yan Li
Press any key to exit...

We recommend you to execute the above example in a step-by-step manner for better understanding.
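For readers unfamiliar with database views, the underlying idea (a stored query you can read like a table) can be sketched with Python's built-in sqlite3. The table name and rows below are invented for illustration; only the MyView name echoes the example above:

```python
import sqlite3

# A database view is a stored query that can be queried like a table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Student (FirstMidName TEXT, LastName TEXT)")
con.executemany(
    "INSERT INTO Student VALUES (?, ?)",
    [("Yan", "Li"), ("Ali", "Khan"), ("Bill", "Gates")],
)

# The view plays the role of MyView in the entity model above
con.execute("CREATE VIEW MyView AS SELECT FirstMidName, LastName FROM Student")

# Query the view exactly as if it were a table
rows = con.execute("SELECT * FROM MyView ORDER BY FirstMidName").fetchall()
print(rows)  # [('Ali', 'Khan'), ('Bill', 'Gates'), ('Yan', 'Li')]
```

Entity Framework's EDM wraps such a view in a generated entity class, which is why reads work but Create/Update/Delete operations generally do not.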
Java - DataOutputStream
The DataOutputStream stream lets you write the primitives to an output source. Following is the constructor to create a DataOutputStream.

DataOutputStream out = new DataOutputStream(OutputStream out);

Once you have a DataOutputStream object in hand, there is a list of helper methods which can be used to write to the stream or to do other operations on the stream.

public final void write(byte[] w, int off, int len) throws IOException
Writes len bytes from the specified byte array, starting at offset off, to the underlying stream.

public final void write(byte[] b) throws IOException
Writes all the bytes of the specified byte array to the underlying stream.

public final void writeBoolean(boolean v) throws IOException, public final void writeByte(int v) throws IOException, public final void writeShort(int v) throws IOException, public final void writeInt(int v) throws IOException
These methods write the specific primitive type data into the output stream as bytes.

public void flush() throws IOException
Flushes the data output stream.

public final void writeBytes(String s) throws IOException
Writes out the string to the underlying output stream as a sequence of bytes. Each character in the string is written out, in sequence, by discarding its high eight bits.

Following is an example to demonstrate DataInputStream and DataOutputStream. It writes a string to a file encoded as modified UTF-8 and then reads it back from the same file.
import java.io.*;

public class DataInput_Stream {
   public static void main(String args[]) throws IOException {

      // writing string to a file encoded as modified UTF-8
      DataOutputStream dataOut = new DataOutputStream(new FileOutputStream("E:\\file.txt"));
      dataOut.writeUTF("hello");
      dataOut.close();

      // Reading data from the same file
      DataInputStream dataIn = new DataInputStream(new FileInputStream("E:\\file.txt"));

      while(dataIn.available() > 0) {
         String k = dataIn.readUTF();
         System.out.print(k + " ");
      }
      dataIn.close();
   }
}

Here is the sample run of the above program −

hello
Wildcard Matching in Python
Suppose we have an input string s and another input string p. Here s is the main string and p is the pattern. We have to define one method that can match the pattern in the string. So we have to implement this for a regular expression that supports wildcard characters like '?' and '*'.

'?' matches any single character
'*' matches zero or more characters

So for example, if the input is like s = "aa" and p = "a?", then it will be true; for the same input string, if the pattern is "?*", then it will also be true.

To solve this, we will follow these steps −

ss := size of s and ps := size of p
make dp a matrix of size ss x ps, and fill this using false value
Update p and s by adding one blank space before these
For i in range 1 to ps −
   if p[i] = star, then dp[0, i] := dp[0, i - 1]
for i in range 1 to ss
   for j in range 1 to ps
      if s[i] is p[j], or p[j] is '?', then dp[i, j] := dp[i – 1, j – 1]
      otherwise when p[j] is star, then dp[i, j] := max of dp[i – 1, j] and dp[i, j – 1]
return dp[ss, ps]

Let us see the
following implementation to get better understanding −

class Solution(object):
   def isMatch(self, s, p):
      sl = len(s)
      pl = len(p)
      dp = [[False for i in range(pl+1)] for j in range(sl+1)]
      s = " "+s
      p = " "+p
      dp[0][0] = True
      for i in range(1, pl+1):
         if p[i] == '*':
            dp[0][i] = dp[0][i-1]
      for i in range(1, sl+1):
         for j in range(1, pl+1):
            if s[i] == p[j] or p[j] == '?':
               dp[i][j] = dp[i-1][j-1]
            elif p[j] == '*':
               dp[i][j] = max(dp[i-1][j], dp[i][j-1])
      return dp[sl][pl]

ob = Solution()
print(ob.isMatch("aa", "a?"))
print(ob.isMatch("aaaaaa", "a*"))

True
True
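As a side note not in the original article, the same wildcard matcher can also be written recursively with memoization instead of an explicit DP table; `is_match` below is a hypothetical helper name used only for this sketch.

```python
from functools import lru_cache

def is_match(s, p):
    """Wildcard matching: '?' matches any single character,
    '*' matches zero or more characters."""
    @lru_cache(maxsize=None)
    def match(i, j):
        # Pattern consumed: succeed only if the string is also consumed
        if j == len(p):
            return i == len(s)
        if p[j] == '*':
            # '*' matches empty (skip it) or one more character of s
            return match(i, j + 1) or (i < len(s) and match(i + 1, j))
        # '?' or an exact character must consume one character of s
        return i < len(s) and (p[j] == '?' or p[j] == s[i]) and match(i + 1, j + 1)

    return match(0, 0)

print(is_match("aa", "a?"))      # True
print(is_match("aaaaaa", "a*"))  # True
print(is_match("aa", "b*"))      # False
```

The memoized recursion explores the same (i, j) states as the DP table, so it gives the same results with the same asymptotic cost.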
How to hide a div in JavaScript on button click?
Let’s say the following is our div −

<div id="showOrHide">
Welcome in JavaScript
</div>

Following is our button. On clicking, the above div should hide −

<button onclick="showOrHideDiv()">Click The Button</button>

Use the style.display concept in JavaScript to hide the div. Following is the code −

<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="UTF-8">
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   <title>Document</title>
   <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
   <script src="https://code.jquery.com/jquery-1.12.4.js"></script>
   <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
</head>
<body>
<button onclick="showOrHideDiv()">Click The Button</button>
<div id="showOrHide">
Welcome in JavaScript
</div>
<script>
   function showOrHideDiv() {
      var v = document.getElementById("showOrHide");
      if (v.style.display === "none") {
         v.style.display = "block";
      } else {
         v.style.display = "none";
      }
   }
</script>
</body>
</html>

To run the above program, save the file name “anyName.html(index.html)” and right click on the file. Select the option “Open with Live Server” in VS Code editor. This will produce the following output −

When the user clicks on the button, the div hides. The screenshot is as follows −
Deleting points from Convex Hull - GeeksforGeeks
22 Apr, 2022

Given a fixed set of points, we need to find the convex hull of the set. We also need to find the convex hull when a point is removed from the set.

Example:

Initial set of points: (-2, 8) (-1, 2) (0, 1) (1, 0) (-3, 0) (-1, -9) (2, -6) (3, 0) (5, 3) (2, 5)
Initial convex hull: (-2, 8) (-3, 0) (-1, -9) (2, -6) (5, 3)
Point to remove from the set: (-2, 8)
Final convex hull: (2, 5) (-3, 0) (-1, -9) (2, -6) (5, 3)

Prerequisite: Convex Hull (Simple Divide and Conquer Algorithm)

The algorithm for solving the above problem is very easy. We simply check whether the point to be removed is a part of the convex hull. If it is, then we have to remove that point from the initial set and then make the convex hull again (refer Convex hull (divide and conquer)). If not, then we already have the solution (the convex hull will not change).

C++

// C++ program to demonstrate delete operation
// on Convex Hull.
#include <bits/stdc++.h>
using namespace std;

// stores the center of polygon (It is made
// global because it is used in compare function)
pair<int, int> mid;

// determines the quadrant of a point
// (used in compare())
int quad(pair<int, int> p)
{
    if (p.first >= 0 && p.second >= 0)
        return 1;
    if (p.first <= 0 && p.second >= 0)
        return 2;
    if (p.first <= 0 && p.second <= 0)
        return 3;
    return 4;
}

// Checks whether the line is crossing the polygon
int orientation(pair<int, int> a, pair<int, int> b,
                pair<int, int> c)
{
    int res = (b.second-a.second)*(c.first-b.first) -
              (c.second-b.second)*(b.first-a.first);
    if (res == 0)
        return 0;
    if (res > 0)
        return 1;
    return -1;
}

// compare function for sorting
bool compare(pair<int, int> p1, pair<int, int> q1)
{
    pair<int, int> p = make_pair(p1.first - mid.first,
                                 p1.second - mid.second);
    pair<int, int> q = make_pair(q1.first - mid.first,
                                 q1.second - mid.second);
    int one = quad(p);
    int two = quad(q);
    if (one != two)
        return (one < two);
    return (p.second*q.first < q.second*p.first);
}

// Finds upper tangent of two polygons 'a' and 'b'
// represented as two
vectors.
vector<pair<int, int> > merger(vector<pair<int, int> > a,
                               vector<pair<int, int> > b)
{
    // n1 -> number of points in polygon a
    // n2 -> number of points in polygon b
    int n1 = a.size(), n2 = b.size();

    int ia = 0, ib = 0;

    // ia -> rightmost point of a
    for (int i=1; i<n1; i++)
        if (a[i].first > a[ia].first)
            ia = i;

    // ib -> leftmost point of b
    for (int i=1; i<n2; i++)
        if (b[i].first < b[ib].first)
            ib = i;

    // finding the upper tangent
    int inda = ia, indb = ib;
    bool done = 0;
    while (!done)
    {
        done = 1;
        while (orientation(b[indb], a[inda], a[(inda+1)%n1]) >= 0)
            inda = (inda + 1) % n1;

        while (orientation(a[inda], b[indb], b[(n2+indb-1)%n2]) <= 0)
        {
            indb = (n2+indb-1)%n2;
            done = 0;
        }
    }

    int uppera = inda, upperb = indb;
    inda = ia, indb = ib;
    done = 0;

    // finding the lower tangent
    while (!done)
    {
        done = 1;
        while (orientation(a[inda], b[indb], b[(indb+1)%n2]) >= 0)
            indb = (indb+1)%n2;

        while (orientation(b[indb], a[inda], a[(n1+inda-1)%n1]) <= 0)
        {
            inda = (n1+inda-1)%n1;
            done = 0;
        }
    }

    int lowera = inda, lowerb = indb;

    // ret contains the convex hull after merging the two convex hulls
    // with the points sorted in anti-clockwise order
    vector<pair<int, int> > ret;

    int ind = uppera;
    ret.push_back(a[uppera]);
    while (ind != lowera)
    {
        ind = (ind+1)%n1;
        ret.push_back(a[ind]);
    }

    ind = lowerb;
    ret.push_back(b[lowerb]);
    while (ind != upperb)
    {
        ind = (ind+1)%n2;
        ret.push_back(b[ind]);
    }
    return ret;
}

// Brute force algorithm to find convex hull for a set
// of less than 6 points
vector<pair<int, int> > bruteHull(vector<pair<int, int> > a)
{
    // Take any pair of points from the set and check
    // whether it is the edge of the convex hull or not.
    // if all the remaining points are on the same side
    // of the line then the line is the edge of convex
    // hull otherwise not
    set<pair<int, int> > s;

    for (int i=0; i<a.size(); i++)
    {
        for (int j=i+1; j<a.size(); j++)
        {
            int x1 = a[i].first, x2 = a[j].first;
            int y1 = a[i].second, y2 = a[j].second;

            int a1 = y1-y2;
            int b1 = x2-x1;
            int c1 = x1*y2-y1*x2;
            int pos = 0, neg = 0;
            for (int k=0; k<a.size(); k++)
            {
                if (a1*a[k].first+b1*a[k].second+c1 <= 0)
                    neg++;
                if (a1*a[k].first+b1*a[k].second+c1 >= 0)
                    pos++;
            }
            if (pos == a.size() || neg == a.size())
            {
                s.insert(a[i]);
                s.insert(a[j]);
            }
        }
    }

    vector<pair<int, int> > ret;
    for (auto e : s)
        ret.push_back(e);

    // Sorting the points in the anti-clockwise order
    mid = {0, 0};
    int n = ret.size();
    for (int i=0; i<n; i++)
    {
        mid.first += ret[i].first;
        mid.second += ret[i].second;
        ret[i].first *= n;
        ret[i].second *= n;
    }
    sort(ret.begin(), ret.end(), compare);
    for (int i=0; i<n; i++)
        ret[i] = make_pair(ret[i].first/n, ret[i].second/n);

    return ret;
}

// Returns the convex hull for the given set of points
vector<pair<int, int> > findHull(vector<pair<int, int> > a)
{
    // If the number of points is less than 6 then the
    // function uses the brute algorithm to find the
    // convex hull
    if (a.size() <= 5)
        return bruteHull(a);

    // left contains the left half points
    // right contains the right half points
    vector<pair<int, int> > left, right;
    for (int i=0; i<a.size()/2; i++)
        left.push_back(a[i]);
    for (int i=a.size()/2; i<a.size(); i++)
        right.push_back(a[i]);

    // convex hull for the left and right sets
    vector<pair<int, int> > left_hull = findHull(left);
    vector<pair<int, int> > right_hull = findHull(right);

    // merging the convex hulls
    return merger(left_hull, right_hull);
}

// Returns the convex hull for the given set of points after
// removing a point p.
vector<pair<int, int> > removePoint(vector<pair<int, int> > a,
                                    vector<pair<int, int> > hull,
                                    pair<int, int> p)
{
    // checking whether the point is a part of the
    // convex hull or not.
    bool found = 0;
    for (int i=0; i < hull.size() && !found; i++)
        if (hull[i].first == p.first &&
            hull[i].second == p.second)
            found = 1;

    // If point is not part of convex hull
    if (found == 0)
        return hull;

    // if it is the part of the convex hull then
    // we remove the point and again make the convex hull
    // and if not, we print the same convex hull.
    for (int i=0; i<a.size(); i++)
    {
        if (a[i].first==p.first && a[i].second==p.second)
        {
            a.erase(a.begin()+i);
            break;
        }
    }

    sort(a.begin(), a.end());
    return findHull(a);
}

// Driver code
int main()
{
    vector<pair<int, int> > a;
    a.push_back(make_pair(0, 0));
    a.push_back(make_pair(1, -4));
    a.push_back(make_pair(-1, -5));
    a.push_back(make_pair(-5, -3));
    a.push_back(make_pair(-3, -1));
    a.push_back(make_pair(-1, -3));
    a.push_back(make_pair(-2, -2));
    a.push_back(make_pair(-1, -1));
    a.push_back(make_pair(-2, -1));
    a.push_back(make_pair(-1, 1));

    int n = a.size();

    // sorting the set of points according
    // to the x-coordinate
    sort(a.begin(), a.end());
    vector<pair<int, int> > hull = findHull(a);

    cout << "Convex hull:\n";
    for (auto e : hull)
        cout << e.first << " " << e.second << endl;

    pair<int, int> p = make_pair(-5, -3);

    // removePoint() returns the new hull, so its result
    // must be assigned back (the original code discarded it)
    hull = removePoint(a, hull, p);

    cout << "\nModified Convex Hull:\n";
    for (auto e : hull)
        cout << e.first << " " << e.second << endl;

    return 0;
}

Output:

convex hull:
-3 0
-1 -9
2 -6
5 3
2 5

Time Complexity: It is simple to see that the maximum time taken per query is the time taken to construct the convex hull, which is O(n*logn). So, the overall complexity is O(q*n*logn), where q is the number of points to be deleted.

This article is contributed by Amritya Vagmi.
10 Must-Know Python Topics for Data Science | by Soner Yıldırım | Towards Data Science
Python is dominating the data science ecosystem. I think the top two reasons for such dominance are that it is relatively easy to learn and that it offers a rich selection of data science libraries.

Python is a general purpose language, so it is not just for data science. Web development, mobile applications, and game development are some other use cases for Python. If you are using Python only for data science related tasks, you do not have to be a Python expert. However, there are some core concepts and features that I think you must have in your possession.

What we cover in this article is not library-specific. They can be considered as base Python for data science. Even if you are only using Pandas, Matplotlib, and Scikit-learn, you need to have a comprehensive understanding of Python basics. Such libraries assume you are familiar with Python basics.

I will briefly explain each topic with a few examples and also provide a link to a detailed article for most of the topics.

Functions are building blocks in Python. They take zero or more arguments and return a value. We create a function using the def keyword. Here is a simple function that multiplies two numbers.

def multiply(a, b):
   return a * b

multiply(5, 4)
20

Here is another example that evaluates a word based on its length.

def is_long(word):
   if len(word) > 8:
      return f"{word} is a long word."

is_long("artificial")
'artificial is a long word.'

Functions should accomplish a single task. Creating a function that performs a series of tasks defies the purpose of using functions. We should also assign descriptive names to functions so we have an idea of what a function does without seeing its code.

When we define a function, we specify its parameters. When a function is called, it must be provided with the values for the required parameters. The values passed for parameters are also known as arguments.

Consider the multiply function created in the previous step. It has two parameters, so we provide values for these parameters when the function is called.
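To make the parameter/argument distinction concrete, here is a small sketch (the multiply function is the same two-parameter function defined above):

```python
def multiply(a, b):
    return a * b

# Positional arguments: matched to parameters by their order
print(multiply(5, 4))      # 20

# Keyword arguments: matched by parameter name, so order does not matter
print(multiply(b=4, a=5))  # 20
```

Both calls provide values for the same two parameters; only the way the values are matched to the parameters differs.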
Positional arguments are declared by a name only. Keyword arguments are declared by a name and a default value. When a function is called, values for positional arguments must be given. Otherwise, we will get an error. If we do not specify the value for a keyword argument, it takes the default value.

Let's redefine the multiply function with keyword arguments so we can see the difference.

def multiply(a=1, b=1):
   return a * b

print(multiply(5, 4))
20
print(multiply())
1

Functions are building blocks in Python. They take zero or more arguments and return a value. Python is pretty flexible in terms of how arguments are passed to a function. The *args and **kwargs make it easier and cleaner to handle arguments.

*args allows a function to take any number of positional arguments. Here is a simple example:

def addition(*args):
   result = 0
   for i in args:
      result += i
   return result

print(addition(1, 4))
5
print(addition(1, 7, 3))
11

**kwargs allows a function to take any number of keyword arguments. By default, **kwargs is an empty dictionary. Each undefined keyword argument is stored as a key-value pair in the **kwargs dictionary. Here is a simple example:

def arg_printer(a, b, option=True, **kwargs):
   print(a, b)
   print(option)
   print(kwargs)

arg_printer(3, 4, param1=5, param2=6)
3 4
True
{'param1': 5, 'param2': 6}

Object oriented programming (OOP) paradigm is built around the idea of having objects that belong to a particular type. In a sense, the type is what explains the object to us. Everything in Python is an object of a type, such as integers, lists, dictionaries, functions, and so on. We define a type of object using classes.

Classes possess the following information:

Data attributes: What is needed to create an instance of a class
Methods (i.e. procedural attributes): How we interact with instances of a class

List is a built-in data structure in Python. It is represented as a collection of data points in square brackets.
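The two ingredients of a class described above (data attributes plus methods) can be sketched with a minimal, hypothetical Employee class, which is not from the original article:

```python
class Employee:
    # Data attributes: what is needed to create an instance
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

    # Method: how we interact with instances of the class
    def give_raise(self, amount):
        self.salary += amount
        return self.salary

e = Employee("John", 50000)
print(e.give_raise(5000))  # 55000
```

Creating `Employee("John", 50000)` produces an instance with its own data attributes, and `give_raise` is the interface through which we modify that instance.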
Lists can be used to store any data type or a mixture of different data types. Lists are mutable, which is one of the reasons why they are so commonly used. Thus, we can remove and add items. It is also possible to update the items of a list.

Here are a few examples on how to create and modify a list.

words = ['data','science'] #create a list
print(words[0]) #access an item
'data'
words.append('machine') #add an item
print(len(words)) #length of list
3
print(words)
['data', 'science', 'machine']

List comprehension is basically creating lists based on other iterables such as lists, tuples, sets, and so on. It can also be described as representing for and if loops with a simpler and more appealing syntax. List comprehensions are relatively faster than for loops.

Here is a simple list comprehension that creates a list from another list based on a given condition.

a = [4,6,7,3,2]
b = [x for x in a if x > 5]
b
[6, 7]

The following list comprehension applies a function to the items in another list.

words = ['data','science','machine','learning']
b = [len(word) for word in words]
b
[4, 7, 7, 8]

Dictionary is an unordered collection of key-value pairs. Each entry has a key and value. A dictionary can be considered as a list with a special index. The keys must be unique and immutable. So we can use strings, numbers (int or float), or tuples as keys. Values can be of any type.

Consider a case where we need to store grades of students. We can either store them in a dictionary or a list. One way to create a dictionary is writing key-value pairs in curly braces.

grades = {'John':'A', 'Emily':'A+', 'Betty':'B', 'Mike':'C', 'Ashley':'A'}

We can access a value in a dictionary using its key.

grades['John']
'A'
grades.get('Betty')
'B'

A set is an unordered collection of distinct hashable objects. This is the definition of a set in the official Python documentation. Let's open it up.
Unordered collection: It contains zero or more elements. There is no order associated with the elements of a set. Thus, it does not support indexing or slicing like we do with lists.

Distinct hashable objects: A set contains unique elements. The hashable means immutable. Although sets are mutable, the elements of sets must be immutable.

We can create a set by putting objects separated by a comma in curly braces.

a = {1, 4, 'foo'}
print(type(a))
<class 'set'>

Sets do not contain repeated elements, so even if we try to add the same elements more than once, the resulting set will contain unique elements.

a = {1, 4, 'foo', 4, 'foo'}
print(a)
{1, 4, 'foo'}

Tuple is a collection of values separated by commas and enclosed in parentheses. Unlike lists, tuples are immutable. The immutability can be considered as the identifying feature of tuples.

Tuples consist of values in parentheses, separated by commas.

a = (3, 4)
print(type(a))
<class 'tuple'>

We can also create tuples without using the parentheses. A sequence of values separated by commas will create a tuple.

a = 3, 4, 5, 6
print(type(a))
<class 'tuple'>

One of the most common use cases of tuples is with functions that return multiple values.

import numpy as np

def count_sum(arr):
   count = len(arr)
   sum = arr.sum()
   return count, sum

arr = np.random.randint(10, size=8)
a = count_sum(arr)
print(a)
(8, 39)
print(type(a))
<class 'tuple'>

Lambda expressions are special forms of functions. In general, lambda expressions are used without a name. Consider the following function that returns the square of a given number.

def square(x):
   return x**2

The equivalent lambda expression is:

lambda x: x ** 2

Consider an operation that needs to be done once or very few times. Furthermore, we have many variations of this operation which are slightly different than the original one. In such a case, it is not ideal to define a separate function for each operation.
Instead, lambda expressions provide a much more efficient way of accomplishing such tasks.

We have covered some key concepts and topics of Python. Most data science related tasks are done through third-party libraries and frameworks such as Pandas, Matplotlib, Scikit-learn, TensorFlow, and so on. However, we should have a comprehensive understanding of the basic operations and concepts of Python in order to use such libraries efficiently. They assume you are familiar with the basics of Python.

Thank you for reading. Please let me know if you have any feedback.
Map function and Lambda expression in Python to replace characters - GeeksforGeeks
20 Oct, 2018

Given a string S and characters c1 and c2, replace character c1 with c2 and c2 with c1. Examples:

Input : str = 'grrksfoegrrks'
        c1 = e, c2 = r
Output : geeksforgeeks

Input : str = 'ratul'
        c1 = t, c2 = h
Output : rahul

We have an existing solution for this problem in C++; please refer to the Replace a character c1 with c2 and c2 with c1 in a string S link. We will solve this problem quickly in Python using a lambda expression and the map() function. We will create a lambda expression where character c1 in the string will be replaced by c2, c2 will be replaced by c1, and the others will remain the same; then we will map this expression onto each character of the string and get the updated string.

# Function to replace a character c1 with c2
# and c2 with c1 in a string S
def replaceChars(input, c1, c2):

    # create lambda to replace c1 with c2, c2
    # with c1 and other will remain same
    # expression will be like "lambda x:
    # x if (x!=c1 and x!=c2) else c1 if (x==c2) else c2"
    # and map it onto each character of string
    newChars = map(lambda x: x if (x != c1 and x != c2) else \
                   c1 if (x == c2) else c2, input)

    # now join each character without space
    # to print resultant string
    print(''.join(newChars))

# Driver program
if __name__ == "__main__":
    input = 'grrksfoegrrks'
    c1 = 'e'
    c2 = 'r'
    replaceChars(input, c1, c2)

Output:

geeksforgeeks
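As an aside not in the original article, the same character swap can also be done with Python's built-in str.maketrans and str.translate, which avoids the explicit lambda and map entirely:

```python
def replace_chars(s, c1, c2):
    # str.maketrans builds a translation table mapping c1 -> c2 and c2 -> c1;
    # str.translate applies it to every character of the string in one pass
    return s.translate(str.maketrans({c1: c2, c2: c1}))

print(replace_chars('grrksfoegrrks', 'e', 'r'))  # geeksforgeeks
print(replace_chars('ratul', 't', 'h'))          # rahul
```

str.translate is implemented in C, so it is generally faster than a per-character map with a lambda for long strings.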
An Analysis of the Fatal Police Shootings in the US | by Soner Yıldırım | Towards Data Science
The Washington Post has released a dataset containing fatal shootings by police in the US between 2015 and 2020. In the midst of racism discussions, this dataset is likely to shed light on the current situation. We will use data analysis tools and techniques to reveal some numbers that summarize the dataset.

This post is aimed to be a practical guide of data analysis and a discussion that approaches police shootings in the US from many perspectives.

The dataset is available in this repo by the Washington Post. I will be using Python data analysis and visualization libraries. Let's start with importing these libraries.

# Data analysis
import numpy as np
import pandas as pd

# Data visualization
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='darkgrid')
%matplotlib inline

We can now read the dataset into a pandas dataframe.

df = pd.read_csv("/content/fatal_police_shootings.csv")
print("Dataset has {}".format(df.shape[0]), "rows and {}".format(df.shape[1]), "columns")

Each row represents a shooting and the columns give information about the details of shootings. Let's see what kind of data we have in these 14 columns.

The names are too personal to use and might be against individual rights. The id column is redundant. Thus, I will drop these columns.

df.drop(['id','name'], axis=1, inplace=True)

Remaining 12 columns are:

We have data about the person being shot, the location of the shooting, and the shooting action itself. The next step is to check missing values and handle them.

df.isna().sum()

"Armed", "age", "gender", "race", and "flee" columns have missing values. A more informative tool on missing values is the missing value matrix provided by the missingno library. It gives an idea about the distribution of missing values in the dataframe.

import missingno as msno
msno.matrix(df)

It seems like missing values in the "gender" and "race" columns overlap to some extent (i.e. they are likely to be in the same rows).
We can check the heatmap of missing values to confirm:

msno.heatmap(df, figsize=(10,6))

The missing values in the "race" and "age" columns are correlated.

The "flee" and "armed" columns describe the action of the person being shot.

df.flee.value_counts()

The action of "not fleeing" dominates the "flee" column. We can fill in the missing values with "Not fleeing". Please note that this is not a strict rule for handling missing values. You can choose a different way to handle them (e.g. drop them).

df.flee.fillna('Not fleeing', inplace=True)

Let's check the "armed" column.

df.armed.value_counts()

"Gun" is the most frequent value, so I will use that to fill in missing values. We can use the index of the series returned by the value_counts function:

df.armed.fillna(df.armed.value_counts().index[0], inplace=True)

I will drop the rows that have missing values in the "race", "age", and "gender" columns because they describe the person being shot, and thus it could be misleading to make assumptions without accurate information.

df.dropna(axis=0, how='any', inplace=True)
print("There are {}".format(df.isna().sum().sum()), "missing values left in the dataframe")

There are 0 missing values left in the dataframe

The dataframe does not have any missing values left.

Data types are important in the process of data analysis as they determine how certain actions and computations are handled. Categorical variables can be represented with the "object" or "category" data type. Age can be represented with "integer" or "float" data types, and true/false type of data is handled with "bool".

All of the data types seem appropriate except for the date column. I will convert it to datetime, which is the pandas data type for handling dates. After converting, I will extract "year" and "month" from the date and create new columns. We can use them to see yearly or monthly shooting rates.
df['date'] = pd.to_datetime(df['date'])
df['year'] = pd.to_datetime(df['date']).dt.year
df['month'] = pd.to_datetime(df['date']).dt.month

Let's see if there is any continuous upward or downward trend in the daily number of shootings from 2015 to 2020. One way is to group the dates and count the number of shootings on each day.

df_date = df[['date','armed']].groupby('date').count().sort_values(by='date')
df_date.rename(columns={'armed':'count'}, inplace=True)
df_date.head()

The "armed" column is chosen randomly, just to count the number of rows for each day. We can now create a time series plot.

plt.figure(figsize=(12,6))
plt.title('Daily Fatal Shootings', fontsize=15)
sns.lineplot(x=df_date.index, y='count', data=df_date)

This does not tell much. It will look better if we plot 10-day averages.

df_date.resample('10D').mean().plot(figsize=(12,6))
plt.title('Fatal Shootings - 10 day average', fontsize=15)

We can observe some peaks, but there is not a continuous trend.

Let's check how the numbers change in different states. I will use sidetable, which is a pandas utility library. It is like a superior version of value_counts.

!pip install sidetable
import sidetable

df.stb.freq(['state'], thresh=50)

684 fatal shootings have occurred in CA, which constitute approximately 14% of all shootings. The top 3 states in terms of the total number of shootings are CA, TX, and FL. This is not strange when the populations of the states are considered.

I think the "age" is an important aspect to be considered. Preventive actions can be designed according to different age groups.

plt.figure(figsize=(12,8))
plt.title('Age Distribution of Deaths', fontsize=15)
sns.distplot(df.age)

Most of the people who were shot are younger than 40. Every life matters equally, but it makes it harder for the family when a young person dies.

Racism is the most severe disease in the history of humankind. It is more dangerous than coronavirus or any other pandemic that people have struggled with.
Unfortunately, there is a difference in the number of fatal shootings with respect to different races. We will first create a new dataframe that contains the number of yearly shootings for each race. The dataset consists of 6 different races.

df_race = df[['race','year','armed']].groupby(['race','year']).count().reset_index()
df_race.rename(columns={'armed':'number_of_deaths'}, inplace=True)
df_race.head()

The number of deaths alone does not tell us much because these races are not proportionately represented in the population. Thus, I will use the number of deaths per 1 million people as a baseline. I will use the 2019 populations, which are available on the US Census website. Although the ratios have changed from 2015 to 2020, it is not a dramatic change like 10–15 percent; I think the ratios remain within a margin of a few percent. However, you can use the exact population for each year to be more accurate.

df_pop = pd.DataFrame({'race':['W','B','A','H','N','O'], 'population':[0.601, 0.134, 0.059, 0.185, 0.013, 0.008]})
df_pop['population'] = df_pop['population']*328
df_pop

The population column represents the population of each race in millions. We can now merge the df_race and df_pop dataframes.

df_race = pd.merge(df_race, df_pop, on='race')
df_race['deaths_per_million'] = df_race['number_of_deaths'] / df_race['population']
df_race.head()

We can create a barplot that shows deaths_per_million for each race by police shootings from 2015 to 2020.

plt.figure(figsize=(12,8))
plt.title("Fatal Shootings by Police", fontsize=15)
sns.barplot(x='year', y='deaths_per_million', hue='race', data=df_race)

The ratio for black (B) people is clearly higher than for other races. The native (N) and other (O) groups have very low populations, so a more logical comparison is among the black (B), white (W), hispanic (H), and asian (A) races. The overall ratio of deaths_per_million for black (B) people is double the ratio for hispanic (H) people.
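The merge-and-normalize step can be checked with toy numbers (the death counts below are made up for illustration; the population shares are the ones quoted in the article):

```python
import pandas as pd

# Hypothetical yearly death counts per race (illustrative only)
df_race = pd.DataFrame({"race": ["W", "B"], "number_of_deaths": [100, 50]})

# Population shares from the article, scaled to millions of people
# using the 328-million US population
df_pop = pd.DataFrame({"race": ["W", "B"], "population": [0.601, 0.134]})
df_pop["population"] = df_pop["population"] * 328

# Merge on race, then normalize counts to deaths per million people
df_race = pd.merge(df_race, df_pop, on="race")
df_race["deaths_per_million"] = df_race["number_of_deaths"] / df_race["population"]

print(df_race.round(3))
```

Even with fewer raw deaths in this toy example, the per-million rate for B comes out higher than for W, which is exactly why the article normalizes by population instead of comparing raw counts.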
The difference between black (B) and white (W) people is even greater. Racism is something we should not even be discussing; it should not exist. When a child hears the word “racism”, the response should be “what does it mean?”. Any type of racism, anywhere in the world, needs to disappear. Unfortunately, that is not the case in the world right now. But we can educate our children in a way that the word “racism” stops existing. Thank you for reading. Please let me know if you have feedback.
Python | Pandas Series.str.isdecimal() - GeeksforGeeks
28 Sep, 2018

Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages and makes importing and analyzing data much easier.

Pandas isdecimal() is used to check whether all characters in a string are decimal. This method works in a similar way to the str.isdigit() method, but with a difference: the latter is more expansive with respect to non-ASCII digits. This will be made clear with the help of an example.

Syntax: Series.str.isdecimal()
Return type: Boolean series

Example #1: In this example, a new data frame is created with just one column and some values are passed to it. Then the str.isdecimal() method is called on that column and the output is returned to a new column Bool.

# importing pandas module
import pandas as pd

# creating data frame
data = pd.DataFrame(["hey", "gfg", 3, "4", 5, "5.5"])

# calling method and returning series
data["Bool"] = data[0].str.isdecimal()

# display
data

Output: As shown in the output image, isdecimal() returns True for decimal values in string form. If the element is an int, float, or any data type other than string, NaN is returned (no matter if it is a decimal number).

Example #2: In this example, numbers with powers are also added to the column. Both str.isdigit() and str.isdecimal() are called and the outputs are stored in different columns to compare the difference between the two.

# importing pandas module
import pandas as pd

# creating data frame
data = pd.DataFrame(["hey", "gfg", 3, "42", 5, "5.5", "1292"])

# calling method and returning series
data["Bool"] = data[0].str.isdecimal()

# calling method and returning series
data["Bool2"] = data[0].str.isdigit()

# display
data

Output: As shown in the output image, isdigit() returns True for numbers with powers but isdecimal() returns False for those values.
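The distinction is easiest to see on plain Python strings (the Series.str accessors behave the same way element-wise). Superscript digits are where the two methods diverge:

```python
# str.isdigit is more permissive: it also accepts Unicode digit
# forms such as superscripts, while str.isdecimal only accepts
# characters that can form a base-10 number
samples = ["42", "4\u00b2", "5.5"]   # "4\u00b2" is "4" followed by superscript two

for s in samples:
    print(s, s.isdecimal(), s.isdigit())
# "42"  -> True  True
# "4²"  -> False True   (superscript counts as a digit, not a decimal)
# "5.5" -> False False  (the dot is neither)
```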
Window Functions In Pandas. Running Totals, Period To Date Returns... | by Tony Yiu | Towards Data Science
SQL has a neat feature called window functions. (By the way, you should definitely know how to work with these in SQL if you are looking for a data analyst job.) Pandas, with a little bit of legwork, allows us to do the same things. Let’s see how.

Window functions allow us to perform an operation with a given row’s data and data from another row that is a specified number of rows away — this “number of rows away” value is called the window. For example, let’s say you have 10 days of stock prices. Window functions allow us to perform computations among the values of a specified column. For example, I might want to compare today’s stock price with yesterday’s — then I would want a window of 1 looking backwards. A window function allows us to do that. If, on the other hand, I want to compare today’s price with the price 1 year ago, then I would want a window of 365 (assuming weekends are in your dataset).

Window functions are especially useful for time series data, where at each point in time in your data you are only supposed to know what has happened as of that point (no crystal balls allowed). The good news is that window functions exist in pandas and they are very easy to use.

Let’s say we want to calculate the daily change in price of our stock. To do this we would need to take each day’s price, divide it by the previous day’s price, and subtract 1. We get our data as a list:

stock_list = [100, 98, 95, 96, 99, 102, 103, 105, 105, 108]

Lists are not very math friendly, so we can put our data in a numpy array (I reshape the result to be a 9 by 1 array so that it’s easier to view and display):

In:
stock_array = np.array(stock_list)
(stock_array[1:]/stock_array[:-1] - 1).reshape(-1,1)

Out:
array([[-0.02      ],
       [-0.03061224],
       [ 0.01052632],
       [ 0.03125   ],
       [ 0.03030303],
       [ 0.00980392],
       [ 0.01941748],
       [ 0.        ],
       [ 0.02857143]])

Cool, we got the returns. No need for dataframes, right? Well, not exactly.
Dataframes are more versatile than numpy arrays (which are optimized for dealing with numerical data). And the developers of pandas have created all these nifty window methods to make our lives easier — they would be sad if we didn’t take advantage. So let’s put our stock prices into a dataframe:

stock_df = pd.DataFrame(stock_list, columns=['price'])

We can use the shift method to get the previous day’s price. The shift method is very similar to SQL’s lag function. The “1” tells it to lag things by one day, giving us the previous day’s price:

stock_df['prev_price'] = stock_df.shift(1)

Now stock_df looks like this:

   price  prev_price
0    100         NaN
1     98       100.0
2     95        98.0
3     96        95.0
4     99        96.0
5    102        99.0
6    103       102.0
7    105       103.0
8    105       105.0
9    108       105.0

Cool, now we just need to divide price by prev_price and subtract 1 to get the daily return:

In:
stock_df['daily_return'] = stock_df['price']/stock_df['prev_price'] - 1
print(stock_df)

Out:
   price  prev_price  daily_return
0    100         NaN           NaN
1     98       100.0     -0.020000
2     95        98.0     -0.030612
3     96        95.0      0.010526
4     99        96.0      0.031250
5    102        99.0      0.030303
6    103       102.0      0.009804
7    105       103.0      0.019417
8    105       105.0      0.000000
9    108       105.0      0.028571

We can do a lot more than just calculate returns. For example, say we want to compare our daily return to an expanding-window average return in order to see how each return compares to the historical average. You might think: why not just calculate the average of all the values in the daily_return column and use that? The answer is data leakage. In time series analysis, when we are trying to forecast the future, we need to be really careful about what could and could not have been observed on a specific date. For example, on day 5 of our dataset, we can only observe the first 5 prices: 100, 98, 95, 96, 99. So if we are testing features in order to make a forecast for day 6, we can’t compare day 5’s return of 3.03% with the mean daily change of the entire period, because on day 5 we have not yet observed days 6 through 9.
That’s where an expanding window comes in. In case you are not familiar with expanding and rolling windows: with an expanding window, we calculate metrics in an expanding fashion — meaning that we include all rows up to the current one in the calculation. A rolling window allows us to calculate metrics on a rolling basis — for example, rolling(3) means that we use the current observation as well as the two preceding ones in order to calculate our desired metric.

The rationale behind using an expanding window is that with every day that passes, we get another price and another daily change that we can add to our mean calculation. That’s new information that we should capture in our calculated metrics. We can do this with the following code (I also threw in a 3-day rolling window for fun):

stock_df['expand_mean'] = stock_df['daily_return'].expanding().mean()
stock_df['roll_mean_3'] = stock_df['daily_return'].rolling(3).mean()

Calling .expanding() on a pandas dataframe or series creates a pandas expanding object. It’s a lot like the more well known groupby object (which groups things based on specified column labels). The expanding (or rolling) object is what allows us to calculate various metrics in an expanding fashion. Let’s see what our dataframe looks like now:

   price  prev_price  daily_return  expand_mean  roll_mean_3
0    100         NaN           NaN          NaN          NaN
1     98       100.0     -0.020000    -0.020000          NaN
2     95        98.0     -0.030612    -0.025306          NaN
3     96        95.0      0.010526    -0.013362    -0.013362
4     99        96.0      0.031250    -0.002209     0.003721
5    102        99.0      0.030303     0.004293     0.024026
6    103       102.0      0.009804     0.005212     0.023786
7    105       103.0      0.019417     0.007241     0.019841
8    105       105.0      0.000000     0.006336     0.009740
9    108       105.0      0.028571     0.008807     0.015996

Notice that on day 1, expand_mean and daily_return are equal — that’s necessarily the case because on day 1 we are calculating the expanding mean with only one daily return.
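The day-1 equality noted above, and the related agreement between the first valid rolling(3) value and the expanding mean, can be checked directly. This snippet rebuilds the frame from scratch so it runs on its own:

```python
import pandas as pd

# Rebuild the article's price series and daily returns
stock_df = pd.DataFrame([100, 98, 95, 96, 99, 102, 103, 105, 105, 108],
                        columns=["price"])
stock_df["daily_return"] = stock_df["price"] / stock_df["price"].shift(1) - 1

expand_mean = stock_df["daily_return"].expanding().mean()
roll_mean_3 = stock_df["daily_return"].rolling(3).mean()

# Day 1: the expanding mean of a single return is that return itself
assert expand_mean[1] == stock_df["daily_return"][1]

# Day 3: the first valid rolling(3) value averages the same three
# returns as the expanding mean, so the two must agree
assert abs(roll_mean_3[3] - expand_mean[3]) < 1e-12

print("expanding/rolling relationship holds")
```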
Also, on day 3, when we finally have enough data to calculate our rolling 3-day mean, roll_mean_3’s first value is equal to expand_mean. That makes sense too — on day 3, our expanding mean is also calculated using the most recent 3 days’ returns. Here is the plot (and code) of the stock’s daily returns and the two means that we calculated:

plt.subplots(figsize=(8,6))
plt.plot(stock_df['daily_return'], label='Daily Return')
plt.plot(stock_df['expand_mean'], label='Expanding Mean')
plt.plot(stock_df['roll_mean_3'], label='Rolling Mean')
plt.xlabel('Day')
plt.ylabel('Return')
plt.legend()
plt.show()

We can apply other functions besides the mean as well. Let’s say our boss comes over and says, “I want you to keep track of how many days this stock has been up.” We can do this using an expanding object and the sum method (for keeping a running total). First we need to add a column to our dataframe to denote whether the stock was up that day or not. We can take advantage of the apply method (which applies a function to each row in the dataframe or series). We can either define a function to give apply or use a lambda function — I opted for a lambda function (fewer lines of code) that takes each return and returns 1 if it’s positive and 0 if it’s not:

stock_df['positive'] = stock_df['daily_return'].apply(lambda x: 1 if x > 0 else 0)

Once we have the “positive” column, we can apply an expanding window to it and the sum method (since each positive day is denoted by a 1, we just need to keep a running total of the 1s):

stock_df['num_positive'] = stock_df['positive'].expanding().sum()

And the dataframe that we would send to our boss looks like this:

   price  daily_return  num_positive
0    100           NaN           0.0
1     98     -0.020000           0.0
2     95     -0.030612           0.0
3     96      0.010526           1.0
4     99      0.031250           2.0
5    102      0.030303           3.0
6    103      0.009804           4.0
7    105      0.019417           5.0
8    105      0.000000           5.0
9    108      0.028571           6.0

So as of right now, there are 6 positive days for the stock.
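The running-count-of-up-days trick can be rebuilt and checked end-to-end (the frame is reconstructed here so the snippet is self-contained):

```python
import pandas as pd

# Rebuild the article's price series and daily returns
stock_df = pd.DataFrame([100, 98, 95, 96, 99, 102, 103, 105, 105, 108],
                        columns=["price"])
stock_df["daily_return"] = stock_df["price"] / stock_df["price"].shift(1) - 1

# Flag up-days with 1; down days, flat days, and the NaN first row get 0
stock_df["positive"] = stock_df["daily_return"].apply(lambda x: 1 if x > 0 else 0)

# An expanding sum of the flags is a running count of up-days
stock_df["num_positive"] = stock_df["positive"].expanding().sum()

print(stock_df["num_positive"].iloc[-1])  # 6.0, matching the article
```

The same pattern works for any flag column: build a 0/1 indicator, then take an expanding sum to get a leak-free running total.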
But going forward as we collect more prices, our running total will update with it. Nice, our boss should be happy with this. Thanks for reading! Cheers and stay safe and healthy everyone. If you liked this article and my writing in general, please consider supporting my writing by signing up for Medium via my referral link here. Thanks!
How to create a split screen (50/50) with CSS?
To create a split screen with CSS, the code is as follows − Live Demo <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> <style> body { font-family: Arial; color: white; } h1{ padding:20px; } .left,.right { height: 100%; width: 50%; position: fixed; z-index: 1; top: 0; overflow-x: hidden; padding-top: 20px; } .left { left: 0; background-color: rgb(36, 0, 95); } .right { right: 0; background-color: rgb(56, 1, 44); } .centered { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); text-align: center; } .centered img { width: 150px; border-radius: 50%; } </style> </head> <body> <div class="left"> <h1>Some random text on the left</h1> </div> <div class="right"> <h1>Some random text on the right</h1> </div> </body> </html> The above code will produce the following output −
How to create a Mochawesome report in Cypress?
We can create a Mochawesome report in Cypress. Cypress is bundled with Mocha, so any reports that can be generated for Mocha can also be utilized with Cypress. The Mochawesome report is one of the most important reports in Cypress.

To install mochawesome, run the command −

npm install mochawesome --save-dev

To install mocha, run the command −

npm install mocha --save-dev

To merge mochawesome json reports, run the command −

npm install mochawesome-merge --save-dev

All these packages should get reflected in the package.json file after installation. To merge multiple reports into a single report, run the command −

npm run combine-reports

In the cypress.json file, we can set the following configurations for the mochawesome reports −

overwrite − if its value is set to false, the previously generated reports are not overwritten.

reportDir − the location where reports are to be saved.

quiet − if its value is set to true, there is no Cypress-related output; only the mochawesome output is printed.

html − if its value is set to false, no html report is generated after execution.

json − if its value is set to true, a json file with execution details is generated.
Implementation in cypress.json −

{
   "reporter": "mochawesome",
   "reporterOptions": {
      "reportDir": "cypress/results",
      "overwrite": false,
      "html": false,
      "json": true
   }
}

To generate a report for all specs in the integration folder of the Cypress project, run the command −

npx cypress run

For running a particular test, run the command −

npx cypress run --spec "<path of spec file>"

After execution is completed, the mochawesome-report folder gets generated within the Cypress project, containing reports in html and json formats. Right-click on the mochawesome.html report, select the Copy Path option, and open the copied path in the browser. The mochawesome report opens with details of the execution results, duration, test case names, test steps, and so on. On clicking the icon (highlighted in the above image) in the upper left corner of the screen, more options get displayed. We can get different views to select the passed, failed, pending, and skipped test cases, and the hooks applied to the test.
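The article runs npm run combine-reports without showing its definition. A typical setup (this exact script body is an assumption, not from the article) wires mochawesome-merge into the package.json scripts, merging the per-spec json files into one file −

```json
{
   "scripts": {
      "combine-reports": "mochawesome-merge cypress/results/*.json > cypress/results/combined.json"
   }
}
```

The paths here mirror the reportDir used in the cypress.json above; adjust them if your reports are written elsewhere.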
Android Grid View
Android GridView shows items in a two-dimensional scrolling grid (rows & columns), and the grid items are not necessarily predetermined; they are automatically inserted into the layout using a ListAdapter.

An adapter actually bridges between UI components and the data source that fills data into the UI component. An adapter can be used to supply data to widgets like spinner, list view, grid view, etc. The ListView and GridView are subclasses of AdapterView and they can be populated by binding them to an Adapter, which retrieves data from an external source and creates a View that represents each data entry.

Following are the important attributes specific to GridView −

android:id − This is the ID which uniquely identifies the layout.

android:columnWidth − This specifies the fixed width for each column. This could be in px, dp, sp, in, or mm.

android:gravity − Specifies the gravity within each cell. Possible values are top, bottom, left, right, center, center_vertical, center_horizontal, etc.

android:horizontalSpacing − Defines the default horizontal spacing between columns. This could be in px, dp, sp, in, or mm.

android:numColumns − Defines how many columns to show. May be an integer value, such as "100", or auto_fit, which means display as many columns as possible to fill the available space.

android:stretchMode − Defines how columns should stretch to fill the available empty space, if any. This must be one of the following values −

none − Stretching is disabled.

spacingWidth − The spacing between each column is stretched.

columnWidth − Each column is stretched equally.

spacingWidthUniform − The spacing between each column is uniformly stretched.

android:verticalSpacing − Defines the default vertical spacing between rows. This could be in px, dp, sp, in, or mm.
This example will take you through simple steps to show how to create your own Android application using GridView. Follow the following steps to modify the Android application we created in Hello World Example chapter − Following is the content of the modified main activity file src/com.example.helloworld/MainActivity.java. This file can include each of the fundamental lifecycle methods. package com.example.helloworld; import android.os.Bundle; import android.app.Activity; import android.view.Menu; import android.widget.GridView; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); GridView gridview = (GridView) findViewById(R.id.gridview); gridview.setAdapter(new ImageAdapter(this)); } } Following will be the content of res/layout/activity_main.xml file − <?xml version="1.0" encoding="utf-8"?> <GridView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/gridview" android:layout_width="fill_parent" android:layout_height="fill_parent" android:columnWidth="90dp" android:numColumns="auto_fit" android:verticalSpacing="10dp" android:horizontalSpacing="10dp" android:stretchMode="columnWidth" android:gravity="center" /> Following will be the content of res/values/strings.xml to define two new constants − <?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">HelloWorld</string> <string name="action_settings">Settings</string> </resources> Following will be the content of src/com.example.helloworld/ImageAdapter.java file − package com.example.helloworld; import android.content.Context; import android.view.View; import android.view.ViewGroup; import android.widget.BaseAdapter; import android.widget.GridView; import android.widget.ImageView; public class ImageAdapter extends BaseAdapter { private Context mContext; // Constructor public ImageAdapter(Context c) { mContext = c; } public int getCount() { 
return mThumbIds.length; } public Object getItem(int position) { return null; } public long getItemId(int position) { return 0; } // create a new ImageView for each item referenced by the Adapter public View getView(int position, View convertView, ViewGroup parent) { ImageView imageView; if (convertView == null) { imageView = new ImageView(mContext); imageView.setLayoutParams(new GridView.LayoutParams(85, 85)); imageView.setScaleType(ImageView.ScaleType.CENTER_CROP); imageView.setPadding(8, 8, 8, 8); } else { imageView = (ImageView) convertView; } imageView.setImageResource(mThumbIds[position]); return imageView; } // Keep all Images in array public Integer[] mThumbIds = { R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5, R.drawable.sample_6, R.drawable.sample_7, R.drawable.sample_0, R.drawable.sample_1, R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5, R.drawable.sample_6, R.drawable.sample_7, R.drawable.sample_0, R.drawable.sample_1, R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5, R.drawable.sample_6, R.drawable.sample_7 }; } Let's try to run our modified Hello World! application we just modified. I assume you had created your AVD while doing environment setup. To run the app from Android Studio, open one of your project's activity files and click Run icon from the toolbar. Android studio installs the app on your AVD and starts it and if everything is fine with your setup and application, it will display following Emulator window − Let's extend the functionality of above example where we will show selected grid image in full screen. To achieve this we need to introduce a new activity. Just keep in mind for any activity we need perform all the steps like we have to implement an activity class, define that activity in AndroidManifest.xml file, define related layout and finally link that sub-activity with the main activity by it in the main activity class. 
So let's follow the steps to modify above example − Following is the content of the modified main activity file src/com.example.helloworld/MainActivity.java. This file can include each of the fundamental life cycle methods. package com.example.helloworld; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.Menu; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.GridView; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); GridView gridview = (GridView) findViewById(R.id.gridview); gridview.setAdapter(new ImageAdapter(this)); gridview.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> parent, View v, int position, long id){ // Send intent to SingleViewActivity Intent i = new Intent(getApplicationContext(), SingleViewActivity.class); // Pass image index i.putExtra("id", position); startActivity(i); } }); } } Following will be the content of new activity file src/com.example.helloworld/SingleViewActivity.java file − package com.example.helloworld; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.widget.ImageView; public class SingleViewActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.single_view); // Get intent data Intent i = getIntent(); // Selected image id int position = i.getExtras().getInt("id"); ImageAdapter imageAdapter = new ImageAdapter(this); ImageView imageView = (ImageView) findViewById(R.id.SingleView); imageView.setImageResource(imageAdapter.mThumbIds[position]); } } Following will be the content of res/layout/single_view.xml file − <?xml version="1.0" encoding="utf-8"?> <LinearLayout 
xmlns:android="http://schemas.android.com/apk/res/android"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:orientation="vertical" >

   <ImageView
      android:id="@+id/SingleView"
      android:layout_width="fill_parent"
      android:layout_height="fill_parent"/>

</LinearLayout>

Following will be the content of AndroidManifest.xml, which registers the new activity −

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="com.example.helloworld">

   <application
      android:allowBackup="true"
      android:icon="@drawable/ic_launcher"
      android:label="@string/app_name"
      android:theme="@style/AppTheme" >

      <activity
         android:name="com.example.helloworld.MainActivity"
         android:label="@string/app_name" >
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>

      <activity android:name=".SingleViewActivity"></activity>
   </application>
</manifest>

Let's try to run the application we just modified. I assume you had created your AVD while doing environment setup. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Android Studio installs the app on your AVD and starts it, and if everything is fine with your setup and application, it will display the following Emulator window −

Now if you click on either of the images, it will be displayed as a single image, for example −
Python String Interpolation. Learn the basics of string... | by Iffat Malik Gore | Towards Data Science
String interpolation is the process of injecting values into placeholders in a string literal (a placeholder is nothing but a variable to which you can assign data later). It helps in dynamically formatting the output in a fancier way. Python supports multiple ways to format string literals. All string interpolation methods return new values and do not manipulate the original string.

Apart from its normal arithmetic usage, the % operator is overloaded in the str class to perform string formatting; it interpolates values of various types into the string literal. Some commonly used format specifiers are %s (string), %d (integer), and %f (floating point). The general format for using % is a string literal containing format specifiers, followed by the % operator and the value(s) to interpolate.

Output:
-------
Result of calculation is 4.38
Hey! I'm Emma, 33 years old and I love Python Programing
Hey! I'm Emma and I'm 33 years old.

In the above example, we used two %s and one %d format specifiers, which are nothing but placeholders for the values of the tuple ('Emma', 33, 'Python'). % takes only one argument, and hence a tuple is used for multiple-value substitution. Note that the values of the tuple are passed in the order they are specified. Python will throw a TypeError exception if the type of a value and the type of the corresponding format specifier do not match. In the code below, %d is the format specifier for the first value 'Emma', which is a string, and hence a TypeError exception is raised.

print("\nHey! I'm %d, %d years old and I love %s Programing"%('Emma',33,'Python'))
--------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-f2e9acc11cdb> in <module>
      1
----> 2 print("\nHey! I'm %d, %d years old and I love %s Programing"%('Emma',33,'Python'))

TypeError: %d format: a number is required, not str

Though %-formatting has been available in Python since the beginning, it is clumsy when there are multiple substitutions in a single string.
str.format( ) is used for positional formatting; it allows re-arranging the order of placeholders within string literals without changing the order in which the values are listed in .format( ). { } is used as a placeholder, and only the values passed to the .format( ) method are substituted into { }; the rest of the string literal is unchanged in the output. Positional formatting can be achieved by using either an index or a keyword in the placeholders. If neither is specified, objects are injected in the order they are listed in .format( ) by default.

Output:
-------
Hey! I'm Emma, 33 years old, and I love Python Programming.
33 years old Emma loves Python programming.
Emma loves Python programming and she is 33 years old.

Use either indexing/keywords consistently or keep all placeholders blank; you cannot mix the two, as Python will throw a ValueError exception. In the code below, two of the placeholders are kept blank and one has an index '2' reference, and hence the exception is raised.

name="Emma"
age=33
lan="Python"
print("{} years old {2} loves {} programming.".format(age,lan,name))
--------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-15-b38087c9c415> in <module>
      5 lan="Python"
      6
----> 7 print("{} years old {2} loves {} programming.".format(age,lan,name))
      8

ValueError: cannot switch from automatic field numbering to manual field specification

The .format( ) method is versatile and can easily be used with all data structures as well.

Output:
-------
Hey! My name is Emma, I'm 33 years old, currently living in UK and love Python programming
Person info from List: Emma from UK
Person info from Tuple: Emma 33 UK
Person info Set: 33 Python Emma

As shown in the code above, values received from the dictionary person_dict have key-values as placeholders in the string literal. '**' (double asterisk) is used for unpacking dictionary values mapped to keys. For both list and tuple, indexing has been used in the placeholders.
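The positional, indexed, keyword, and dict-unpacking forms described above can be checked in a few lines:

```python
name, age, lan = "Emma", 33, "Python"

# Default order, explicit indices, and keyword arguments
print("{} loves {}".format(name, lan))        # Emma loves Python
print("{1} loves {0}".format(lan, name))      # Emma loves Python
print("{n} is {a}".format(n=name, a=age))     # Emma is 33

# ** unpacks a dict into keyword arguments for .format()
person = {"name": "Emma", "age": 33}
print("{name} is {age}".format(**person))     # Emma is 33
```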
An f-string, or formatted string literal, provides a way of formatting strings using a minimal syntax (it was introduced in Python 3.6). Code readability is high, and hence it is generally the preferred way of formatting string literals. 'f' or 'F' is used as a prefix and {} are used as placeholders. Unlike .format(), an f-string doesn't allow empty braces {}. F-string expressions are evaluated at run time. F-strings are faster than the two most commonly used string formatting mechanisms, %-formatting and .format().

Output:
-------
Emma is 33 years old currently living in UK and loves Python programming.
33 years old Emma lives in UK and loves Python programming
'Emma' is a python developer from UK
Date in default format: 2020-06-04 17:01:31.407452
Date in custom format: 04/06/20

As shown in the above code, we just have to prepend 'f' to the string and include variable names directly in the placeholders {}. f-string formatting can be seamlessly used with classes and their objects as well.

Output:
-------
Person Details: Emma is 33 years old Python developer.
[Name: Emma, Age: 33, Programming_language: Python]

Here, we defined two methods in the Person class, prepended 'f' to the strings, and placed class/object reference variables in {}. Clean and precise!

Another way of string interpolation is facilitated by the Template class of the string module. It lets you make substitutions in a string using a mapping object. Here, a valid Python identifier preceded by the sign '$' is used as a placeholder. Basically, we define a string literal using a Template class object and then map values to the placeholders through the substitute() or safe_substitute() method. The '$' sign performs the actual substitution, and the identifier is used to map the replacement keywords specified in substitute() or safe_substitute().

substitute( ) — It raises an error if the value for the corresponding placeholder is not supplied.
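The first output block above can be reproduced with a snippet like this (a minimal sketch; the variable names and the fixed datetime value are illustrative, not from the original code):

```python
from datetime import datetime

name, age, country, lang = "Emma", 33, "UK", "Python"

# Variables (and arbitrary expressions) go directly inside {}
print(f"{name} is {age} years old currently living in {country} and loves {lang} programming.")
print(f"{age} years old {name} lives in {country} and loves {lang} programming")

# !r applies repr() to the value, wrapping the string in quotes
print(f"{name!r} is a python developer from {country}")

# A format spec after ':' works too, e.g. strftime-style codes for dates
now = datetime(2020, 6, 4, 17, 1, 31)
print(f"Date in default format: {now}")
print(f"Date in custom format: {now:%d/%m/%y}")
```

Because f-string expressions are evaluated at run time, any expression valid at that point (attribute access, method calls, format specs) can appear inside the braces.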
safe_substitute( ) — It is more appropriate when there is a possibility of incomplete user-supplied data (when a value is missing, it leaves the placeholder unchanged).

Output:
-------
Emma is 33 years old and loves Python programming!
$age years old Emma loves Python programming
$age years old Harry loves Java programming

In the above example, we created the object person_info of the Template class, which holds the string literal. The actual values are then injected using the substitute() method, which maps the values to the placeholder names. substitute() raises an error if the value for a corresponding placeholder is not supplied. Here, no value for $years is supplied in substitute(), and hence it raises a KeyError exception.

person_info=Template('\n$name is $years years old and loves $lan programming!')
print(person_info.substitute(name='Emma',lan='Python'))

--------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-7-2b10997fef23> in <module>
      4 #creating object of Template class
      5 person_info=Template('\n$name is $years years old and loves $lan programming!')
----> 6 print(person_info.substitute(name='Emma',lan='Python')) #substitute()
      7
      8

~\Anaconda3\lib\string.py in substitute(*args, **kws)
    130             raise ValueError('Unrecognized named group in pattern',
    131                              self.pattern)
--> 132         return self.pattern.sub(convert, self.template)
    133
    134     def safe_substitute(*args, **kws):

~\Anaconda3\lib\string.py in convert(mo)
    123             named = mo.group('named') or mo.group('braced')
    124             if named is not None:
--> 125                 return str(mapping[named])
    126             if mo.group('escaped') is not None:
    127                 return self.delimiter

KeyError: 'years'

In the same example, we also used safe_substitute(), where no value was assigned for the placeholder $age, and hence it remains unchanged in the output without raising any exception.
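Both Template methods can be sketched together as follows (a minimal example; the template string here uses $age consistently rather than the article's mixed $years/$age placeholders):

```python
from string import Template

person_info = Template("$name is $age years old and loves $lan programming!")

# substitute() injects a value for every placeholder;
# it raises KeyError if any placeholder is left without a value
print(person_info.substitute(name="Emma", age=33, lan="Python"))

# safe_substitute() leaves an unsupplied placeholder ($age here)
# unchanged in the output instead of raising an exception
print(person_info.safe_substitute(name="Emma", lan="Python"))
```

The second call prints the literal text "$age" in place of the missing value, which is why safe_substitute() suits incomplete user-supplied data.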
This method is considered for complex customized string manipulations; however, the biggest limitation of the string.Template class is that it only takes string arguments.

I personally use f-strings most of the time, as they are concise, very convenient to write, and at the same time highly readable.

Some of the great resources for the topic are:

PEP 498 — Literal String Interpolation
Formatted String Literals

The code used in this article can be accessed from my GitHub Repository.
Java HashMap
In the ArrayList chapter, you learned that arrays store items as an ordered collection, and you have to access them with an index number (int type). A HashMap, however, stores items in "key/value" pairs, and you can access them by a key of another type (e.g. a String).

One object is used as a key (index) to another object (value). It can store different types (String keys and Integer values) or the same type (String keys and String values):

Create a HashMap object called capitalCities that will store String keys and String values:

import java.util.HashMap; // import the HashMap class

HashMap<String, String> capitalCities = new HashMap<String, String>();

The HashMap class has many useful methods. For example, to add items to it, use the put() method:

// Import the HashMap class
import java.util.HashMap;

public class Main {
  public static void main(String[] args) {
    // Create a HashMap object called capitalCities
    HashMap<String, String> capitalCities = new HashMap<String, String>();

    // Add keys and values (Country, City)
    capitalCities.put("England", "London");
    capitalCities.put("Germany", "Berlin");
    capitalCities.put("Norway", "Oslo");
    capitalCities.put("USA", "Washington DC");
    System.out.println(capitalCities);
  }
}

To access a value in the HashMap, use the get() method and refer to its key:

capitalCities.get("England");

To remove an item, use the remove() method and refer to the key:

capitalCities.remove("England");

To remove all items, use the clear() method:

capitalCities.clear();

To find out how many items there are, use the size() method:

capitalCities.size();

Loop through the items of a HashMap with a for-each loop.
Note: Use the keySet() method if you only want the keys, and use the values() method if you only want the values:

// Print keys
for (String i : capitalCities.keySet()) {
  System.out.println(i);
}

// Print values
for (String i : capitalCities.values()) {
  System.out.println(i);
}

// Print keys and values
for (String i : capitalCities.keySet()) {
  System.out.println("key: " + i + " value: " + capitalCities.get(i));
}

Keys and values in a HashMap are actually objects. In the examples above, we used objects of type "String". Remember that a String in Java is an object (not a primitive type). To use other types, such as int, you must specify an equivalent wrapper class: Integer. For other primitive types, use Boolean for boolean, Character for char, Double for double, etc.

Create a HashMap object called people that will store String keys and Integer values:

// Import the HashMap class
import java.util.HashMap;

public class Main {
  public static void main(String[] args) {
    // Create a HashMap object called people
    HashMap<String, Integer> people = new HashMap<String, Integer>();

    // Add keys and values (Name, Age)
    people.put("John", 32);
    people.put("Steve", 30);
    people.put("Angie", 33);

    for (String i : people.keySet()) {
      System.out.println("key: " + i + " value: " + people.get(i));
    }
  }
}
BigInteger pow() Method in Java - GeeksforGeeks
18 Feb, 2022

The java.math.BigInteger.pow(int exponent) method is used to calculate a BigInteger raised to the power of some other number passed as exponent, whose value is equal to (this)^exponent. This method performs the operation on the current BigInteger by which it is called, with the exponent passed as a parameter.

Syntax:

public BigInteger pow(int exponent)

Parameter: This method accepts a parameter exponent, which is the exponent to which this BigInteger should be raised.

Returns: This method returns a BigInteger which is equal to (this)^exponent.

Exception: The parameter exponent must be a positive number (exponent >= 0), otherwise an ArithmeticException is thrown.

Examples:

Input: BigInteger1=321456, exponent=5
Output: 3432477361331488865859403776
Explanation: BigInteger1.pow(exponent) = 3432477361331488865859403776, i.e. 321456^5 = 3432477361331488865859403776

Input: BigInteger1=45321, exponent=3
Output: 93089018611161
Explanation: BigInteger1.pow(exponent) = 93089018611161, i.e. 45321^3 = 93089018611161

Below programs illustrate the pow() method of the BigInteger class.

Example 1:

Java

// Java program to demonstrate
// pow() method of BigInteger

import java.math.BigInteger;

public class GFG {
    public static void main(String[] args)
    {
        // Creating BigInteger object
        BigInteger b1;
        b1 = new BigInteger("321456");

        int exponent = 5;

        // apply pow() method
        BigInteger result = b1.pow(exponent);

        // print result
        System.out.println("Result of pow operation between BigInteger "
                           + b1 + " and exponent " + exponent
                           + " equal to " + result);
    }
}

Output:

Result of pow operation between BigInteger 321456 and exponent 5 equal to 3432477361331488865859403776

Example 2:

Java

// Java program to demonstrate
// pow() method of BigInteger

import java.math.BigInteger;

public class GFG {
    public static void main(String[] args)
    {
        // Creating BigInteger object
        BigInteger b1;
        b1 = new BigInteger("41432345678");

        int exponent = 6;

        // apply pow() method
        BigInteger result = b1.pow(exponent);

        // print result
        System.out.println("Result of pow operation between BigInteger "
                           + b1 + " and exponent " + exponent
                           + " equal to " + result);
    }
}

Output:

Result of pow operation between BigInteger 41432345678 and exponent 6 equal to 5058679076487529899393537031261743031889730764186441745527485504

Example 3: Program showing the exception when the exponent passed as a parameter is less than zero.

Java

// Java program to demonstrate
// pow() method of BigInteger

import java.math.BigInteger;

public class GFG {
    public static void main(String[] args)
    {
        // Creating BigInteger object
        BigInteger b1;
        b1 = new BigInteger("76543");

        int exponent = -17;

        try {
            // apply pow() method
            BigInteger result = b1.pow(exponent);

            // print result
            System.out.println("Result of pow operation between "
                               + b1 + " and " + exponent
                               + " equal to " + result);
        }
        catch (Exception e) {
            System.out.println(e);
        }
    }
}

Output:

java.lang.ArithmeticException: Negative exponent

Reference: https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/math/BigInteger.html#pow(int)
Angular 4 - Http Service
Http Service will help us fetch external data, post to it, etc. We need to import the http module to make use of the http service. Let us consider an example to understand how to make use of the http service.

To start using the http service, we need to import the module in app.module.ts as shown below −

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { HttpModule } from '@angular/http';
import { AppComponent } from './app.component';

@NgModule({
   declarations: [
      AppComponent
   ],
   imports: [
      BrowserModule,
      BrowserAnimationsModule,
      HttpModule
   ],
   providers: [],
   bootstrap: [AppComponent]
})
export class AppModule { }

If you see the highlighted code, we have imported the HttpModule from @angular/http and the same is also added in the imports array.

Let us now use the http service in the app.component.ts.

import { Component } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   constructor(private http: Http) { }
   ngOnInit() {
      this.http.get("http://jsonplaceholder.typicode.com/users").
      map((response) => response.json()).
      subscribe((data) => console.log(data))
   }
}

Let us understand the code highlighted above. We need to import http to make use of the service, which is done as follows −

import { Http } from '@angular/http';

In the class AppComponent, a constructor is created with the private variable http of type Http. To fetch the data, we need to use the get API available with http as follows −

this.http.get();

It takes the url to be fetched as the parameter, as shown in the code. We will use the test url - https://jsonplaceholder.typicode.com/users to fetch the json data. Two operations are performed on the fetched data: map and subscribe.
The map method helps to convert the data to json format. To use map, we need to import it as shown below −

import 'rxjs/add/operator/map';

Once the map is done, the subscribe will log the output in the console, as shown in the browser −

If you see, the json objects are displayed in the console. The objects can be displayed in the browser too. For the objects to be displayed in the browser, update the code in app.component.html and app.component.ts as follows −

import { Component } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   constructor(private http: Http) { }
   httpdata;
   ngOnInit() {
      this.http.get("http://jsonplaceholder.typicode.com/users").
      map((response) => response.json()).
      subscribe((data) => { this.displaydata(data); })
   }
   displaydata(data) { this.httpdata = data; }
}

In app.component.ts, using the subscribe method we call the displaydata method and pass the fetched data as a parameter to it. In the displaydata method, we store the data in a variable httpdata. The data is displayed in the browser using *ngFor over this httpdata variable, which is done in the app.component.html file.

<ul *ngFor = "let data of httpdata">
   <li>Name : {{data.name}} Address: {{data.address.city}}</li>
</ul>

The json object is as follows −

{
   "id": 1,
   "name": "Leanne Graham",
   "username": "Bret",
   "email": "Sincere@april.biz",
   "address": {
      "street": "Kulas Light",
      "suite": "Apt. 556",
      "city": "Gwenborough",
      "zipcode": "92998-3874",
      "geo": {
         "lat": "-37.3159",
         "lng": "81.1496"
      }
   },
   "phone": "1-770-736-8031 x56442",
   "website": "hildegard.org",
   "company": {
      "name": "Romaguera-Crona",
      "catchPhrase": "Multi-layered client-server neural-net",
      "bs": "harness real-time e-markets"
   }
}

The object has properties such as id, name, username, email, and address, which internally has street, city, etc.,
and other details related to phone, website, and company. Using the *ngFor loop, we display the name and the city details in the browser, as shown in the app.component.html file. This is how the display is shown in the browser −

Let us now add the search parameter, which will filter based on specific data. We need to fetch the data based on the search param passed. Following are the changes done in the app.component.html and app.component.ts files −

import { Component } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'app';
   searchparam = 2;
   jsondata;
   name;
   constructor(private http: Http) { }
   ngOnInit() {
      this.http.get("http://jsonplaceholder.typicode.com/users?id=" + this.searchparam).
      map((response) => response.json()).
      subscribe((data) => this.converttoarray(data))
   }
   converttoarray(data) {
      console.log(data);
      this.name = data[0].name;
   }
}

For the get api, we added the search param id = this.searchparam. The searchparam is equal to 2; we need the details of id=2 from the json file.

{{name}}

This is how the browser is displayed −

We have consoled the data received from the http in the browser; the same is displayed in the browser console. The name from the json with id=2 is displayed in the browser.
How to transform JSON text to a JavaScript object ? - GeeksforGeeks
01 Dec, 2021

JSON (JavaScript Object Notation) is a lightweight data-interchange format. As its name suggests, JSON is derived from the JavaScript programming language, but it is available for use in many languages including Python, Ruby, PHP, and Java, and hence it can be called language-independent. For humans, it is easy to read and write, and for machines, it is easy to parse and generate. It is very useful for storing and exchanging data.

A JSON object is a key-value data format that is typically rendered in curly braces. A JSON object consists of curly braces ( { } ) at either end and has key-value pairs inside the braces. Each key-value pair inside the braces is separated by a comma (,). A JSON object looks something like this:

{
   "key": "value",
   "key": "value",
   "key": "value",
}

Example of a JSON object:

{
   "rollno": 101,
   "name": "Nikita",
   "age": 21,
}

Conversion of JSON text to a JavaScript object: JSON text can be converted into a JavaScript object using the function JSON.parse(). The JSON.parse() method in JavaScript is used to parse a JSON string, which is written in JSON format, and returns a JavaScript object.

Syntax:

JSON.parse(string, function)

Parameters: This method accepts two parameters, as mentioned above and described below:

string: It is a required parameter and it contains a string that is written in JSON format.
function: It is an optional parameter used to transform the result. The function is called for each item.
Example:

HTML

<script>
    var obj = JSON.parse('{"rollno":101, "name": "Nikita", "age": 21}');
    document.write("Roll no is " + obj.rollno + "<br>");
    document.write("Name is " + obj.name + "<br>");
    document.write("Age is " + obj.age + "<br>");
</script>

Output:

Roll no is 101
Name is Nikita
Age is 21

Example 2:

HTML

<html>
<body>
    <h2>JavaScript JSON parse() Method</h2>
    <p id="Geek"></p>
</body>
<script>
    var obj = JSON.parse('{"var1":"Hello","var2":"Geeks!"}');
    document.getElementById("Geek").innerHTML = obj.var1 + " " + obj.var2;
</script>
</html>

Output:

References:
https://www.geeksforgeeks.org/javascript-json-parse-method/
https://www.geeksforgeeks.org/javascript-json/
Explain the components of Bootstrap - GeeksforGeeks
10 Nov, 2021

Bootstrap 4 provides a variety of customizable and reusable components which make development faster and easier. They are heavily based on the base-modifier nomenclature, i.e. the base class holds a group of shared properties while a modifier class holds a group of individual styles. For example, .btn is a base class and .btn-primary or .btn-success is a modifier class. The Bootstrap components range from alerts, buttons, badges, and cards to various other components.

List of components:
Jumbotron: It simply puts extra attention on particular content or information by making it larger and more eye-catching.

Alerts: It is a popup with a predefined message that appears after a particular action.

Buttons: These are customized buttons used to perform an action in a form, dialogue box, etc. They come in multiple states and sizes and have predefined styles.

Button group: It is a group of buttons aligned in a single line; they can be arranged both vertically and horizontally.

Badge: It is a labeling component that is used to add additional information.

Progress Bar: It is used to show the progress of a particular operation with a custom progress bar. They have text labels, stacked bars, and animated backgrounds.

Spinner: The spinner displays the loading state of websites or projects. They are built with HTML and CSS and don't require any JavaScript.

Scrollspy: It keeps updating the navigation bar to the currently active link based on the scroll position in the viewport.

List group: It is used to display an unordered series of content in a proper way.

Card: It provides a customizable, extensible, and flexible content container.

Dropdown: It is used to drop the menu in the format of a list of links; they are contextual and toggleable overlays.

Navs: It is used to create a basic and simple navigation menu with the .nav base class.

Navbar: The navigation bar is the header at the top of a website or webpage.

Forms: Forms are used to take multiple inputs at once from the user. Bootstrap has two layouts available: stacked and inline.

Input groups: They extend form controls by adding a button, button group, or text on either side of inputs.

Breadcrumb: It provides the location of the current page in a navigational hierarchy and also adds separators through CSS.

Carousel: It is a slide show of image or text content built with CSS 3D and JavaScript.

Toast: It displays a message for a small amount of time, a few seconds. They are alert messages designed to imitate push notifications popular in desktop and mobile systems.

Tooltip: It provides small information about the element/link when the mouse hovers over the element.

Popovers: It displays extra information about the element/link when clicked on it.

Collapse: It is a JavaScript plugin that is used to show or hide content.

Modal: It is a small popup window positioned over the actual window.

Pagination: It is used to easily navigate between different pages; a large block of connected links is used for making them accessible.

Media Object: The Media object is used for repetitive and complex components like tweets or blogs. The images or videos are placed/aligned to the left or the right of the content.

Example 1: In this example, we will use a few of the components from the list.

HTML

<!DOCTYPE html>
<html>
  <head>
    <title>Components of BootStrap 4</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" />
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
  </head>

  <!-- A nav code-->
  <br />
  <h3>Nav:</h3>
  <ul class="nav">
    <li class="nav-item">
      <a class="nav-link active" href="#">Active link</a>
    </li>
    <li class="nav-item">
      <a class="nav-link" href="#provide link url here">First link</a>
    </li>
    <li class="nav-item">
      <a class="nav-link" href="#provide link url here">second link</a>
    </li>
    <li class="nav-item">
      <a class="nav-link disabled" href="#provide link url here" tabindex="-1" aria-disabled="true">Disabled</a>
    </li>
  </ul>
  <!-- A nav code-->

  <!-- A small alert code-->
  <h3>Alert:</h3>
  <div class="alert alert-info" role="alert">
    A simple alert!
  </div>
  <!-- A small alert code-->

  <h3>Modal:</h3>
  <!-- Button trigger modal-->
  <button type="button" class="btn btn-success" data-toggle="modal" data-target="#exampleModal">
    geeksforgeeks
  </button>

  <!-- Modal -->
  <div class="modal fade" id="exampleModal" tabindex="-1" aria-labelledby="exampleModalLabel" aria-hidden="true">
    <div class="modal-dialog">
      <div class="modal-content">
        <div class="modal-header">
          <h5 class="modal-title" id="exampleModalLabel">
            geeksforgeeks
          </h5>
          <button type="button" class="close" data-dismiss="modal" aria-label="Close">
            <span aria-hidden="true">×</span>
          </button>
        </div>
        <div class="modal-body">
          Hello, thanks for checking out geeksforgeeks!
        </div>
        <div class="modal-footer">
          <button type="button" class="btn btn-secondary" data-dismiss="modal">
            Close
          </button>
          <button type="button" class="btn btn-primary">
            Save
          </button>
        </div>
      </div>
    </div>
  </div>
  <!-- A button trigger modal-->
</html>

Output:

Example 2: This example illustrates the use of Bootstrap jumbotron.

HTML

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Bootstrap Example</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" />
  </head>
  <body>
    <div class="container">
      <div class="jumbotron">
        <h1 class="text-center text-success">
          GeeksforGeeks
        </h1>
        <h3>Bootstrap Jumbotron Tutorial</h3>
        <p>
          Bootstrap is a free and open-source tool collection for creating
          responsive websites and web applications. It is the most popular
          HTML, CSS, and JavaScript framework for developing responsive,
          mobile-first websites.
        </p>
      </div>
    </div>
  </body>
</html>

Output: Bootstrap Jumbotron

Example 3: This example illustrates the use of Bootstrap toast.
HTML

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Bootstrap Toast Example</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" />
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.16.0/umd/popper.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
  </head>
  <body>
    <div class="container">
      <h3 class="text-success">GeeksforGeeks</h3>
      <h5>Toast Example</h5>
      <div class="toast" data-autohide="false">
        <div class="toast-header">
          <strong class="mr-auto text-primary">
            GeeksforGeeks
          </strong>
          <button type="button" class="ml-2 mb-1 close" data-dismiss="toast">
            ×
          </button>
        </div>
        <div class="toast-body">
          A Computer Science portal for geeks.
        </div>
      </div>
    </div>

    <script>
      $(document).ready(function () {
        $(".toast").toast("show");
      });
    </script>
  </body>
</html>

Output: Bootstrap Toast
list merge() function in C++ STL - GeeksforGeeks
19 Jul, 2018

The list::merge() is an inbuilt function in C++ STL which merges two sorted lists into one. The lists should be sorted in ascending order. If no comparator is passed as a parameter, it merges the two sorted lists into a single sorted list. If a comparator is passed, it merges the lists accordingly, doing internal comparisons.

Syntax:

list1_name.merge(list2_name)

Parameters: The function accepts a single mandatory parameter, list2_name, which specifies the list to be merged into list1.
Return value: The function does not return anything.

The programs below demonstrate the function.

Program 1:

// program below demonstrates the
// merge function in c++
#include <bits/stdc++.h>
using namespace std;

int main()
{
    // declaring the lists,
    // initially sorted
    list<int> list1 = { 10, 20, 30 };
    list<int> list2 = { 40, 50, 60 };

    // merge operation
    list2.merge(list1);

    cout << "List: ";
    for (auto it = list2.begin(); it != list2.end(); ++it)
        cout << *it << " ";

    return 0;
}

Output:

List: 10 20 30 40 50 60

Syntax:

list1_name.merge(list2_name, comparator)

Parameters: The function accepts two parameters which are described below:
list2_name – It specifies the list2 which is to be merged into list1.
comparator – It is a binary predicate which takes two values of the same type as those contained in the list, and returns true if the first argument should go before the second in the strict weak ordering it defines, and false otherwise.
Return value: The function does not return anything.

Program 2:

// program below demonstrates the
// merge function in c++
#include <bits/stdc++.h>
using namespace std;

// comparator which compares elements internally
bool comparator(int first, int second)
{
    return first < second;
}

int main()
{
    // declaring the lists
    list<int> list1 = { 1, 70, 80 };
    list<int> list2 = { 2, 3, 4 };

    // merge operation
    list1.merge(list2, comparator);

    cout << "List: ";
    for (auto it = list1.begin(); it != list1.end(); ++it)
        cout << *it << " ";

    return 0;
}

Output:

List: 1 2 3 4 70 80
basic_istream::putback() in C++ with Examples - GeeksforGeeks
28 May, 2020

The basic_istream::putback() function is used to put a character back into the input stream. This function is present in the iostream header file. Below is the syntax for the same:

Header File:

#include <iostream>

Syntax:

basic_istream& putback (char_type ch);

Parameter: ch: It represents the character to be put back into the input stream.

Return Value: The iostream::basic_istream::putback() returns the basic_istream object.

Below are the programs to understand the implementation of std::basic_istream::putback() in a better way:

Program 1:

// C++ code for basic_istream::putback()
#include <bits/stdc++.h>
using namespace std;

int main()
{
    stringstream gfg1("GeeksforGeeks");
    gfg1.get();

    // putback A into the input string
    if (gfg1.putback('A'))
        cout << gfg1.rdbuf() << endl;

    istringstream gfg2("GeeksforGeeks");
    gfg2.get();

    if (gfg2.putback('A'))
        cout << gfg2.rdbuf() << endl;
    else
        cout << "putback is failed here\n";

    gfg2.clear();

    // Again putback G in the string
    if (gfg2.putback('G'))
        cout << gfg2.rdbuf() << endl;
}

Output:

AeeksforGeeks
putback is failed here
GeeksforGeeks

Program 2:

// C++ code for basic_istream::putback()
#include <bits/stdc++.h>
using namespace std;

int main()
{
    stringstream gfg1("GOOD");
    gfg1.get();

    // putback B into the input string
    if (gfg1.putback('B'))
        cout << gfg1.rdbuf() << endl;

    istringstream gfg2("GOOD");
    gfg2.get();

    if (gfg2.putback('B'))
        cout << gfg2.rdbuf() << endl;
    else
        cout << "putback is failed here\n";

    gfg2.clear();

    // Again putback G in the string
    if (gfg2.putback('G'))
        cout << gfg2.rdbuf() << endl;
}

Output:

BOOD
putback is failed here
GOOD

Reference: http://www.cplusplus.com/reference/istream/istream/putback/
How to use for and foreach loop in Golang? - GeeksforGeeks
23 Nov, 2021

There is only one looping construct in Golang, and that is the for loop. The for loop in Golang has three components, which must be separated by semicolons (;):

The initialization statement: executed before the first iteration, e.g. i := 0
The condition expression: evaluated just before every iteration, e.g. i < 5
The post statement: executed at the end of every iteration, e.g. i++

No parentheses are needed to enclose those three components, but to define a block we must use braces { }.

for i := 0; i < 5; i++ {
    // statements to execute......
}

The initialization and post statements are optional.

i := 0
for ; i < 5; {
    i++
}

You can use the for loop as a while loop in Golang. Just drop all the semicolons.

i := 0
for i < 5 {
    i++
}

Infinite Loop: If there is no condition statement, the loop becomes an infinite loop.

for {
}

Example:

Go

package main

import "fmt"

// function to print numbers 0
// to 9 and print the sum of 0 to 9
func main() {

    // variable to store the sum
    sum := 0

    // this is a for loop which runs from 0 to 9
    for i := 0; i < 10; i++ {

        // printing the value of
        // i : the iterating variable
        fmt.Printf("%d ", i)

        // calculating the sum
        sum += i
    }

    fmt.Printf("\nsum = %d", sum)
}

Output:

0 1 2 3 4 5 6 7 8 9
sum = 45

In Golang there is no foreach loop; instead, the for loop can be combined with the range keyword to act as a "foreach", giving you the choice of using the key or the value within the loop.

Syntax:

for <key>, <value> := range <container> {
}

Here,
key and value: can be any variable names you choose.
container: can be any variable which is an array, slice, map, etc.
Example 1:

Go

package main

import "fmt"

// Driver function to show the
// use of for and range together
func main() {

    // here we used a map of integer to string
    mapp := map[int]string{1: "one", 2: "two", 3: "three"}

    // integ acts as the keys of mapp,
    // spell acts as the values of
    // mapp which are mapped to integ
    for integ, spell := range mapp {

        // using integ and spell as
        // key and value of the map
        fmt.Println(integ, " = ", spell)
    }
}

Output (note that Go does not guarantee map iteration order, so the lines may appear in any order):

1 = one
2 = two
3 = three

Example 2:

Go

package main

import "fmt"

// Driver function to show the
// use of for and range together
func main() {

    // declaring a slice of integers
    arra := []int{1, 2, 3, 4}

    // traversing through the slice
    for index, itr := range arra {

        // the key or value variables
        // used in the for syntax
        // depend on the container.
        // If it is an array or slice,
        // the key refers to the index
        fmt.Print(index, " : ", itr, "\n")
    }

    // if we use only one variable
    // in the for loop, it refers to
    // the index, not the value; use
    // the blank identifier to iterate
    // over the values instead
    for _, itr := range arra {
        fmt.Print(itr, " ")
    }
}

Output:

0 : 1
1 : 2
2 : 3
3 : 4
1 2 3 4
Python Program for Common Divisors of Two Numbers - GeeksforGeeks
30 Nov, 2018

Given two integer numbers, the task is to find the count of all common divisors of the given numbers.

Input : a = 12, b = 24
Output: 6
// all common divisors are 1, 2, 3,
// 4, 6 and 12

Input : a = 3, b = 17
Output: 1
// the only common divisor is 1

Input : a = 20, b = 36
Output: 3
// all common divisors are 1, 2, 4

# Python program to find
# common divisors of two numbers

a = 12
b = 24
n = 0

for i in range(1, min(a, b) + 1):
    if a % i == 0 and b % i == 0:
        n += 1

print(n)

# Code contributed by Mohit Gupta_OMG

Output:

6

Please refer to the complete article on Common Divisors of Two Numbers for more details!
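The loop above runs min(a, b) iterations. Since every common divisor of a and b must divide gcd(a, b), the count can also be obtained by counting the divisors of the gcd — a short Python sketch of this alternative (not part of the original article):

```python
import math

def count_common_divisors(a, b):
    # every common divisor of a and b divides gcd(a, b),
    # so it is enough to count the divisors of g
    g = math.gcd(a, b)
    count = 0
    i = 1
    # check divisor candidates only up to sqrt(g);
    # divisors come in pairs (i, g // i)
    while i * i <= g:
        if g % i == 0:
            count += 1 if i * i == g else 2
        i += 1
    return count

print(count_common_divisors(12, 24))  # 6
print(count_common_divisors(3, 17))   # 1
print(count_common_divisors(20, 36))  # 3
```

This reduces the work from O(min(a, b)) divisions to roughly O(log min(a, b)) for the gcd plus O(sqrt(gcd)) for the divisor count.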
jQuery UI Dialog modal Option - GeeksforGeeks
13 Jan, 2021

The modal option, if set to true, disables interaction with all other items on the page while the dialog box is open. By default, the value is false.

Syntax:

$( ".selector" ).dialog({ modal: false });

Approach: First, add the jQuery UI scripts needed for your project.

<link href = "https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel = "stylesheet">
<script src = "https://code.jquery.com/jquery-1.10.2.js"></script>
<script src = "https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>

Example 1:

HTML

<!doctype html>
<html lang = "en">

<head>
    <meta charset = "utf-8">
    <link rel = "stylesheet" href = "https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css">
    <script src = "https://code.jquery.com/jquery-1.10.2.js"></script>
    <script src = "https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
    <script>
        $(function() {
            $( "#gfg" ).dialog({
                autoOpen: false,
                modal : true
            });
            $( "#geeks" ).click(function() {
                $( "#gfg" ).dialog( "open" );
            });
        });
    </script>
</head>

<body>
    <div id = "gfg" title = "GeeksforGeeks">
        Jquery UI | modal dialog option
    </div>
    <button id = "geeks">Open Dialog</button>
</body>

</html>

Output:

Example 2:

HTML

<!doctype html>
<html lang = "en">

<head>
    <meta charset = "utf-8">
    <link rel = "stylesheet" href = "https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css">
    <script src = "https://code.jquery.com/jquery-1.10.2.js"></script>
    <script src = "https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
    <script>
        $(function() {
            $( "#gfg" ).dialog({
                autoOpen: false,
                modal : false
            });
            $( "#geeks" ).click(function() {
                $( "#gfg" ).dialog( "open" );
            });
        });
    </script>
</head>

<body>
    <div id = "gfg" title = "GeeksforGeeks">
        Jquery UI | modal dialog option
    </div>
    <button id = "geeks">Open Dialog</button>
</body>

</html>

Output:
Python program to print all even numbers in a range - GeeksforGeeks
25 Apr, 2022

Given starting and end points, write a Python program to print all even numbers in that given range.

Example:

Input: start = 4, end = 15
Output: 4, 6, 8, 10, 12, 14

Input: start = 8, end = 11
Output: 8, 10

Example #1: Print all even numbers using a range with a step of 2. Define the start and end limits of the range, then iterate with a step of 2 (this works directly when the start value is even).

Python3

# here, inside the range function, the first number denotes the start,
# the second denotes the end and the third denotes the step (interval)
for num in range(4, 15, 2):
    print(num)

Output:

4
6
8
10
12
14

Example #2: Taking the range limits from user input and checking each number with num % 2 == 0.

Python3

# Python program to print even numbers in a given range

start = int(input("Enter the start of range: "))
end = int(input("Enter the end of range: "))

# iterating over each number in the range
for num in range(start, end + 1):

    # checking the condition
    if num % 2 == 0:
        print(num, end = " ")

Output:

Enter the start of range: 4
Enter the end of range: 10
4 6 8 10
PrintWriter printf(String, Object) method in Java with Examples - GeeksforGeeks
31 Jan, 2019

The printf(String, Object) method of PrintWriter Class in Java is used to print a formatted string in the stream. The string is formatted using the specified format and the arguments passed as parameters.

Syntax:

public PrintWriter printf(String format, Object... args)

Parameters: This method accepts two parameters:
format – the format according to which the String is to be formatted.
args – the arguments for the formatted string. They are optional, i.e. there can be no arguments or any number of arguments, according to the format.

Return Value: This method returns this PrintWriter instance.

Exception: This method throws the following exceptions:
NullPointerException – thrown if the format is null.
IllegalFormatException – thrown if the format specified is illegal or there are insufficient arguments.

The programs below illustrate the working of the printf(String, Object) method:

Program 1:

// Java program to demonstrate
// PrintWriter printf(String, Object) method

import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        try {
            // Get the parameters
            double arg = 47.65734;
            String format = "GeeksForGeeks %.8f";

            // Create a PrintWriter instance
            PrintWriter writer = new PrintWriter(System.out);

            // print the formatted string
            // to this writer using printf() method
            writer.printf(format, arg);
            writer.flush();
        }
        catch (Exception e) {
            System.out.println(e);
        }
    }
}

Output:

GeeksForGeeks 47.65734000

Program 2:

// Java program to demonstrate
// PrintWriter printf(String, Object) method

import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        try {
            // Get the parameters
            String arg1 = "GFG";
            String arg2 = "GeeksforGeeks";
            String format = "A Computer Science "
                            + "Portal %1$s, %1$s and %2$s";

            // Create a PrintWriter instance
            PrintWriter writer = new PrintWriter(System.out);

            // print the formatted string
            // to this writer using printf() method
            writer.printf(format, arg1, arg2);
            writer.flush();
        }
        catch (Exception e) {
            System.out.println(e);
        }
    }
}

Output:

A Computer Science Portal GFG, GFG and GeeksforGeeks
Line Graph View in Android with Example - GeeksforGeeks
31 Jan, 2021

If you are looking for a view to represent some statistical data, or for a UI for displaying a graph in your app, then in this article we will take a look at creating a Line Graph View in an Android app. We will be building a simple Line Graph View and displaying some sample data in it. Note that we are going to implement this project using the Java language.

Step 1: Create a New Project

To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. Note that you should select Java as the programming language.

Step 2: Add dependency to the build.gradle(Module:app) file

Navigate to Gradle Scripts > build.gradle(Module:app) and add the below dependency in the dependencies section.

implementation 'com.jjoe64:graphview:4.2.2'

After adding this dependency, sync your project, and now we will move towards its implementation.

Step 3: Working with the activity_main.xml file

Navigate to app > res > layout > activity_main.xml and add the below code to that file.

XML

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!--line graph view where we will be displaying our data-->
    <com.jjoe64.graphview.GraphView
        android:id="@+id/idGraphView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true" />

</RelativeLayout>

Step 4: Working with the MainActivity.java file

Go to the MainActivity.java file and refer to the following code. Comments are added inside the code to understand it in more detail.

Java

import android.os.Bundle;

import androidx.appcompat.app.AppCompatActivity;

import com.jjoe64.graphview.GraphView;
import com.jjoe64.graphview.series.DataPoint;
import com.jjoe64.graphview.series.LineGraphSeries;

public class MainActivity extends AppCompatActivity {

    // creating a variable
    // for our graph view.
    GraphView graphView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // the layout must be set before findViewById can
        // locate the graph view (this call was missing
        // in the original listing)
        setContentView(R.layout.activity_main);

        // on below line we are initializing our graph view.
        graphView = findViewById(R.id.idGraphView);

        // on below line we are adding data to our graph view.
        LineGraphSeries<DataPoint> series = new LineGraphSeries<DataPoint>(new DataPoint[] {
                // on below line we are adding
                // each point on our x and y axis.
                new DataPoint(0, 1),
                new DataPoint(1, 3),
                new DataPoint(2, 4),
                new DataPoint(3, 9),
                new DataPoint(4, 6),
                new DataPoint(5, 3),
                new DataPoint(6, 6),
                new DataPoint(7, 1),
                new DataPoint(8, 2)
        });

        // after adding data to our line graph series,
        // on below line we are setting
        // title for our graph view.
        graphView.setTitle("My Graph View");

        // on below line we are setting
        // text color to our graph view.
        graphView.setTitleColor(R.color.purple_200);

        // on below line we are setting
        // our title text size.
        graphView.setTitleTextSize(18);

        // on below line we are adding
        // data series to our graph view.
        graphView.addSeries(series);
    }
}

Now run your app and see the output of the app.
p5.js | frameRate() Function - GeeksforGeeks
09 Jul, 2019

The frameRate() function in p5.js is used to specify the number of frames to be displayed every second. Calling frameRate() with no arguments returns the current frame rate; the draw function must run at least once before it will return a value. This function is the same as the getFrameRate() function.

Syntax:

frameRate( c )

Parameters: The function accepts a single parameter c which stores the value of the frame rate.

The program below illustrates the frameRate() function in p5.js:

Example:

function setup() {

    // Create canvas of given size
    createCanvas(500, 200);

    // Set the font size
    textSize(20);

    // Use frameRate() function
    frameRate(3);
}

function draw() {

    // Set the background color
    background(0, 153, 0);

    // Display the output
    text("Frame Count with frameRate " + int(getFrameRate()), 100, 100);
}

Output:

Reference: https://p5js.org/reference/#/p5/frameRate
Generate current month in C#
To display the current month, first use the Now property to get the current date.

DateTime dt = DateTime.Now;

Now, use the Month property to get the current month.

dt.Month

Let us see the complete code.

using System;
using System.Linq;

public class Demo {
    public static void Main() {
        DateTime dt = DateTime.Now;
        Console.WriteLine(dt.Month);
    }
}

Output (for a date in September):

9

To display the current month's name, use the "MMM" format specifier.

using System;
using System.Linq;

public class Demo {
    public static void Main() {
        DateTime dt = DateTime.Now;
        Console.WriteLine(dt.ToString("MMM"));
    }
}

Output (for a date in September):

Sep
What is the meaning of DOCTYPE in HTML ? - GeeksforGeeks
06 Apr, 2021

The HTML document type declaration, or doctype, is an instruction used by web browsers to determine what version of HTML the website is written in. It helps browsers understand how the document should be interpreted, which eases the rendering process. It is neither an element nor a tag. The doctype should be placed at the top of the document. It must not contain any content and does not need a closing tag.

Syntax:

<!DOCTYPE html>

Example 1:

HTML

<!-- This resembles doctype for HTML5 file -->
<!DOCTYPE html>
<html>

<head>
    <title>Page Title</title>
</head>

<body>
    <h2>Welcome To GFG</h2>
    <p>Default code has been loaded into the Editor.</p>
</body>

</html>

Example 2:

HTML

<!-- Below is declaration of XHTML 1.1 document -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>

<head>
    <title>Page Title</title>
</head>

<body>
    <h2>Welcome To GFG</h2>
    <p>Default code has been loaded into the Editor.</p>
</body>

</html>

Following are the doctype declarations for various document types:

XHTML 1.1
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

XHTML 1.0 Frameset
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">

XHTML 1.0 Transitional
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

XHTML 1.0 Strict
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

HTML 4.01 Frameset
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">

HTML 4.01 Transitional
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

HTML 4.01 Strict
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

HTML 5
<!DOCTYPE html>
Combinatorial Game Theory | Set 2 (Game of Nim) - GeeksforGeeks
04 May, 2020

We strongly recommend referring to the below article as a prerequisite for this one.

Combinatorial Game Theory | Set 1 (Introduction)

In this post, the Game of Nim is discussed. The Game of Nim is described by the following rules:

“Given a number of piles, each containing some number of stones/coins. In each turn, a player can choose only one pile and remove any number of stones (at least one) from it. The player who cannot move is considered to lose the game (i.e., the one who takes the last stone is the winner).”

For example, consider two players, A and B, and three piles of coins initially having 3, 4, and 5 coins. We assume that the first move is made by A. See the below figure for a clear understanding of the whole gameplay.

A won the match (Note: A made the first move).

So did A have strong expertise in this game, or did A have some edge over B by starting first? Let us now play again, with the same configuration of piles as above, but this time with B starting first instead of A.

B won the match (Note: B made the first move).

From the above, it must be clear that the game depends on one important factor: who starts the game first. So does the player who starts first win every time? Let us play the game again, starting with A, and this time with a different initial configuration of piles: the piles have 1, 4, and 5 coins. Will A win again, having started first? Let us see. A made the first move, but lost the game.

So the result is clear: A has lost. But how? We know that this game depends heavily on which player starts first. Thus, there must be another factor which dominates the result of this simple-yet-interesting game. That factor is the initial configuration of the heaps/piles, which this time was different from the previous one.
So, we can conclude that this game depends on two factors: The player who starts first. The initial configuration of the piles/heaps. In fact, we can predict the winner of the game before even playing it! Nim-Sum: the cumulative XOR of the number of coins/stones in each pile/heap at any point of the game is called the Nim-Sum at that point. “If both A and B play optimally (i.e., they don’t make any mistakes), then the player starting first is guaranteed to win if the Nim-Sum at the beginning of the game is non-zero. Otherwise, if the Nim-Sum evaluates to zero, then player A will definitely lose.” For the proof of the above theorem, see: https://en.wikipedia.org/wiki/Nim#Proof_of_the_winning_formula Optimal Strategy: If the XOR sum of ‘n’ numbers is already zero, then there is no way to make the XOR sum zero by reducing a single number. If the XOR sum of ‘n’ numbers is non-zero, then there is at least one move that reduces a number so that the XOR sum becomes zero. Initially, two cases could exist. Case 1: Initial Nim Sum is zero. As we know, in this case, if played optimally, B wins, which means B would always prefer a Nim sum of zero for A’s turn. So, as the Nim Sum is initially zero, whatever number of items A removes, the new Nim Sum will be non-zero (as mentioned above). Also, as B would prefer a Nim sum of zero for A’s turn, he will then play a move so as to make the Nim Sum zero again (which is always possible, as mentioned above). The game will run as long as there are items in any of the piles, and in each of their respective turns A will make the Nim sum non-zero and B will make it zero again; eventually there will be no elements left, and B, being the one to pick the last, wins the game.
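The winner-prediction theorem above can be checked in a few lines of Python (a quick sketch, separate from the article’s C/C++ implementation below):

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    # XOR of all pile sizes: the Nim-Sum of the position
    return reduce(xor, piles, 0)

def first_player_wins(piles):
    # By the theorem above, the first player wins iff the Nim-Sum is non-zero
    return nim_sum(piles) != 0

# The two opening positions played above
print(nim_sum([3, 4, 5]))  # 2 -> the first player (A) wins
print(nim_sum([1, 4, 5]))  # 0 -> the first player (A) loses
```

This matches both games played earlier: 3 XOR 4 XOR 5 = 2 (first player wins) and 1 XOR 4 XOR 5 = 0 (first player loses).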
It is evident from the above explanation that the optimal strategy for each player is to make the Nim Sum zero for his opponent on each of their turns, which will not be possible if it is already zero. Case 2: Initial Nim Sum is non-zero. Now, going by the optimal approach, A would make the Nim Sum zero (which is possible, as the initial Nim sum is non-zero, as mentioned above). Now, on B’s turn, as the Nim sum is already zero, whatever number B picks, the Nim sum will become non-zero, and A can pick a number to make the Nim sum zero again. This will go on as long as there are items available in any pile, and A will be the one to pick the last item. So, as discussed in the above cases, it should be obvious now that the optimal strategy for any player is to make the Nim sum zero if it is non-zero; if it is already zero, then whatever move the player makes can be countered. Let us apply the above theorem to the games played above. In the first game, A started first and the Nim-Sum at the beginning of the game was 3 XOR 4 XOR 5 = 2, a non-zero value, and hence A won. Whereas in the second game-play, when the initial configuration of the piles was 1, 4, and 5 and A started first, A was destined to lose, as the Nim-Sum at the beginning of the game was 1 XOR 4 XOR 5 = 0. Implementation: In the program below, we play the Nim-Game between the computer and a human (user). The program uses two functions: knowWinnerBeforePlaying(): tells the result before playing. playGame(): plays the full game and finally declares the winner. The function playGame() doesn’t take input from the human (user); instead, it uses the rand() function to randomly pick a pile and randomly remove any number of stones from the picked pile. The program below can be modified to take input from the user by removing the rand() calls and inserting cin or scanf() functions. C++ C /* A C++ program to implement Game of Nim.
The program assumes that both players are playing optimally */#include <iostream>#include <math.h>using namespace std; #define COMPUTER 1#define HUMAN 2 /* A Structure to hold the two parameters of a move A move has two parameters-1) pile_index = The index of pile from which stone is going to be removed2) stones_removed = Number of stones removed from the pile indexed = pile_index */struct move{ int pile_index; int stones_removed;}; /*piles[] -> Array having the initial count of stones/coins in each piles before the game has started.n -> Number of piles The piles[] are having 0-based indexing*/ // A C++ function to output the current game state.void showPiles (int piles[], int n){ int i; cout <<"Current Game Status -> "; for (i=0; i<n; i++) cout << " " << piles[i]; cout <<"\n"; return;} // A C++ function that returns True if game has ended and// False if game is not yet overbool gameOver(int piles[], int n){ int i; for (i=0; i<n; i++) if (piles[i]!=0) return (false); return (true);} // A C++ function to declare the winner of the gamevoid declareWinner(int whoseTurn){ if (whoseTurn == COMPUTER) cout <<"\nHUMAN won\n\n"; else cout <<"\nCOMPUTER won\n\n"; return;} // A C++ function to calculate the Nim-Sum at any point// of the game.int calculateNimSum(int piles[], int n){ int i, nimsum = piles[0]; for (i=1; i<n; i++) nimsum = nimsum ^ piles[i]; return(nimsum);} // A C++ function to make moves of the Nim Gamevoid makeMove(int piles[], int n, struct move * moves){ int i, nim_sum = calculateNimSum(piles, n); // The player having the current turn is in a winning // position. So he/she/it plays optimally and tries to make // the Nim-Sum 0 if (nim_sum != 0) { for (i=0; i<n; i++) { // If this is not an illegal move // then make this move.
if ((piles[i] ^ nim_sum) < piles[i]) { (*moves).pile_index = i; (*moves).stones_removed = piles[i]-(piles[i]^nim_sum); piles[i] = (piles[i] ^ nim_sum); break; } } } // The player having the current turn is in a losing // position, so he/she/it can only wait for the opponent // to make a mistake (which doesn't happen in this program, // as both players are playing optimally). He randomly // chooses a non-empty pile and randomly removes a few stones // from it. If the opponent doesn't make a mistake, then it // doesn't matter which pile this player chooses, as he is // destined to lose this game. // If you want to input yourself then remove the rand() // functions and modify the code to take inputs. // But remember, you still won't be able to change your // fate/prediction. else { // Create an array to hold indices of non-empty piles int non_zero_indices[n], count; for (i=0, count=0; i<n; i++) if (piles[i] > 0) non_zero_indices[count++] = i; // Pick a random NON-EMPTY pile; using rand() % count as the // pile index directly could select an empty pile and cause a // division by zero in the next line (*moves).pile_index = non_zero_indices[rand() % count]; (*moves).stones_removed = 1 + (rand() % (piles[(*moves).pile_index])); piles[(*moves).pile_index] = piles[(*moves).pile_index] - (*moves).stones_removed; if (piles[(*moves).pile_index] < 0) piles[(*moves).pile_index]=0; } return;} // A C++ function to play the Game of Nimvoid playGame(int piles[], int n, int whoseTurn){ cout <<"\nGAME STARTS\n\n"; struct move moves; while (gameOver (piles, n) == false) { showPiles(piles, n); makeMove(piles, n, &moves); if (whoseTurn == COMPUTER) { cout <<"COMPUTER removes " << moves.stones_removed << " stones from pile at index " << moves.pile_index << endl; whoseTurn = HUMAN; } else { cout <<"HUMAN removes " << moves.stones_removed << " stones from pile at index " << moves.pile_index << endl; whoseTurn = COMPUTER; } } showPiles(piles, n); declareWinner(whoseTurn); return;} void knowWinnerBeforePlaying(int piles[], int n, int whoseTurn){ cout <<"Prediction before playing the game -> "; if (calculateNimSum(piles, n) !=0) { if (whoseTurn == COMPUTER) cout <<"COMPUTER
will win\n"; else cout <<"HUMAN will win\n"; } else { if (whoseTurn == COMPUTER) cout <<"HUMAN will win\n"; else cout <<"COMPUTER will win\n"; } return;} // Driver program to test above functionsint main(){ // Test Case 1 int piles[] = {3, 4, 5}; int n = sizeof(piles)/sizeof(piles[0]); // We will predict the results before playing // The COMPUTER starts first knowWinnerBeforePlaying(piles, n, COMPUTER); // Let us play the game with COMPUTER starting first // and check whether our prediction was right or not playGame(piles, n, COMPUTER); /* Test Case 2 int piles[] = {3, 4, 7}; int n = sizeof(piles)/sizeof(piles[0]); // We will predict the results before playing // The HUMAN(You) starts first knowWinnerBeforePlaying (piles, n, COMPUTER); // Let us play the game with COMPUTER starting first // and check whether our prediction was right or not playGame (piles, n, HUMAN); */ return(0);} // This code is contributed by shivanisinghss2110 /* A C program to implement Game of Nim. The program assumes that both players are playing optimally */#include <stdio.h>#include <stdlib.h>#include <stdbool.h> #define COMPUTER 1#define HUMAN 2 /* A Structure to hold the two parameters of a move A move has two parameters- 1) pile_index = The index of pile from which stone is going to be removed 2) stones_removed = Number of stones removed from the pile indexed = pile_index */struct move{ int pile_index; int stones_removed;}; /* piles[] -> Array having the initial count of stones/coins in each piles before the game has started. 
n -> Number of piles The piles[] are having 0-based indexing*/ // A C function to output the current game state.void showPiles (int piles[], int n){ int i; printf ("Current Game Status -> "); for (i=0; i<n; i++) printf ("%d ", piles[i]); printf("\n"); return;} // A C function that returns True if game has ended and// False if game is not yet overbool gameOver(int piles[], int n){ int i; for (i=0; i<n; i++) if (piles[i]!=0) return (false); return (true);} // A C function to declare the winner of the gamevoid declareWinner(int whoseTurn){ if (whoseTurn == COMPUTER) printf ("\nHUMAN won\n\n"); else printf("\nCOMPUTER won\n\n"); return;} // A C function to calculate the Nim-Sum at any point// of the game.int calculateNimSum(int piles[], int n){ int i, nimsum = piles[0]; for (i=1; i<n; i++) nimsum = nimsum ^ piles[i]; return(nimsum);} // A C function to make moves of the Nim Gamevoid makeMove(int piles[], int n, struct move * moves){ int i, nim_sum = calculateNimSum(piles, n); // The player having the current turn is on a winning // position. So he/she/it play optimally and tries to make // Nim-Sum as 0 if (nim_sum != 0) { for (i=0; i<n; i++) { // If this is not an illegal move // then make this move. if ((piles[i] ^ nim_sum) < piles[i]) { (*moves).pile_index = i; (*moves).stones_removed = piles[i]-(piles[i]^nim_sum); piles[i] = (piles[i] ^ nim_sum); break; } } } // The player having the current turn is on losing // position, so he/she/it can only wait for the opponent // to make a mistake(which doesn't happen in this program // as both players are playing optimally). He randomly // choose a non-empty pile and randomly removes few stones // from it. If the opponent doesn't make a mistake,then it // doesn't matter which pile this player chooses, as he is // destined to lose this game. // If you want to input yourself then remove the rand() // functions and modify the code to take inputs. // But remember, you still won't be able to change your // fate/prediction. 
else { // Create an array to hold indices of non-empty piles int non_zero_indices[n], count; for (i=0, count=0; i<n; i++) if (piles[i] > 0) non_zero_indices[count++] = i; // Pick a random NON-EMPTY pile; using rand() % count as the // pile index directly could select an empty pile and cause a // division by zero in the next line (*moves).pile_index = non_zero_indices[rand() % count]; (*moves).stones_removed = 1 + (rand() % (piles[(*moves).pile_index])); piles[(*moves).pile_index] = piles[(*moves).pile_index] - (*moves).stones_removed; if (piles[(*moves).pile_index] < 0) piles[(*moves).pile_index]=0; } return;} // A C function to play the Game of Nimvoid playGame(int piles[], int n, int whoseTurn){ printf("\nGAME STARTS\n\n"); struct move moves; while (gameOver (piles, n) == false) { showPiles(piles, n); makeMove(piles, n, &moves); if (whoseTurn == COMPUTER) { printf("COMPUTER removes %d stones from pile " "at index %d\n", moves.stones_removed, moves.pile_index); whoseTurn = HUMAN; } else { printf("HUMAN removes %d stones from pile at " "index %d\n", moves.stones_removed, moves.pile_index); whoseTurn = COMPUTER; } } showPiles(piles, n); declareWinner(whoseTurn); return;} void knowWinnerBeforePlaying(int piles[], int n, int whoseTurn){ printf("Prediction before playing the game -> "); if (calculateNimSum(piles, n) !=0) { if (whoseTurn == COMPUTER) printf("COMPUTER will win\n"); else printf("HUMAN will win\n"); } else { if (whoseTurn == COMPUTER) printf("HUMAN will win\n"); else printf("COMPUTER will win\n"); } return;} // Driver program to test above functionsint main(){ // Test Case 1 int piles[] = {3, 4, 5}; int n = sizeof(piles)/sizeof(piles[0]); // We will predict the results before playing // The COMPUTER starts first knowWinnerBeforePlaying(piles, n, COMPUTER); // Let us play the game with COMPUTER starting first // and check whether our prediction was right or not playGame(piles, n, COMPUTER); /* Test Case 2 int piles[] = {3, 4, 7}; int n = sizeof(piles)/sizeof(piles[0]); // We will predict the results before playing // The HUMAN(You) starts first knowWinnerBeforePlaying (piles, n, COMPUTER); // Let us play the game with COMPUTER
starting first // and check whether our prediction was right or not playGame (piles, n, HUMAN); */ return(0);} Prediction before playing the game -> COMPUTER will win GAME STARTS Current Game Status -> 3 4 5 COMPUTER removes 2 stones from pile at index 0 Current Game Status -> 1 4 5 HUMAN removes 3 stones from pile at index 1 Current Game Status -> 1 1 5 COMPUTER removes 5 stones from pile at index 2 Current Game Status -> 1 1 0 HUMAN removes 1 stones from pile at index 1 Current Game Status -> 1 0 0 COMPUTER removes 1 stones from pile at index 0 Current Game Status -> 0 0 0 COMPUTER won References: https://en.wikipedia.org/wiki/Nim This article is contributed by Rachit Belwariar. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
[ { "code": null, "e": 24788, "s": 24760, "text": "\n04 May, 2020" }, { "code": null, "e": 24860, "s": 24788, "text": "We strongly recommend to refer below article as a prerequisite of this." }, { "code": null, "e": 24909, "s": 24860, "text": "Combinatorial Game...
Change Column Data Type in Python Pandas | Towards Data Science
Working with data is rarely straightforward. Mostly, one needs to perform various transformations on the imported dataset to make it easy to analyze. In all of my projects, pandas never detects the correct data type for all the columns of the imported dataset. But at the same time, Pandas offers a range of methods to easily convert the column data types. Here, you will get all the methods for changing the data type of one or more columns in Pandas, along with a comparison amongst them. Throughout the read, the resources are indicated with 📚, the shortcuts are indicated with ⚡️ and the takeaways are denoted by 📌. Don’t forget to check out an interesting 💡 project idea at the end of this read. You can quickly follow along with this Notebook 📌. To make it easier to understand, let’s create a simple DataFrame. Using this example, it will be much easier to understand how to change the data type of columns in Pandas. This method is used to assign a specific data type to a DataFrame column. Let’s assign int64 as the data type of the column Year. With the commands .head() and .info(), the resulting DataFrame can be quickly reviewed. df1 = df.copy()df1["Year"] = df1["Year"].astype("int64")df1.head()df1.info() Similarly, the column can be changed to any of the available data types in Python. However, if the data type is not suitable for the values of the column, by default this method will throw a ValueError. Hate ValueErrors??? Pandas has the solution. The 2nd optional argument in this method, i.e. errors, gives you the freedom to deal with the errors. This option defaults to raise, meaning raise the errors and do not return any output. Simply assign ‘ignore’ to this argument to ignore the errors and return the original value. df1["Car"] = df1["Car"].astype("int64", errors='ignore') ❓ Want to change the data type of all the columns in one go ❓ ⚡️ Just pass the dictionary of column name & data type pairs to this method and the problem is solved.
df1 = df1.astype({"Year": "complex", "Rating": "float64",\ "Car": 'int32'}, errors='ignore') Much simpler, assign a single data type to all the columns by directly passing the data type in astype() , just like the below example. df1 = df1.astype("int64", errors='ignore')df1.head()df1.info() As shown in the above picture, the Dtype of columns Year and Rating is changed to int64, whereas the original data types of other non-numeric columns are returned without throwing the errors. 📚 pandas.DataFrame.astype() Well well, there is no such method called pandas.to_DataType(), however, if the word DataType is replaced by the desired data type, you can get the below 2 methods. This method is used to convert the data type of the column to the numerical one. As a result, the float64 or int64 will be returned as the new data type of the column based on the values in the column. df2 = df.copy()df2["Rating"]=pd.to_numeric(df2["Rating"])df2.info() Here the column gets converted to the DateTime data type. This method accepts 10 optional arguments to help you to decide how to parse the dates. df2 = df.copy()df2["RealDate"] = pd.to_datetime(df2["Service"])df2.info() ❓ Need to change the data types of multiple columns at a time ❓ ⚡ ️Use the method .apply() df2[["Rating", "Year"]] = df2[["Rating",\ "Year"]].apply(pd.to_numeric) Similar to pandas.DataFrame.astype() the method pandas.to_numeric() also gives you the flexibility to deal with the errors. 📚 pandas.to_numeric()📚 pandas.to_datetime() This method will automatically detect the best suitable data type for the given column. By default, all the columns with Dtypes as object will be converted to strings. df3 = df.copy()dfn = df3.convert_dtypes()dfn.info() As per my observation, this method offers poor control over the data type conversion 📚 pandas.DataFrame.convert_dtypes() Summing up, In this quick read, I demonstrated how the data type of single or multiple columns can be changed quickly. 
I frequently use the method pandas.DataFrame.astype(), as it provides better control over the different data types and has a minimum of optional arguments. Certainly, based on analysis requirements, different methods can be used; for example, for converting the data type to datetime64(ns), the method pandas.to_datetime() is much more straightforward. Become a Medium member today & get ⚡ unlimited ⚡ access to all the Medium stories. Sign up here and join my email subscriptions. When you sign up here and choose to become a paid Medium member, I will get a portion of your membership fee as a reward. It can be a good idea to start with a new dataset, assess and clean it by practicing Data Wrangling techniques, and store it in a SQL Database to finally visualize it in Power BI. Additionally, this project idea can be implemented with the resources given in it. As I always say, I am open to constructive feedback and knowledge sharing through LinkedIn.
[ { "code": null, "e": 322, "s": 172, "text": "Working with data is rarely straightforward. Mostly one needs to perform various transformations on the imported dataset, to make it easy to analyze." }, { "code": null, "e": 527, "s": 322, "text": "In all of my projects, pandas never ...
Predicting Cancer with Logistic Regression in Python | by Andrew Hershy | Towards Data Science
In my first logistic regression analysis, we merely scratched the surface. Only high-level concepts and a bivariate model example were discussed. In this analysis, we will look at more challenging data and learn more advanced techniques and interpretations. 1. Data Background 2. Data Exploration/Cleaning 3. Data Visualization 4. Building the Model 5. Testing the Model Measuring certain protein levels in the body has been proven to be predictive in diagnosing cancer growth. Doctors can perform tests to check these protein levels. We have a sample of 255 patients and would like to gain information with regard to 4 proteins and their potential relationships with cancer growth. We know: The concentration of each protein measured per patient. Whether or not each patient has been diagnosed with cancer (0 = no cancer; 1 = cancer). Our goal is: To predict whether future patients have cancer by extracting information from the relationship between protein levels and cancer in our sample. The 4 proteins we’ll be looking at: Alpha-fetoprotein (AFP) Carcinoembryonic antigen (CEA) Cancer Antigen 125 (CA125) Cancer Antigen 50 (CA50) I received this data set to use for educational purposes from the MBA program @UAB. Let’s jump into the analysis by pulling in the data and importing the necessary modules. %matplotlib inlineimport numpy as npimport pandas as pdimport matplotlib.pyplot as pltfrom sklearn.model_selection import train_test_splitimport seaborn as snsdf = pd.read_excel(r"C:\Users\Andrew\Desktop\cea.xlsx")df.head() df.head() gives us the first 5 rows of our data set. Each row is a patient and each column contains a descriptive attribute. Class (Y) describes if the patient has no cancer (0) or has cancer (1). The next 4 columns are the protein levels found in that patient’s bloodstream. df.describe() We can retrieve some basic information about the sample from the describe method. There are 255 rows in this data set, with varying std and means.
An insight worth mentioning is that the 4 proteins all have lower median (50%) values compared to their means. This implies that the majority of protein levels are small. The 4 proteins additionally have high ranges, which implies there are high-value outliers. For example, if we look at AFP, the mean is 4.58, the median is 3.86, and the highest value is 82.26. df.isnull().sum() It’s always good to check if there are null (nonexistent) values in the data. This can be addressed in various ways. Luckily, we don’t have to worry about nulls this time. Let’s visualize each of our variables and hypothesize what could be going on based on what we see: yhist = plt.hist('class (Y)', data = df, color='g')plt.xlabel('Diagnosis (1) vs no diagnosis (0)')plt.ylabel('Quantity')plt.title('Class (Y) Distribution') There are more cancer-free patients in this sample. The mean of the “class (Y)” variable as shown in Figure 2 is 0.44. #setting the axesaxes = plt.axes()axes.set_xlim([0,(df['AFP'].max())])#making histogram with 20 binsplt.hist('AFP', data = df, bins = 20)plt.xlabel('AFP Level')plt.ylabel('Quantity')plt.title('AFP Distribution') As concluded earlier, there are a relatively high number of patients with low levels of AFP. #color palettepal = sns.color_palette("Set1")#setting variable for max level of protein in datasetlim = df['AFP'].max()#20 bins need 21 edges; np.linspace requires an integer countbins = np.linspace(0, lim, 21)#creating new column in df with bin categories per featuredf['AFP_binned'] = pd.cut(df['AFP'], bins)#creating a crosstab stacked bar chart variablechart = pd.crosstab(df['AFP_binned'],df['class (Y)'])#normalizing chart and plotting chartchart.div(chart.sum(1).astype(float), axis=0).plot(kind='bar', color = pal,stacked=True)plt.xlabel('Bins')plt.ylabel('Quantity')plt.title('Normalized Stacked Bar Chart: AFP vs Class(Y)') Complementing the distribution histogram, the stacked bar chart above displays the proportion of 1’s to 0’s as the AFP levels increase.
This chart, like the distribution histogram, is also separated into 20 bins. Combining our knowledge of the distribution and the proportions of our target variable above, we can intuitively determine there likely isn’t much predictive knowledge to be gained from this protein. Let’s take it step by step. The majority of patients have an AFP value of under 10, which is shown in the first 2 bars in Figure 4. Because the majority of patients are in those first 2 bars, the change in Y between them in Figure 5 matters more than the changes in Y among the other bars. The proportion of cancerous patients increases slightly from bar 1 to bar 2. The proportion of cancerous patients decreases from bar 2 to 3. After bar 3, there are so few patients left to analyze that they have little effect on the trend. From what we can see here, the target variable looks mostly independent of changes in AFP. The most significant change (bar 1 to 2) is very slight, and the changes after that are not in the same direction. Let’s see how the other proteins look. CEA appears to have a different story. Figure 6 shows the distribution shape is similar to AFP; however, Figure 7 shows different changes in cancer rates. Just like with AFP (due to the distribution shape), the most significant cancer change would be between bars 1 and 2. The change from bar 1 to bar 2 went from around 63% noncancerous to 18% noncancerous (or to put that another way, 37% cancerous to 82% cancerous). Additionally, the change from bin 2 to bin 3 is in the same direction: more cancer. The outliers starting at bin 5 with 100% cancer reinforce the trend that higher CEA likely indicates cancer. CA125 is a bit trickier. Bar 1 to 2 indicates, like CEA, that a higher level of this protein could cause cancer. However, it appears there could be 2 trends in this one. The trend reverses as almost all of the latter bins turn noncancerous. We will look at this variable in more detail later. CA50 doesn’t look promising.
The first 4 bins appear to indicate a trend of higher cancer rates. However, the trend looks to reverse in bins 7–9. There is likely a small or negligible relationship between CA50 levels and cancer. Let’s put together this model and see what the regression can tell us. #importing moduleimport statsmodels.api as sm#setting up X and ycols = ['AFP','CEA','CA125','CA50']X = df[cols]y = df['class (Y)']#filling in the statsmodels Logit methodlogit_model = sm.Logit(y,X)result = logit_model.fit()print(result.summary()) The highlighted values are what matter from this report: our 4 independent variables and their p-values. AFP and CA50 have high p-values. If our alpha is 0.05, then AFP and CA50 have p-values too high to reject our null hypothesis (our null hypothesis is that the protein levels have no impact on cancer rates). CEA and CA125 pass the test, however, and are determined to be significant. Both AFP and CA50 were hypothesized to be insignificant based on what we saw in our stacked bar charts, so this makes sense. Let’s take those variables out and run the regression a second time: #removed variables with p-values above the 0.05 alpha thresholdcols = ['CEA','CA125']X = df[cols]y = df['class (Y)']logit_model = sm.Logit(y,X)result = logit_model.fit()print(result.summary()) With our final coefficients, we have more insight into the relationship between each remaining protein and cancer. CEA has a positive relationship about 3 times stronger than CA125’s negative relationship. As CEA increases, the likelihood of cancer increases. As CA125 increases, the likelihood of cancer decreases. We will divide our sample data into training and testing to test our regression results.
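With coefficients in hand, a single patient’s predicted probability is just the logistic (sigmoid) function applied to the linear combination of the inputs. A minimal sketch, using made-up coefficient values rather than the ones from the summary above:

```python
import math

def predict_prob(cea, ca125, b_cea, b_ca125):
    # Log-odds is the linear combination of the inputs; sm.Logit(y, X) was fit
    # without an intercept above, so there is no constant term here either.
    z = b_cea * cea + b_ca125 * ca125
    return 1.0 / (1.0 + math.exp(-z))  # the sigmoid (inverse-logit) function

# Hypothetical coefficients for illustration: positive for CEA, negative for CA125
print(round(predict_prob(cea=5.0, ca125=2.0, b_cea=0.3, b_ca125=-0.1), 4))  # 0.7858
```

With a positive CEA coefficient and a negative CA125 coefficient, raising CEA pushes the probability toward 1 while raising CA125 pushes it toward 0, matching the interpretation above.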
from sklearn.linear_model import LogisticRegressionfrom sklearn import metrics#shuffling dfdf = df.sample(frac=1).reset_index(drop=True)#redefining columns cols= ['CEA','CA125']X= df[cols]y = df['class (Y)']#Dividing into training(70%) and testing(30%)X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)#Running new regression on training datalogreg = LogisticRegression()logreg.fit(X_train, y_train)#Calculating the accuracy of the training model on the testing dataaccuracy = logreg.score(X_test, y_test)print('The accuracy is: ' + str(accuracy *100) + '%') A good way to visualize the accuracy calculated above is with the use of a confusion matrix. Below is the conceptual framework for what a confusion matrix is. Edit: I was talking with a friend in biostats about my analysis, and the convention in that field is that the disease is attributed as being positive. I arbitrarily set cancer as negative because I didn’t know that at the time. “Confusing” is the key word for a lot of people. Try to look at one line at a time: The top row is a good place to start. This row is telling us how many instances were predicted to be benign. If we look at the columns, we can see the split of the actual values within that prediction. Just remember, rows are predictions and columns are the actual values. from sklearn.metrics import confusion_matrix#generating class predictions for the test set (needed below)y_pred = logreg.predict(X_test)confusion_matrix = confusion_matrix(y_test, y_pred)print(confusion_matrix) Match the matrix above to Figure 14 to learn what it is saying: 39 of our model’s guesses were True Positive: The model thought the patient had no cancer, and they indeed had no cancer. 18 of our model’s guesses were True Negative: The model thought the patient had cancer, and they indeed had cancer.
14 of our model’s guesses were False Negative: The model thought the patient had cancer, but they actually didn’t have cancer. 6 of our model’s guesses were False Positive: The model thought the patient had no cancer, but they actually did have cancer. 30% of our total data went to the testing group, which leaves 255(0.3) ≈ 77 instances that were tested. The sum of the matrix is 77. Divide the “True” numbers by the total and that will give the accuracy of our model: 57/77 = 74.03%. Keep in mind, we randomly shuffled the data before performing this test. I ran the regression a few times and got anywhere between 65% and 85% accuracy. Lastly, we are going to perform a Receiver operating characteristic (ROC) analysis as another way of testing our model. The 2 purposes of this test are to: Determine where the best “cut off” point is. Determine how well the model classifies, through another metric called “Area under curve” (AUC). We will be creating our ROC curve from scratch. Below is all the code used to format a new dataframe to calculate the ROC, cutoff point, and AUC.
#Getting predicted probabilities for the test set (needed below)y_pred_prob = logreg.predict_proba(X_test)#Formatting y_test and y_predicted probabilities for ROC curvey_pred_prob = pd.DataFrame(y_pred_prob)y_1_prob = y_pred_prob[1]y_test_1 = y_test.reset_index()y_test_1 = y_test_1['class (Y)']#Forming new df for ROC Curve and Accuracy curvedf = pd.DataFrame({ 'y_test': y_test_1, 'model_probability': y_1_prob})df = df.sort_values('model_probability')#Creating 'True Positive', 'False Positive', 'True Negative' and 'False Negative' columns df['tp'] = (df['y_test'] == int(0)).cumsum()df['fp'] = (df['y_test'] == int(1)).cumsum()total_0s = df['y_test'].sum()total_1s = abs(total_0s - len(df))df['total_1s'] = total_1sdf['total_0s']= total_0sdf['total_instances'] = df['total_1s'] + df['total_0s']df['tn'] = df['total_0s'] - df['fp']df['fn'] = df['total_1s'] - df['tp']df['fp_rate'] = df['fp'] / df['total_0s']df['tp_rate'] = df['tp'] / df['total_1s']#Calculating accuracy columndf['accuracy'] = (df['tp'] + df['tn']) / (df['total_1s'] + df['total_0s'])#Deleting unnecessary columnsdf.reset_index(inplace = True)del df['total_1s']del df['total_0s']del df['total_instances']del df['index']df To understand what is going on in the dataframe below, let’s analyze it, row by row. Index: This dataframe is sorted on the model_probability, so I reindexed for convenience. CA125 and CEA: The original testing data protein levels. model_probability: This column is from our training data’s logistic model outputting its probabilistic prediction of being classified as “1” (cancerous) based on the input testing protein levels. The first row is the least-likely instance to be classified as cancerous, with its high CA125 and low CEA levels. y_test: The actual classifications of the testing data we are checking our model’s performance with. The rest of the columns are based solely on “y_test”, not our model’s predictions. Think of these values as their own confusion matrices. These will help us determine where the optimal cut off point will be later. tp (True Positive): This column starts at 0.
If y_test is '0' (benign), this value increases by 1. It is a cumulative tracker of all the potential true positives. The first row is an example of this.

fp (False Positive): This column starts at 0. If y_test is '1' (cancerous), this value increases by 1. It is a cumulative tracker of all potential false positives. The fourth row is an example of this.

tn (True Negative): This column starts at 32 (the total number of 1's in the testing set). If y_test is '1' (cancerous), this value decreases by 1. It is a cumulative tracker of all potential true negatives. The fourth row is an example of this.

fn (False Negative): This column starts at 45 (the total number of 0's in the testing set). If y_test is '0' (benign), this value decreases by 1. It is a cumulative tracker of all potential false negatives. The fourth row is an example of this.

fp_rate (False Positive Rate): This is calculated by taking the row's false positive count and dividing it by the total number of positives (45, in our case). It lets us know the number of false positives we could classify by setting the cutoff point at that row. We want to keep this as low as possible.

tp_rate (True Positive Rate): Also known as sensitivity, this is calculated by taking the row's true positive count and dividing it by the total number of positives. It lets us know the number of true positives we could classify by setting the cutoff point at that row. We want to keep this as high as possible.

accuracy: The sum of true positives and true negatives divided by the total instances (77, in our case). Row by row, we are calculating the potential accuracy based on the possibilities of our confusion matrices.

I pasted the entire dataframe because it's worthwhile to study it for a while and make sense of all the moving pieces. After looking it over, try to find the highest accuracy percentage. If you can locate that, you can match it to the corresponding model_probability to discover the optimal cut-off point for our data.
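To make the cumulative construction concrete, here is a tiny self-contained sketch on made-up labels (not the study's data); as in the article's code, the class labelled 0 is the one counted as "positive":

```python
import pandas as pd

# Toy labels, already sorted by model probability (NOT the study's data)
df = pd.DataFrame({"y_test": [0, 0, 1, 0, 1, 1]})

df["tp"] = (df["y_test"] == 0).cumsum()   # cumulative 0s seen so far
df["fp"] = (df["y_test"] == 1).cumsum()   # cumulative 1s seen so far
total_0s = int((df["y_test"] == 0).sum())
total_1s = int((df["y_test"] == 1).sum())

df["tp_rate"] = df["tp"] / total_0s
df["fp_rate"] = df["fp"] / total_1s
# Accuracy if the cutoff were placed just after each row:
# 0s at or before the row, plus 1s after it, over all instances
df["accuracy"] = (df["tp"] + (total_1s - df["fp"])) / len(df)

print(df[["tp", "fp", "tp_rate", "fp_rate", "accuracy"]])
```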
#Plotting
plt.plot(df['model_probability'], df['accuracy'], color = 'c')
plt.xlabel('Model Probability')
plt.ylabel('Accuracy')
plt.title('Optimal Cutoff')
#Arrow and star
plt.plot(0.535612, 0.753247, 'r*')
ax = plt.axes()
ax.arrow(0.41, 0.625, 0.1, 0.1, head_width=0.01, head_length=0.01, fc='k', ec='k')
plt.show()

The model probability is 54% where the accuracy is highest, at 75%. It may seem counter-intuitive, but that means if we use 54% instead of 50% when classifying a patient as cancerous, the model will actually be more accurate. If we want to maximize accuracy, we would set the threshold to 54%; however, due to the extreme nature of cancer, it is probably wise to lower our threshold to below 50% to ensure patients who may have cancer are checked out anyway. In other words, false negatives are more consequential than false positives when it comes to cancer!

Lastly, let's graph the ROC curve and find the AUC:

#Calculating AUC
AUC = 1 - (np.trapz(df['fp_rate'], df['tp_rate']))
#Plotting ROC/AUC graph
plt.plot(df['fp_rate'], df['tp_rate'], color = 'k', label='ROC Curve (AUC = %0.2f)' % AUC)
#Plotting AUC=0.5 red line
plt.plot([0, 1], [0, 1], 'r--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")

The black ROC curve shows the trade-off between our testing data's true positive rate and false positive rate. The dotted red line cutting through the center of the graph provides a sense of what the worst possible model would look like as an ROC curve. The closer the ROC line gets to the top-left side, the more predictive our model is. The closer it resembles the dotted red line, the less predictive it is. That's where the area under curve (AUC) comes in. AUC is, like it sounds, the area of the space that lies under the ROC curve. Intuitively, the closer this is to 1, the better our classification model is. The AUC of the dotted line is 0.5. The AUC of a perfect model would be 1.
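The AUC line above integrates with the trapezoid rule (np.trapz); the subtraction from 1 compensates for the article integrating fp_rate over tp_rate rather than the other way around. Here is a minimal illustration on made-up ROC points, with the trapezoid rule written out explicitly:

```python
import numpy as np

# Made-up ROC points (not the study's data), both rates ascending
fp_rate = np.array([0.0, 0.1, 0.4, 1.0])
tp_rate = np.array([0.0, 0.6, 0.9, 1.0])

# Trapezoid rule: sum of segment widths times average heights
# (this is what np.trapz / np.trapezoid computes)
widths = np.diff(fp_rate)
avg_heights = (tp_rate[:-1] + tp_rate[1:]) / 2
auc = float(np.sum(widths * avg_heights))

print(round(auc, 3))  # → 0.825
```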
Our line, with an AUC of 0.82, fares pretty well. Please subscribe if you found this helpful.
[ { "code": null, "e": 428, "s": 171, "text": "In my first logistic regression analysis, we merely scratched the surface. Discussed were only high level concepts and a bivariate model example. In this analysis we will look at more challenging data and learn more advanced techniques and interpretations...
Experimental Design in Data Science | by Benedict Neo | Towards Data Science
Full series
Part 1 - What is Data Science, Big data and the Data Science process
Part 2 - The origin of R, why use R, R vs Python and resources to learn
Part 3 - Version Control, Git & GitHub and best practices for sharing code
Part 4 - The 6 types of Data Analysis
Part 5 - The ability to design experiments to answer your DS questions
Part 6 - P-value & P-hacking
Part 7 - Big Data, its benefits, challenges, and future

This series is based on the Data Science Specialization offered by Johns Hopkins University on Coursera. The articles in this series are notes based on the course, with additional research and topics for my own learning purposes. For the first course, Data Scientist Toolbox, the notes will be separated into 7 parts. Notes on the series can also be found here.

Asking the right questions before solving a problem is a great start; it points you in the right direction in the midst of complexity. But to really solve your problems effectively, you also need proper planning and good experimental design.

Similar to conducting experiments in the scientific method, which entails a concise series of steps for conducting a safe experiment (the names of the equipment, the measurements for certain things), you'll also have to design your Data Science experiments. Only then can a clear path be paved towards breaking down your problem part by part, meticulously, and with the proper methodology.

Here's more about experimental design. It generally means organizing an experiment so that you have the right data to effectively answer your DS questions.

Before you begin solving your DS problems, a few problems can arise. What's the best way to answer the question? What are you measuring and how? What is the right data? How can I collect that data? What tools and libraries do I use? etc. All these questions are a crucial part of a DS experiment and have to be answered right from the start.
Without proper planning and designing, you'll be facing many of these issues throughout your workflow, making it unproductive. So what's the right flow in an experiment?

1. Formulate the question
2. Design the experiment
3. Identify problems and sources of error
4. Collect data

The process starts with clearly formulating your questions before any data collection, then designing the best set-up possible to gather the data to answer your questions, identifying problems or sources of error in your design, and only then, collecting data.

Bad Data → Wrong analysis → Wrong conclusions

Erroneous conclusions can have sweeping effects that trickle down (citations in papers are eventually applied in real medical cases). For studies that have high stakes, such as determining cancer patients' treatment plans, experimental design is crucial. Papers with bad data and wrong conclusions will be retracted and gain a bad reputation.

Independent variable — the variable that is manipulated, not dependent on other variables
Dependent variable — the variable expected to change as a result of changes in the independent variable
Hypothesis — an educated guess as to the relationship between the variables and the outcome of the experiment

Here's an example of experimental design in action, testing the correlation between books read and literacy.

Hypothesis: As books read increases, literacy also increases
X-axis: Books read
Y-axis: Literacy

Experimental set-up: I hypothesize that the literacy level depends on books read
Design of experiment — measure books read and literacy of 100 individuals
Sample size (n) — the number of experimental subjects included in the experiment

Before collecting data to test your hypothesis, you first have to consider the problems that can cause errors in your result, one of them being a confounder.

Confounder — an extraneous variable that may affect the relationship between the dependent and independent variables

For this example, a confounder can be the age variable, because it can affect both the number of books read and literacy.
Any relationship between books read and literacy may be caused by age.

Hypothesis: Books read → Literacy
Confounder: Books read ← Age → Literacy

Having confounders in mind, and designing your experiment to control for those confounders, is crucial so that they won't affect your results and so that you're designing your experiments correctly. There are three ways you can deal with confounders:

Control
Randomization
Replication

Going back to the book-literacy experiment, to control for the effects of age on the result, we can measure the age of each individual to take into account the effects of age on literacy. This creates two groups in your study: (1) a control group and (2) a treatment group, where:

control group → participants of fixed ages
treatment group → participants of a range of ages

The results are compared to know whether the fixed-age group differs in literacy from the group of various ages. Another general example is drug testing: to test the effect of a drug on patients, a control group won't receive the drug, while the treatment group is given the drug, and the effects are compared.

The placebo effect is a bias where a person believes psychologically that a certain treatment is positively affecting them, even though no treatment was given at all. This is commonly seen in medicine, where a placebo drug can substitute the effects of a real drug.
To tackle this bias, subjects are blinded, meaning they won't know which group they are assigned to:

the control group gets a mock treatment (a sugar pill they are told is the drug)
the treatment group gets the actual treatment

If a placebo effect is present, both groups will experience it equally.

Randomization basically assigns individuals randomly to distinct groups. This is great for two reasons: (1) it works even when you don't know the confounder variables, and (2) it lessens the risk of one group being enriched for a confounder.

Taking the drug-testing example: from a sample of subjects, randomization is done to distribute participants over two main groups, one confounded and the other randomized, each with its own control group and treatment group. It's from the randomized group that you can tell whether biases are present.

Replication is basically repeating your experiment, but this time with different subjects. This is very important, as it shows the reproducibility of your experiment. Replication is also necessary mainly because the result of conducting just one experiment might be down to chance, a result of many factors, such as:

confounders being unevenly distributed
systemic error in data collection
outliers

If replication is done (with a new set of data) and it produces the same conclusions, this shows that the experiment is strong and has a good design. Moreover, the heart of replication is all about the variability of data, which relates to the p-value, which has a golden rule of (P ≤ 0.05) that many strive for in statistical hypothesis testing. More about that in the next article.

Building a city like New York required blueprints and very precise planning, which created what you see today. Designing experiments in Data Science should be the same. This is the basics of experimental design, which is fundamentally about precise planning and design to ensure that you have the appropriate data and design for your analysis or studies, so that erroneous conclusions can be prevented.
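The randomized assignment described above can be sketched in a few lines of Python (illustrative only; the participant IDs are made up):

```python
import random

# Hypothetical participant IDs; in a real study these would be recruited subjects
participants = list(range(1, 21))

random.seed(42)              # fixed seed so the split is reproducible
random.shuffle(participants)

# Split the shuffled list evenly into a control and a treatment group
half = len(participants) // 2
control_group = participants[:half]
treatment_group = participants[half:]

print(len(control_group), len(treatment_group))  # → 10 10
```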
A cool mnemonic for the flow is QDPC - Question Data Plan Carefully.

Formulating questions → Designing the experiment → Identifying problems and errors → Collecting data.

Erroneous conclusions will have trickle effects as they're cited and used in real-life applications, especially in medicine.

Independent variable (x-axis) — manipulated and not affected by other variables
Dependent variable (y-axis) — change in x results in y
Hypothesis — educated guess about the relationship between X and Y
Confounder — extraneous variable that affects the relationship between the independent and dependent variables
Control — fixing a confounder in an experiment, or taking it into account and measuring it
Control groups — a group of subjects that have the independent variable measured but don't receive treatment; acts as a control for the experiment, to compare changes and effects
Blinded subjects — participants blindly assigned to groups to test the placebo effect
Placebo effect — one's belief in an effect created by a placebo treatment
Randomization — randomly assign participants to prevent biases and confounding effects on groups
Replication — repeating the experiment to strengthen the conclusion of the result

Further reading: FiveThirtyEight, xkcd comic, Yale statistics experimental design

If you want to be updated with my latest articles, follow me on Medium. Be on the lookout for my next article and remember to stay safe!
[ { "code": null, "e": 590, "s": 172, "text": "Full seriesPart 1 - What is Data Science, Big data and the Data Science processPart 2 - The origin of R, why use R, R vs Python and resources to learnPart 3 - Version Control, Git & GitHub and best practices for sharing code.Part 4 - The 6 types of Data A...
Gerrit - Configure Git
Once you have installed Git, you need to customize the configuration variables to add your personal information. You can get and set the configuration variables by using the Git tool git config along with the -l option (this option prints the current configuration).

git config -l

When you run the above command, you will get the configuration variables as shown in the following image.

You can change the customized information at any time by using the commands again. In the next chapter, you will learn how to configure the user name and user email by using the git config command.
[ { "code": null, "e": 2510, "s": 2238, "text": "Once you have installed Git, you need to customize the configuration variables to add your personal information. You can get and set the configuration variables by using Git tool called git config along with the -l option (this option provides the curre...
8086 program to sort an integer array in ascending order - GeeksforGeeks
29 Oct, 2021

Problem – Write a program in 8086 microprocessor to sort numbers in ascending order in an array of n numbers, where the size "n" is stored at memory address 2000 : 500 and the numbers are stored from memory address 2000 : 501.

Example –

Example explanation:

Pass-1:
F9 F2 39 05
F2 F9 39 05
F2 39 F9 05
F2 39 05 F9 (1 number got fixed)

Pass-2:
F2 39 05 F9
39 F2 05 F9
39 05 F2 F9 (2 numbers got fixed)

Pass-3:
39 05 F2 F9
05 39 F2 F9 (sorted)

Algorithm –

1. Load data from offset 500 to register CL (for count).
2. Travel from the starting memory location to the last and compare two numbers; if the first number is greater than the second number, swap them. The first pass fixes the position of the last number.
3. Decrease the count by 1.
4. Again travel from the starting memory location to (last - 1, with the help of the count) and compare two numbers; if the first number is greater than the second number, swap them. The second pass fixes the position of the last two numbers.
5. Repeat.

Program –

Explanation –

MOV SI, 500: set the value of SI to 500.
MOV CL, [SI]: load data from offset SI to register CL.
DEC CL: decrease value of register CL by 1.
MOV SI, 500: set the value of SI to 500.
MOV CH, [SI]: load data from offset SI to register CH.
DEC CH: decrease value of register CH by 1.
INC SI: increase value of SI by 1.
MOV AL, [SI]: load value from offset SI to register AL.
INC SI: increase value of SI by 1.
CMP AL, [SI]: compares value of register AL and [SI] (AL - [SI]).
JC 41C: jump to address 41C if carry generated.
XCHG AL, [SI]: exchange the contents of register AL and [SI].
DEC SI: decrease value of SI by 1.
XCHG AL, [SI]: exchange the contents of register AL and [SI].
INC SI: increase value of SI by 1.
DEC CH: decrease value of register CH by 1.
JNZ 40F: jump to address 40F if the zero flag is reset.
DEC CL: decrease value of register CL by 1.
JNZ 407: jump to address 407 if the zero flag is reset.
HLT: stop.
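The register-level walk above implements a plain bubble sort. As a cross-check, here is the same algorithm in Python (an illustration, not part of the original 8086 program):

```python
def bubble_sort(nums):
    # Outer loop plays the role of CL: one pass per remaining unsorted element
    for count in range(len(nums) - 1, 0, -1):
        # Inner loop plays the role of CH/SI: walk and swap adjacent pairs
        for i in range(count):
            if nums[i] > nums[i + 1]:
                nums[i], nums[i + 1] = nums[i + 1], nums[i]
    return nums

# Same example values as above (hex bytes), printed in decimal
print(bubble_sort([0xF9, 0xF2, 0x39, 0x05]))  # → [5, 57, 242, 249]
```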
[ { "code": null, "e": 24686, "s": 24658, "text": "\n29 Oct, 2021" }, { "code": null, "e": 24910, "s": 24686, "text": "Problem – Write a program in 8086 microprocessor to sort numbers in ascending order in an array of n numbers, where size “n” is stored at memory address 2000 : 500...
How to Install Packages in R Google Colab | by Fidocia Wima Adityawarman | Towards Data Science
In my previous article, I have tried to explain how to run R in Google Colab. In short, there are two ways: the first is to use R and Python simultaneously in the same runtime using rpy2, and the second is to run R natively.

towardsdatascience.com

But there are several limitations when running the R notebook. The first limitation is related to the version of R used in the runtime. To check what version of R is running at runtime, you can type: version. The limitation I am referring to here is that we cannot choose which version of R to run. Until the time this post was made, the version of R used in the Colab notebook is:

platform       x86_64-pc-linux-gnu
arch           x86_64
os             linux-gnu
system         x86_64, linux-gnu
status
major          4
minor          0.4
year           2021
month          02
day            15
svn rev        80002
language       R
version.string R version 4.0.4 (2021-02-15)
nickname       Lost Library Book

This is different from the Python runtime. You can choose to use Python 2 or Python 3, even though Google is currently trying to stop support for Python 2 on Colab. Python 2 deprecation announcements can be accessed on the Google Colab FAQ page. Here's what it reads:

Does Colab support Python 2? The Python development team has declared that Python 2 will no longer be supported after January 1st, 2020. Colab has stopped updating Python 2 runtimes, and is gradually phasing out support for Python 2 notebooks. We suggest migrating important notebooks to Python 3.

However, we can still access it via this link https://colab.to/py2.

The second limitation of running R natively relates to Google Drive. Mounting and fetching data from Google Drive and converting it to a dataframe is not directly supported if we use R natively in Colab. In my previous article, I have also provided a workaround to retrieve data from Google Drive or BigQuery.

Now, suppose you have successfully run R natively on Colab. Then what libraries or packages are available there? How do you install a package that you need but that is not available by default?
What options are available? In this post, I'll try to break that down.

To get a list of libraries that have been installed, you can run this code:

str(allPackage <- installed.packages())
allPackage[, c(1, 3:5)]

The code above will produce a table like this:

Or you can also run the following to display a help window that contains information about what packages have been installed in this R runtime:

library()

From there we know that some commonly used packages, for example dplyr, broom, and survival, have been installed by default. To install packages that are not available by default, we can use the install.packages() function, as we usually do. But there are times when we will find packages that are not available in CRAN, so we may need to install via GitHub, or a newer version of the package may not yet be available in CRAN. In that case, we can use the devtools::install_github("DeveloperName/PackageName") function.

For example, let's try installing the "rsvg" package, which is required to render SVG images into PDF, PNG, PostScript, or bitmap arrays. We'll run the install.packages("rsvg") code in our runtime. Wait a few moments, and an error message will appear:

Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Warning message in install.packages("rsvg"):
“installation of package ‘rsvg’ had non-zero exit status”

And this, I think, is the third limitation of using R natively in Google Colab. The message (error or warning) that is displayed is not as complete as when we, for example, use RStudio. Non-zero exit status means that the package was not successfully installed on our system or runtime, and it does not say why. This incompleteness is a bit troublesome for us in some ways because the information is not clear. We are forced to find out what causes it in search engines.
In the case of the rsvg package used in the example, it turns out that the problem lies in a system dependency that is needed but does not exist on the system we are using. R packages often depend on system libraries or other software external to R. Interaction with the operating system (the Ubuntu terminal) in the R runtime can be done by using the system() function.

Continuing the example above with the rsvg package, it turns out that the package requires librsvg2-dev to be installed on our system. To install it, we need to run code like this:

system('sudo apt-get install -y librsvg2-dev', intern=TRUE)

Or we could use cat() to get a relatively prettier result:

cat(system('sudo apt-get install -y librsvg2-dev', intern=TRUE), sep = "\n")

Some of the other R packages may have different dependencies. You can install them in the same way as above. Some other libs you might need are libcurl4-openssl-dev, libgsl0ldbl, gsl-bin, libgsl0-dev, g++, gcc, and gfortran-8.

The availability to run R on Google Colab is something to celebrate, despite some limitations. The packages that Google Colab provides may be enough to fulfill some parts of our work in processing data. This post, as well as the previous one, are just examples of how to get around the limitations that exist in R on Google Colab. Finally, I hope this article is useful for anyone who needs it.
[ { "code": null, "e": 393, "s": 172, "text": "In my previous article, I have tried to explain how to run R in Google Colab. In short, there are two ways, first is to use R and Python simultaneously in the same runtime using rpy2, and the second is to run R natively." }, { "code": null, "e...
How to use IF in stored procedure and select in MySQL?
You can use IF in a stored procedure and IF() in a select statement as well.

IF() in a select statement:

mysql> select if(0=0,'Hello MySQL','condition is wrong');

This will produce the following output −

+--------------------------------------------+
| if(0=0,'Hello MySQL','condition is wrong') |
+--------------------------------------------+
| Hello MySQL                                |
+--------------------------------------------+
1 row in set (0.00 sec)

The second case, if your condition becomes wrong −

mysql> select if(1=0,'Hello MySQL','condition is wrong');

This will produce the following output −

+--------------------------------------------+
| if(1=0,'Hello MySQL','condition is wrong') |
+--------------------------------------------+
| condition is wrong                         |
+--------------------------------------------+
1 row in set (0.00 sec)

The query to create a stored procedure is as follows. Here, we have used IF to set conditions −

mysql> DELIMITER //
mysql> CREATE PROCEDURE if_demo(value int)
   BEGIN
      IF 1=value THEN
         SELECT "Hello MySQL";
      ELSE
         SELECT "Wrong Condition";
      END IF;
   END
   //
Query OK, 0 rows affected (0.20 sec)
mysql> DELIMITER ;

Now you can call the stored procedure using the call command.

mysql> call if_demo(1);

This will produce the following output −

+-------------+
| Hello MySQL |
+-------------+
| Hello MySQL |
+-------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.01 sec)

If your condition becomes false −

mysql> call if_demo(0);

This will produce the following output −

+-----------------+
| Wrong Condition |
+-----------------+
| Wrong Condition |
+-----------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
[ { "code": null, "e": 1135, "s": 1062, "text": "You can use IF in stored procedure and IF() in select statement as well." }, { "code": null, "e": 1218, "s": 1135, "text": "IF() in select statement mysql> select if(0=0,'Hello MySQL','condition is wrong');" }, { "code": null...
vlcj - Play Video
The vlcj library provides a class which does auto-discovery of the installed VLC player on the system, using the following syntax:

EmbeddedMediaPlayerComponent mediaPlayerComponent = new EmbeddedMediaPlayerComponent();

Now, using media, we can easily load a video in our application using the following syntax −

mediaPlayerComponent.mediaPlayer().media().startPaused(path);

Now, using controls, we can easily play a video in our application using the following syntax −

mediaPlayerComponent.mediaPlayer().controls().play();

Open the project mediaPlayer, as created in the Environment Setup chapter, in Eclipse. Update App.java with the following code −

App.java

package com.tutorialspoint.media;

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.UIManager;
import uk.co.caprica.vlcj.player.component.EmbeddedMediaPlayerComponent;

public class App extends JFrame {
   private static final long serialVersionUID = 1L;
   private static final String TITLE = "My First Media Player";
   private static final String VIDEO_PATH = "D:\\Downloads\\sunset-beach.mp4";
   private final EmbeddedMediaPlayerComponent mediaPlayerComponent;
   private JButton playButton;

   public App(String title) {
      super(title);
      mediaPlayerComponent = new EmbeddedMediaPlayerComponent();
   }

   public void initialize() {
      this.setBounds(100, 100, 600, 400);
      this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      this.addWindowListener(new WindowAdapter() {
         @Override
         public void windowClosing(WindowEvent e) {
            mediaPlayerComponent.release();
            System.exit(0);
         }
      });
      JPanel contentPane = new JPanel();
      contentPane.setLayout(new BorderLayout());
      contentPane.add(mediaPlayerComponent, BorderLayout.CENTER);
      JPanel controlsPane = new JPanel();
      playButton = new JButton("Play");
      controlsPane.add(playButton);
      contentPane.add(controlsPane, BorderLayout.SOUTH);
      playButton.addActionListener(new ActionListener() {
         public void actionPerformed(ActionEvent e) {
            mediaPlayerComponent.mediaPlayer().controls().play();
         }
      });
      this.setContentPane(contentPane);
      this.setVisible(true);
   }

   public void loadVideo(String path) {
      mediaPlayerComponent.mediaPlayer().media().startPaused(path);
   }

   public static void main(String[] args) {
      try {
         UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
      } catch (Exception e) {
         System.out.println(e);
      }
      App application = new App(TITLE);
      application.initialize();
      application.setVisible(true);
      application.loadVideo(VIDEO_PATH);
   }
}

Run the application by right-clicking the file and choosing Run As → Java Application. After a successful startup, if everything is fine, it should display the following result −

Click on the Play button and the video will start playing.
[ { "code": null, "e": 2045, "s": 1923, "text": "vlcj library provides a class which does the auto discovery of installed VLC player in the system using following syntax." }, { "code": null, "e": 2136, "s": 2045, "text": "EmbeddedMediaPlayerComponent mediaPlayerComponent = = new Em...
Find smallest permutation of given number in C++
In this problem, we are given a large number N. Our task is to find the smallest permutation of the given number.

Let's take an example to understand the problem:

Input: N = 4529016
Output: 1024569

A simple solution to the problem is to store the long integer value in a string. Then we sort the string, which gives our result. But if there are any leading zeros, we swap the first zero with the first non-zero value.

Program to illustrate the working of our solution:

#include <bits/stdc++.h>
using namespace std;

string smallestNumPer(string s) {
   sort(s.begin(), s.end());
   int i = 0;
   while (s[i] == '0')
      i++;
   swap(s[0], s[i]);
   return s;
}

int main() {
   string s = "4529016";
   cout << "The number is " << s << endl;
   cout << "The smallest permutation of the number is " << smallestNumPer(s);
   return 0;
}

Output:

The number is 4529016
The smallest permutation of the number is 1024569
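The same sort-then-swap idea can be sketched in Python as well (a minimal illustration; the helper name is mine, not from the article):

```python
def smallest_permutation(n):
    digits = sorted(str(n))  # smallest digits first, so zeros land at the front
    if digits[0] == '0':
        # Find the first non-zero digit and swap it to the front
        i = next(i for i, d in enumerate(digits) if d != '0')
        digits[0], digits[i] = digits[i], digits[0]
    return int(''.join(digits))

print(smallest_permutation(4529016))  # → 1024569
```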
[ { "code": null, "e": 1174, "s": 1062, "text": "In this problem, we are given a large number N. Our task is to find the\nsmallest permutation of a given number." }, { "code": null, "e": 1223, "s": 1174, "text": "Let’s take an example to understand the problem," }, { "code"...
Python program to swap case of English word
Suppose we have a string with English letters. We have to swap the case of the letters, so uppercase will be converted to lowercase and lowercase converted to uppercase.

So, if the input is like s = "PrograMMinG", then the output will be pROGRAmmINg.

To solve this, we will follow these steps −

ret := blank string
for each letter in s, do
   if letter is in uppercase, then
      ret := ret concatenate lowercase equivalent of letter
   otherwise,
      ret := ret concatenate uppercase equivalent of letter
return ret

Let us see the following implementation to get a better understanding −

def solve(s):
   ret = ''
   for letter in s:
      if letter.isupper():
         ret += letter.lower()
      else:
         ret += letter.upper()
   return ret

s = "PrograMMinG"
print(solve(s))

Input: "PrograMMinG"
Output: pROGRAmmINg
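Worth noting: Python's built-in str.swapcase method does the same thing in one call, so the loop above is mainly illustrative:

```python
# Built-in alternative to the hand-written loop
s = "PrograMMinG"
print(s.swapcase())  # → pROGRAmmINg
```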
[ { "code": null, "e": 1224, "s": 1062, "text": "Suppose we have a string with English letters. We have to swap the case of the letters. So uppercase will be converted to lower and lowercase converted to upper." }, { "code": null, "e": 1304, "s": 1224, "text": "So, if the input is ...
Area of Largest rectangle that can be inscribed in an Ellipse - GeeksforGeeks
16 Mar, 2021

Given an ellipse with major and minor axis lengths 2a and 2b, the task is to find the area of the largest rectangle that can be inscribed in it.

Examples:

Input: a = 4, b = 3
Output: 24

Input: a = 10, b = 8
Output: 160

Approach: Let the upper right corner of the rectangle have coordinates (x, y). Then the area of the rectangle is A = 4xy.

Now, the equation of the ellipse is x²/a² + y²/b² = 1.

Thinking of the area as a function of x, we have dA/dx = 4x·(dy/dx) + 4y. Differentiating the equation of the ellipse with respect to x, we have 2x/a² + (2y/b²)·(dy/dx) = 0, so dy/dx = −b²x/(a²y), and dA/dx = 4y − 4b²x²/(a²y). Setting this to 0 and simplifying, we have y² = b²x²/a². From the equation of the ellipse we know that y² = b² − b²x²/a². Thus y² = b² − y², so 2y² = b² and y²/b² = 1/2. Clearly, then, x²/a² = 1/2 as well, and the area is maximized when x = a/√2 and y = b/√2.

So the maximum area is A_max = 4·(a/√2)·(b/√2) = 2ab.

Below is the implementation of the above approach:

C++

// C++ program to find the biggest rectangle
// which can be inscribed within the ellipse
#include <bits/stdc++.h>
using namespace std;

// Function to find the area of the rectangle
float rectanglearea(float a, float b)
{
    // a and b cannot be negative
    if (a < 0 || b < 0)
        return -1;

    // area of the rectangle
    return 2 * a * b;
}

// Driver code
int main()
{
    float a = 10, b = 8;
    cout << rectanglearea(a, b) << endl;
    return 0;
}

Java

// Java program to find the biggest rectangle
// which can be inscribed within the ellipse
import java.util.*;
import java.lang.*;
import java.io.*;

class GFG {
    // Function to find the area of the rectangle
    static float rectanglearea(float a, float b)
    {
        // a and b cannot be negative
        if (a < 0 || b < 0)
            return -1;

        // area of the rectangle
        return 2 * a * b;
    }

    // Driver code
    public static void main(String args[])
    {
        float a = 10, b = 8;
        System.out.println(rectanglearea(a, b));
    }
}

Python 3

# Python 3 program to find the biggest rectangle
# which can be inscribed within the ellipse

# Function to find the area of the rectangle
def rectanglearea(a, b):

    # a and b cannot be negative
    if a < 0 or b < 0:
        return -1

    # area of the rectangle
    return 2 * a * b

# Driver code
if __name__ == "__main__":
    a, b = 10, 8
    print(rectanglearea(a, b))

# This code is contributed by ANKITRAI1

C#

// C# program to find the biggest rectangle
// which can be inscribed within the ellipse
using System;

class GFG {
    // Function to find the area of the rectangle
    static float rectanglearea(float a, float b)
    {
        // a and b cannot be negative
        if (a < 0 || b < 0)
            return -1;

        // area of the rectangle
        return 2 * a * b;
    }

    // Driver code
    public static void Main()
    {
        float a = 10, b = 8;
        Console.WriteLine(rectanglearea(a, b));
    }
}

// This code is contributed by inder_verma

PHP

<?php
// PHP program to find the biggest rectangle
// which can be inscribed within the ellipse

// Function to find the area of the rectangle
function rectanglearea($a, $b)
{
    // a and b cannot be negative
    if ($a < 0 or $b < 0)
        return -1;

    // area of the rectangle
    return 2 * $a * $b;
}

// Driver code
$a = 10;
$b = 8;
echo rectanglearea($a, $b);

// This code is contributed by inder_verma
?>

Javascript

<script>
// Javascript program to find the biggest rectangle
// which can be inscribed within the ellipse

// Function to find the area of the rectangle
function rectanglearea(a, b)
{
    // a and b cannot be negative
    if (a < 0 || b < 0)
        return -1;

    // area of the rectangle
    return 2 * a * b;
}

// Driver code
var a = 10, b = 8;
document.write(rectanglearea(a, b));

// This code is contributed by Princi Singh
</script>

Output:

160
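As an optional sanity check on the derivation (my addition, not part of the original article), one can parameterize the first-quadrant corner of the rectangle as (a·cos t, b·sin t) and sweep t over [0, π/2]; the largest value of 4xy found numerically should approach 2ab.

```python
import math

def max_inscribed_rect_area(a, b, samples=100000):
    # sweep the corner (a*cos t, b*sin t) along the ellipse and
    # track the best rectangle area 4*x*y
    best = 0.0
    for k in range(samples + 1):
        t = (math.pi / 2) * k / samples
        best = max(best, 4 * (a * math.cos(t)) * (b * math.sin(t)))
    return best

print(round(max_inscribed_rect_area(10, 8)))  # 160, matching 2*a*b
```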
Program to find longest prefix that is also a suffix in C++
Suppose we have a string s; we have to find the longest prefix of s which is also a suffix (excluding the string itself). If there is no such prefix, then simply return a blank string.

So, if the input is like "madam", then the output will be "m". The string has 4 prefixes excluding itself: "m", "ma", "mad", "mada", and 4 suffixes: "m", "am", "dam", "adam". The longest prefix which is also a suffix is "m".

To solve this, we will follow these steps −

Define a function lps(), this will take s
   n := size of s
   Define an array ret of size n
   j := 0, i := 1
   while i < n, do
      if s[i] is same as s[j], then
         ret[i] := j + 1
         (increase i by 1)
         (increase j by 1)
      otherwise when s[i] is not equal to s[j], then
         if j > 0, then
            j := ret[j − 1]
         Otherwise
            (increase i by 1)
   return ret
From the main method do the following −
   n := size of s
   if n is same as 1, then return blank string
   Define an array v = lps(s)
   x := v[n − 1]
   ret := blank string
   for initialize i := 0, when i < x, update (increase i by 1), do
      ret := ret + s[i]
   return ret

Let us see the following implementation to get better understanding −

#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
   vector<int> lps(string s) {
      int n = s.size();
      vector<int> ret(n);
      int j = 0;
      int i = 1;
      while (i < n) {
         if (s[i] == s[j]) {
            ret[i] = j + 1;
            i++;
            j++;
         } else {
            if (j > 0)
               j = ret[j - 1];
            else
               i++;
         }
      }
      return ret;
   }
   string longestPrefix(string s) {
      int n = s.size();
      if (n == 1)
         return "";
      vector<int> v = lps(s);
      int x = v[n - 1];
      string ret = "";
      for (int i = 0; i < x; i++) {
         ret += s[i];
      }
      return ret;
   }
};

int main() {
   Solution ob;
   cout << (ob.longestPrefix("helloworldhello"));
}

Output:

hello
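The same prefix-function (failure-function) idea can be sketched in Python; `longest_prefix_suffix` is a name I've chosen for this illustration:

```python
def longest_prefix_suffix(s: str) -> str:
    n = len(s)
    if n <= 1:
        return ""
    # lps[i] = length of the longest proper prefix of s[:i+1]
    # that is also a suffix of s[:i+1]
    lps = [0] * n
    j = 0
    for i in range(1, n):
        while j > 0 and s[i] != s[j]:
            j = lps[j - 1]  # fall back to the next shorter candidate prefix
        if s[i] == s[j]:
            j += 1
        lps[i] = j
    # lps[-1] is the length of the longest prefix that is also a suffix
    return s[:lps[-1]]

print(longest_prefix_suffix("helloworldhello"))  # hello
print(longest_prefix_suffix("madam"))            # m
```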
How to Get the current language in Android device?
While doing internationalization in an Android application, we should know the current language of the Android device. This example demonstrates how to get the current language in an Android device.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context=".MainActivity"
   android:background="#dde4dd"
   android:gravity="center"
   android:orientation="vertical">
   <TextView
      android:id="@+id/language"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content" />
   <Button
      android:id="@+id/click"
      android:layout_marginTop="10dp"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:text="click"/>
</LinearLayout>

In the above code we have taken a Button. When the user clicks the button, the app reads the device's language and country and appends them to the text view.
Step 3 − Add the following code to src/MainActivity.java

package com.example.andy.myapplication;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import java.util.Locale;

public class MainActivity extends AppCompatActivity {
   TextView language;
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      language = findViewById(R.id.language);
      Button button = findViewById(R.id.click);
      button.setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            String languagename = Locale.getDefault().getDisplayLanguage();
            String country = Locale.getDefault().getCountry();
            language.setText("Language " + languagename + " Country name " + country);
         }
      });
   }
}

In the above code, we have used the Locale class to get the display language and country as shown below −

String languagename = Locale.getDefault().getDisplayLanguage();
String country = Locale.getDefault().getCountry();

Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.

In the above result, the initial screen is shown; when the user clicks the button, it shows the country name and language.

Click here to download the project code
Creating Bitcoin trading bots don’t lose money | by Adam King | Towards Data Science
In this article we are going to create deep reinforcement learning agents that learn to make money trading Bitcoin. In this tutorial we will be using OpenAI’s gym and the PPO agent from the stable-baselines library, a fork of OpenAI’s baselines library.

The purpose of this series of articles is to experiment with state-of-the-art deep reinforcement learning technologies to see if we can create profitable Bitcoin trading bots. It seems to be the status quo to quickly shut down any attempts to create reinforcement learning algorithms, as it is “the wrong way to go about building a trading algorithm”. However, recent advances in the field have shown that RL agents are often capable of learning much more than supervised learning agents within the same problem domain. For this reason, I am writing these articles to see just how profitable we can make these trading agents, or if the status quo exists for a reason.

Many thanks to OpenAI and DeepMind for the open source software they have been providing to deep learning researchers for the past couple of years. If you haven’t yet seen the amazing feats they’ve accomplished with technologies like AlphaGo, OpenAI Five, and AlphaStar, you may have been living under a rock for the last year, but you should also go check them out.

While we won’t be creating anything quite as impressive, it is still no easy feat to trade Bitcoin profitably on a day-to-day basis. However, as Teddy Roosevelt once said, “Nothing worth having comes easy.” So instead of learning to trade ourselves... let’s make a robot to do it for us.

When you’ve read this article, check out TensorTrade — the successor framework to the codebase produced in this article.
Create a gym environment for our agent to learn from
Render a simple, yet elegant visualization of that environment
Train our agent to learn a profitable trading strategy

If you are not already familiar with how to create a gym environment from scratch, or how to render simple visualizations of those environments, I have just written articles on both of those topics. Feel free to pause here and read either of those before continuing.

For this tutorial, we are going to be using the Kaggle data set produced by Zielak. The .csv data file will also be available on my GitHub repo if you’d like to download the code to follow along. Okay, let’s get started.

First, let’s import all of the necessary libraries. Make sure to pip install any libraries you are missing.

import gym
import pandas as pd
import numpy as np
from gym import spaces
from sklearn import preprocessing

Next, let’s create our class for the environment. We’ll require a pandas data frame to be passed in, as well as an optional initial_balance, and a lookback_window_size, which will indicate how many time steps in the past the agent will observe at each step. We will default the commission per trade to 0.075%, which is Bitmex’s current rate, and default the serial parameter to false, meaning our data frame will be traversed in random slices by default. We also call dropna() and reset_index() on the data frame to first remove any rows with NaN values, and then reset the frame’s index since we’ve removed data.
class BitcoinTradingEnv(gym.Env):
    """A Bitcoin trading environment for OpenAI gym"""
    metadata = {'render.modes': ['live', 'file', 'none']}
    scaler = preprocessing.MinMaxScaler()
    viewer = None

    def __init__(self, df, lookback_window_size=50, commission=0.00075,
                 initial_balance=10000, serial=False):
        super(BitcoinTradingEnv, self).__init__()
        self.df = df.dropna().reset_index()
        self.lookback_window_size = lookback_window_size
        self.initial_balance = initial_balance
        self.commission = commission
        self.serial = serial
        # Actions of the format Buy 1/10, Sell 3/10, Hold, etc.
        self.action_space = spaces.MultiDiscrete([3, 10])
        # Observes the OHCLV values, net worth, and trade history
        self.observation_space = spaces.Box(
            low=0, high=1, shape=(10, lookback_window_size + 1),
            dtype=np.float16)

Our action_space here is represented as a discrete set of 3 options (buy, sell, or hold) and another discrete set of 10 amounts (1/10, 2/10, 3/10, etc). When the buy action is selected, we will buy amount * self.balance worth of BTC. For the sell action, we will sell amount * self.btc_held worth of BTC. Of course, the hold action will ignore the amount and do nothing.

Our observation_space is defined as a continuous set of floats between 0 and 1, with the shape (10, lookback_window_size + 1). The + 1 is to account for the current time step. For each time step in the window, we will observe the OHCLV values, our net worth, the amount of BTC bought or sold, and the total amount in USD we’ve spent on or received from those BTC.

Next, we need to write our reset method to initialize the environment.

def reset(self):
    self.balance = self.initial_balance
    self.net_worth = self.initial_balance
    self.btc_held = 0
    self._reset_session()
    self.account_history = np.repeat([
        [self.net_worth], [0], [0], [0], [0]
    ], self.lookback_window_size + 1, axis=1)
    self.trades = []
    return self._next_observation()

Here we use both self._reset_session and self._next_observation, which we haven’t defined yet. Let’s define them.
An important piece of our environment is the concept of a trading session. If we were to deploy this agent into the wild, we would likely never run it for more than a couple months at a time. For this reason, we are going to limit the amount of continuous frames in self.df that our agent will see in a row.

In our _reset_session method, we are going to first reset the current_step to 0. Next, we’ll set steps_left to a random number between 1 and MAX_TRADING_SESSION, which we will now define at the top of the file.

MAX_TRADING_SESSION = 100000  # ~2 months

Next, if we are traversing the frame serially, we will setup the entire frame to be traversed, otherwise we’ll set the frame_start to a random spot within self.df, and create a new data frame called active_df, which is just a slice of self.df from frame_start to frame_start + steps_left.

def _reset_session(self):
    self.current_step = 0
    if self.serial:
        self.steps_left = len(self.df) - self.lookback_window_size - 1
        self.frame_start = self.lookback_window_size
    else:
        self.steps_left = np.random.randint(1, MAX_TRADING_SESSION)
        self.frame_start = np.random.randint(
            self.lookback_window_size, len(self.df) - self.steps_left)
    self.active_df = self.df[self.frame_start - self.lookback_window_size:
                             self.frame_start + self.steps_left]

One important side effect of traversing the data frame in random slices is our agent will have much more unique data to work with when trained for long periods of time. For example, if we only ever traversed the data frame in a serial fashion (i.e. in order from 0 to len(df)), then we would only ever have as many unique data points as are in our data frame. Our observation space could only ever take on a discrete number of states at each time step.
However, by randomly traversing slices of the data frame, we essentially manufacture more unique data points by creating more interesting combinations of account balance, trades taken, and previously seen price action for each time step in our initial data set. Let me explain with an example.

At time step 10 after resetting a serial environment, our agent will always be at the same time within the data frame, and would have had 3 choices to make at each time step: buy, sell, or hold. And for each of these 3 choices, another choice would then be required: 10%, 20%, ..., or 100% of the amount possible. This means our agent could experience any of (10³)¹⁰ total states, for a total of 10³⁰ possible unique experiences.

Now consider our randomly sliced environment. At time step 10, our agent could be at any of len(df) time steps within the data frame. Given the same choices to make at each time step, this means this agent could experience any of len(df)³⁰ possible unique states within the same 10 time steps.

While this may add quite a bit of noise to large data sets, I believe it should allow the agent to learn more from our limited amount of data. We will still traverse our test data set in serial fashion, to get a more accurate understanding of the algorithm’s usefulness on fresh, seemingly “live” data.

It can often be helpful to visualize an environment’s observation space, in order to get an idea of the types of features your agent will be working with. For example, here is a visualization of our observation space rendered using OpenCV.

Each row in the image represents a row in our observation_space. The first 4 rows of frequency-like red lines represent the OHCL data, and the spurious orange and yellow dots directly below represent the volume. The fluctuating blue bar below that is the agent’s net worth, and the lighter blips below that represent the agent’s trades.
If you squint, you can just make out a candlestick graph, with volume bars below it and a strange morse-code like interface below that shows trade history. It looks like our agent should be able to learn sufficiently from the data in our observation_space, so let’s move on.

Here we’ll define our _next_observation method, where we’ll scale the observed data from 0 to 1. It’s important to only scale the data the agent has observed so far to prevent look-ahead biases.

def _next_observation(self):
    end = self.current_step + self.lookback_window_size + 1
    obs = np.array([
        self.active_df['Open'].values[self.current_step:end],
        self.active_df['High'].values[self.current_step:end],
        self.active_df['Low'].values[self.current_step:end],
        self.active_df['Close'].values[self.current_step:end],
        self.active_df['Volume_(BTC)'].values[self.current_step:end],
    ])
    scaled_history = self.scaler.fit_transform(self.account_history)
    obs = np.append(obs,
        scaled_history[:, -(self.lookback_window_size + 1):], axis=0)
    return obs

Now that we’ve set up our observation space, it’s time to write our step function, and in turn, take the agent’s prescribed action. Whenever self.steps_left == 0 for our current trading session, we will sell any BTC we are holding and call _reset_session(). Otherwise, we set the reward to our current net worth and only set done to True if we’ve run out of money.

def step(self, action):
    current_price = self._get_current_price() + 0.01
    self._take_action(action, current_price)
    self.steps_left -= 1
    self.current_step += 1
    if self.steps_left == 0:
        self.balance += self.btc_held * current_price
        self.btc_held = 0
        self._reset_session()
    obs = self._next_observation()
    reward = self.net_worth
    done = self.net_worth <= 0
    return obs, reward, done, {}

Taking an action is as simple as getting the current_price, determining the specified action, and either buying or selling the specified amount of BTC. Let’s quickly write _take_action so we can test our environment.
def _take_action(self, action, current_price):
    action_type = action[0]
    amount = action[1] / 10
    btc_bought = 0
    btc_sold = 0
    cost = 0
    sales = 0
    if action_type < 1:
        btc_bought = self.balance / current_price * amount
        cost = btc_bought * current_price * (1 + self.commission)
        self.btc_held += btc_bought
        self.balance -= cost
    elif action_type < 2:
        btc_sold = self.btc_held * amount
        sales = btc_sold * current_price * (1 - self.commission)
        self.btc_held -= btc_sold
        self.balance += sales

Finally, in the same method, we will append the trade to self.trades and update our net worth and account history.

    if btc_sold > 0 or btc_bought > 0:
        self.trades.append({
            'step': self.frame_start + self.current_step,
            'amount': btc_sold if btc_sold > 0 else btc_bought,
            'total': sales if btc_sold > 0 else cost,
            'type': "sell" if btc_sold > 0 else "buy"
        })
    self.net_worth = self.balance + self.btc_held * current_price
    self.account_history = np.append(self.account_history, [
        [self.net_worth], [btc_bought], [cost], [btc_sold], [sales]
    ], axis=1)

Our agents can now initiate a new environment, step through that environment, and take actions that affect the environment. It’s time to watch them trade.

Our render method could be something as simple as calling print(self.net_worth), but that’s no fun. Instead we are going to plot a simple candlestick chart of the pricing data with volume bars and a separate plot for our net worth. We are going to take the code in StockTradingGraph.py from the last article I wrote, and re-purpose it to render our Bitcoin environment. You can grab the code from my GitHub.

The first change we are going to make is to update self.df['Date'] everywhere to self.df['Timestamp'], and remove all calls to date2num as our dates already come in unix timestamp format. Next, in our render method, we are going to update our date labels to print human-readable dates, instead of numbers.
from datetime import datetime

First, import the datetime library, then we’ll use the utcfromtimestamp method to get a UTC string from each timestamp and strftime to format the string in Y-m-d H:M format.

date_labels = np.array([datetime.utcfromtimestamp(x).strftime('%Y-%m-%d %H:%M')
                        for x in self.df['Timestamp'].values[step_range]])

Finally, we change self.df['Volume'] to self.df['Volume_(BTC)'] to match our data set, and we’re good to go. Back in our BitcoinTradingEnv, we can now write our render method to display the graph.

def render(self, mode='human', **kwargs):
    if mode == 'human':
        if self.viewer == None:
            self.viewer = BitcoinTradingGraph(self.df,
                kwargs.get('title', None))
        self.viewer.render(self.frame_start + self.current_step,
                           self.net_worth, self.trades,
                           window_size=self.lookback_window_size)

And voila! We can now watch our agents trade Bitcoin. The green ghosted tags represent buys of BTC and the red ghosted tags represent sells. The white tag on the top right is the agent’s current net worth and the bottom right tag is the current price of Bitcoin. Simple, yet elegant. Now, it’s time to train our agent and see how much money we can make!

One of the criticisms I received on my first article was the lack of cross-validation, or splitting the data into a training set and test set. The purpose of doing this is to test the accuracy of your final model on fresh data it has never seen before. While this was not a concern of that article, it definitely is here. Since we are using time series data, we don’t have many options when it comes to cross-validation. For example, one common form of cross validation is called k-fold validation, in which you split the data into k equal groups and one by one single out a group as the test group and use the rest of the data as the training group. However time series data is highly time dependent, meaning later data is highly dependent on previous data.
So k-fold won’t work, because our agent will learn from future data before having to trade it, an unfair advantage. This same flaw applies to most other cross-validation strategies when applied to time series data. So we are left with simply taking a slice of the full data frame to use as the training set from the beginning of the frame up to some arbitrary index, and using the rest of the data as the test set.

slice_point = int(len(df) - 100000)
train_df = df[:slice_point]
test_df = df[slice_point:]

Next, since our environment is only set up to handle a single data frame, we will create two environments, one for the training data and one for the test data.

train_env = DummyVecEnv([lambda: BitcoinTradingEnv(train_df, commission=0, serial=False)])
test_env = DummyVecEnv([lambda: BitcoinTradingEnv(test_df, commission=0, serial=True)])

Now, training our model is as simple as creating an agent with our environment and calling model.learn.

model = PPO2(MlpPolicy, train_env, verbose=1, tensorboard_log="./tensorboard/")
model.learn(total_timesteps=50000)

Here, we are using tensorboard so we can easily visualize our tensorflow graph and view some quantitative metrics about our agents. For example, here is a graph of the discounted rewards of many agents over 200,000 time steps:

Wow, it looks like our agents are extremely profitable! Our best agent was even capable of 1000x’ing his balance over the course of 200,000 steps, and the rest averaged at least a 30x increase! It was at this point that I realized there was a bug in the environment... Here is the new rewards graph, after fixing that bug:

As you can see, a couple of our agents did well, and the rest traded themselves into bankruptcy. However, the agents that did well were able to 10x and even 60x their initial balance, at best. I must admit, all of the profitable agents were trained and tested in an environment without commissions, so it is still entirely unrealistic for our agents to make any real money.
But we’re getting somewhere! Let’s test our agents on the test environment (with fresh data they’ve never seen before), to see how well they’ve learned to trade Bitcoin. Clearly, we’ve still got quite a bit of work to do. By simply switching our model to use stable-baselines’ A2C, instead of the current PPO2 agent, we can greatly improve our performance on this data set. Finally, we can update our reward function slightly, as per Sean O’Gorman’s advice, so that we reward increases in net worth, not just achieving a high net worth and staying there.

reward = self.net_worth - prev_net_worth

These two changes alone greatly improve the performance on the test data set, and as you can see below, we are finally able to achieve profitability on fresh data that wasn’t in the training set. However, we can do much better. In order for us to improve these results, we are going to need to optimize our hyper-parameters and train our agents for much longer. Time to break out the GPU and get to work!

However, this article is already a bit long and we’ve still got quite a bit of detail to go over, so we are going to take a break here. In my next article, we will use Bayesian optimization to zone in on the best hyper-parameters for our problem space, and improve the agent’s model to achieve highly profitable trading strategies.

In this article, we set out to create a profitable Bitcoin trading agent from scratch, using deep reinforcement learning. We were able to accomplish the following:

Created a Bitcoin trading environment from scratch using OpenAI’s gym.
Built a visualization of that environment using Matplotlib.
Trained and tested our agents using simple cross-validation.
Tuned our agent slightly to achieve profitability.
While our trading agent isn’t quite as profitable as we’d hoped, it is definitely getting somewhere. Next time, we will improve on these algorithms through advanced feature engineering and Bayesian optimization to make sure our agents can consistently beat the market. Stay tuned for my next article, and long live Bitcoin!

It is important to understand that all of the research documented in this article is for educational purposes, and should not be taken as trading advice. You should not trade based on any algorithms or strategies defined in this article, as you are likely to lose your investment.

Thanks for reading! As always, all of the code for this tutorial can be found on my GitHub. Leave a comment below if you have any questions or feedback, I’d love to hear from you! I can also be reached on Twitter at @notadamking. You can also sponsor me on Github Sponsors or Patreon via the links below.

Github Sponsors is currently matching all donations 1:1 up to $5,000!
C++ Program to Perform Dictionary Operations in a Binary Search Tree
A Binary Search Tree is a sorted binary tree in which all the nodes have the following properties −

The right sub-tree of a node has a key greater than its parent node's key.
The left sub-tree of a node has a key less than or equal to its parent node's key.
Each node should not have more than two children.

This is a C++ program to perform dictionary operations in a binary search tree.

For insert:
Begin
   Declare function insert(int k)
      in = int(k mod max)
      p[in] = (n_type*) malloc(sizeof(n_type))
      p[in]->d = k
      if (r[in] == NULL) then
         r[in] = p[in]
         r[in]->n = NULL
         t[in] = p[in]
      else
         t[in] = r[in]
         while (t[in]->n != NULL)
            t[in] = t[in]->n
         t[in]->n = p[in]
End.

For search a value:
Begin
   Declare function search(int k)
      int flag = 0
      in = int(k mod max)
      t[in] = r[in]
      while (t[in] != NULL) do
         if (t[in]->d == k) then
            Print "Search key is found".
            flag = 1
            break
         else
            t[in] = t[in]->n
      if (flag == 0)
         Print "search key not found".
End.

For delete:
Begin
   Declare function delete_element(int k)
      in = int(k mod max)
      t[in] = r[in]
      while (t[in]->d != k and t[in] != NULL)
         p[in] = t[in]
         t[in] = t[in]->n
      p[in]->n = t[in]->n
      Print the deleted element
      t[in]->d = -1
      t[in] = NULL
      free(t[in])
End.

#include <iostream>
#include <stdlib.h>
using namespace std;
#define max 20
typedef struct dictionary {
   int d;
   struct dictionary *n;
} n_type;
n_type *p[max], *r[max], *t[max];
class Dict {
   public:
      int in;
      Dict();
      void insert(int);
      void search(int);
      void delete_element(int);
};
int main(int argc, char **argv) {
   int v, choice, n, num;
   char c;
   Dict d;
   do {
      cout << "\n1.Create";
      cout << "\n2.Search for a value";
      cout << "\n3.Delete a value";
      cout << "\nEnter your choice:";
      cin >> choice;
      switch (choice) {
         case 1:
            cout << "\nEnter the number of elements to be inserted:";
            cin >> n;
            cout << "\nEnter the elements to be inserted:";
            for (int i = 0; i < n; i++) {
               cin >> num;
               d.insert(num);
            }
            break;
         case 2:
            cout << "\nEnter the element to be searched:";
            cin >> n;
            d.search(n);
            // note: no break here, so control falls through to case 3
            // (delete), which is why the sample run below asks for a
            // value to delete immediately after a search
         case 3:
            cout << "\nEnter the element to be deleted:";
            cin >> n;
            d.delete_element(n);
            break;
         default:
            cout << "\nInvalid choice....";
            break;
      }
      cout << "\nEnter y to continue......";
      cin >> c;
   } while (c == 'y');
}
Dict::Dict() {
   in = -1;
   for (int i = 0; i < max; i++) {
      r[i] = NULL;
      p[i] = NULL;
      t[i] = NULL;
   }
}
void Dict::insert(int k) {
   in = int(k % max);
   p[in] = (n_type*) malloc(sizeof(n_type));
   p[in]->d = k;
   if (r[in] == NULL) {
      r[in] = p[in];
      r[in]->n = NULL;
      t[in] = p[in];
   } else {
      t[in] = r[in];
      while (t[in]->n != NULL)
         t[in] = t[in]->n;
      t[in]->n = p[in];
   }
}
void Dict::search(int k) {
   int flag = 0;
   in = int(k % max);
   t[in] = r[in];
   while (t[in] != NULL) {
      if (t[in]->d == k) {
         cout << "\nSearch key is found!!";
         flag = 1;
         break;
      } else
         t[in] = t[in]->n;
   }
   if (flag == 0)
      cout << "\nsearch key not found.......";
}
void Dict::delete_element(int k) {
   in = int(k % max);
   t[in] = r[in];
   while (t[in]->d != k && t[in] != NULL) {
      p[in] = t[in];
      t[in] = t[in]->n;
   }
   p[in]->n = t[in]->n;
   cout << "\n" << t[in]->d << " has been deleted.";
   t[in]->d = -1;
   t[in] = NULL;
   free(t[in]);
}

Output:

1.Create
2.Search for a value
3.Delete a value
Enter your choice:1
Enter the number of elements to be inserted:3
Enter the elements to be inserted:111 222 3333
Enter y to continue......y
1.Create
2.Search for a value
3.Delete a value
Enter your choice:2
Enter the element to be searched:111
Search key is found!!
Enter the element to be deleted:222
222 has been deleted.
Enter y to continue......y
1.Create
2.Search for a value
3.Delete a value
Enter your choice:222
Invalid choice....
Enter y to continue......y
1.Create
2.Search for a value
3.Delete a value
Enter your choice:2
Enter the element to be searched:222
search key not found.......
Enter the element to be deleted:0
Generalized Linear Mixed Effects Models in R and Python with GPBoost | by Fabio Sigrist | Towards Data Science
GPBoost is a recently released C++ software library that, among other things, allows for fitting generalized linear mixed effects models in R and Python. This article shows how this can be done using the corresponding R and Python gpboost packages. Further, we do a comparison to the lme4 R package and the statsmodels Python package. In simulated experiments, we find that gpboost is considerably faster than the lme4 R package (more than 100 times in some cases). Disconcertingly, the statsmodels Python package often wrongly estimates models.

Generalized linear mixed effects models (GLMMs) assume that a response variable y follows a known parametric distribution p(y|mu) and that a parameter mu of this distribution (often the mean) is related to the sum of so-called fixed effects Xb and random effects Zu:

y ~ p(y|mu)
mu = f( Xb + Zu )

where

y is the response variable (aka label, dependent variable)
Xb are the fixed effects, where X is a matrix with predictor variables (aka features, covariates) and b are coefficients
Zu are the random effects, where u is assumed to follow a multivariate normal distribution and Z is a matrix that relates u to the samples
f() is a link function that ensures that mu = f( Xb + Zu ) is in the proper range (for instance, for binary data, the mean must be between 0 and 1)

What distinguishes a GLMM from a generalized linear model (GLM) is the presence of the random effects Zu. Random effects can consist of, for instance, grouped (aka clustered) random effects with a potentially nested or crossed grouping structure. As such, random effects can also be seen as an approach for modeling high-cardinality categorical variables. Further, random effects can consist of Gaussian processes used, for instance, for modeling spatial data. Compared to using fixed effects only, random effects have the advantage that a model can be more efficiently estimated when, e.g., the number of groups or categories is large relative to the sample size.
Linear mixed effects models (LMEs) are a special case of GLMMs in which p(y|mu) is Gaussian and f() is simply the identity. We briefly demonstrate how the R and Python gpboost packages can be used for inference and prediction with GLMMs. For more details, we refer to the GitHub page, in particular the R and Python GLMM examples. The gpboost R and Python packages are available on CRAN and PyPI and can be installed as follows:

Python: pip install gpboost -U
R: install.packages("gpboost", repos="https://cran.r-project.org")

Estimation of GLMMs is a non-trivial task due to the fact that the likelihood (the quantity that should be maximized) cannot be written down in closed form. The current implementation of GPBoost (version 0.6.3) is based on the Laplace approximation. Model estimation in Python and R can be done as follows:

Python

gp_model = gpb.GPModel(group_data=group_data, likelihood="binary")
gp_model.fit(y=y, X=X)
gp_model.summary()

R

gp_model <- fitGPModel(group_data=group_data, likelihood="binary", y=y, X=X)
summary(gp_model)

where

group_data is a matrix or vector with categorical grouping variable(s) specifying the random effects structure. If there are multiple (crossed or nested) random effects, the corresponding grouping variables should be in the columns of group_data
y is a vector with response variable data
X is a matrix with fixed effects covariate data
likelihood denotes the distribution of the response variable (e.g., likelihood="binary" denotes a Bernoulli distribution with a probit link function)

After estimation, the summary() function shows the estimated variance and covariance parameters of the random effects and the fixed effects coefficients b. Obtaining p-values for fixed effects coefficients in GLMMs is a somewhat murky endeavor. Since the likelihood cannot be calculated exactly in the first place, one has to rely on multiple asymptotic arguments. I.e., p-values can be (very) approximate and should be taken with a grain of salt.
However, since the lme4 and statsmodels packages allow for calculating approximate standard deviations and p-values, we also show how this can be done using gpboost relying on the same approach as lme4. In short, one has to enable "std_dev": True / std_dev=TRUE to calculate approximate standard deviations when fitting the model, and then use approximate Wald tests as shown below.

Python

gp_model = gpb.GPModel(group_data=group, likelihood="binary")
gp_model.fit(y=y, X=X, params={"std_dev": True})
coefs = gp_model.get_coef()
z_values = coefs[0] / coefs[1]
p_values = 2 * stats.norm.cdf(-np.abs(z_values))
print(p_values) # show p-values

R

gp_model <- fitGPModel(group_data=group_data, likelihood="binary", y=y, X=X, params=list(std_dev=TRUE))
coefs <- gp_model$get_coef()
z_values <- coefs[1,] / coefs[2,]
p_values <- 2 * exp(pnorm(-abs(z_values), log.p=TRUE))
coefs_summary <- rbind(coefs, z_values, p_values)
print(signif(coefs_summary, digits=4)) # show p-values

Predictions can be obtained by calling the predict() function as shown below.

Python

pred = gp_model.predict(X_pred=X_test, group_data_pred=group_test, predict_var=True, predict_response=False)
print(pred['mu']) # predicted latent mean
print(pred['var']) # predicted latent variance

R

pred <- predict(gp_model, X_pred=X_test, group_data_pred=group_test, predict_var=TRUE, predict_response=FALSE)
pred$mu # predicted latent mean
pred$var # predicted latent variance

where

group_data_pred is a matrix or vector with categorical grouping variable(s) for which predictions are made
X_pred is a matrix with fixed effects covariate data for which predictions are made
predict_var (boolean) indicates whether predictive variances should be calculated in addition to the mean
predict_response (boolean) indicates whether the response y or the latent Xb + Zu should be predicted, i.e., whether the random effects part is also predicted

If group_data_pred contains new, unobserved categories, the corresponding random effects predictions will be 0.
In the following, we do a simulation study to compare gpboost (version 0.6.3) with lme4 (version 1.1-27) and statsmodels (version 0.12.2). The code to reproduce the full simulation study can be found here. We use the default options for all packages. In particular, all packages use the Laplace approximation to approximate the (marginal) likelihood. We evaluate both computational time and the accuracy of variance parameter and fixed effects coefficient estimates, measured in terms of root mean squared error (RMSE). Concerning the latter, we expect to see only minor differences as, in theory, all packages rely on the same statistical methodology and differ only in their specific software implementations. As baseline setting, we use the following model to simulate data:

n=1000 samples
10 samples per group (i.e., 100 different groups)
10 fixed effects covariates plus an intercept term
A single-level grouped random effects model
A binary Bernoulli likelihood with a probit link function

We investigate how the results change when varying each of these choices while holding all others fixed. In detail, we vary these choices as follows: (1) number of samples: 100, 200, 500, 1000, 2000, (2) number of groups: 2, 5, 10, 20, 50, 100, 200, 500, (3) number of covariates: 1, 2, 5, 10, 20, (4) nested and crossed random effects models, and (5) a Poisson likelihood instead of a binary one. A variance of 1 is used for the random effects. The covariates X are sampled from a normal distribution with mean 0 and variance chosen such that the signal-to-noise ratio between the fixed and random effects is one, and the true regression coefficients are all 1 except for the intercept, which is 0.
For each combination of the above-mentioned model choices, we simulate data 100 times and estimate the corresponding models using the three different software packages. See here for more details on the simulation study. All calculations are run on a laptop with a 2.9 GHz quad-core processor. The results are reported in the five figures below. Note that we plot the results on a logarithmic scale since, e.g., the differences in computational time between lme4 and gpboost are very large. We observe the following findings. First, statsmodels gives parameter estimates with very large RMSEs, i.e., very inaccurate estimates. This is disconcerting as, theoretically, all three packages should be doing the "same thing". Further, gpboost is considerably faster than lme4, with the difference being larger the higher the dimension of the random effects and the larger the number of fixed effects covariates. For instance, for a binary single-level random effects model with 100 groups, 1000 samples, and 20 covariates, gpboost is on average approximately 600 times faster than lme4. As expected, gpboost and lme4 have almost the same RMSEs, as both packages use the same methodology.

Note: gpboost uses OpenMP parallelization in C++ for operations that can be parallelized. But the main computational bottleneck is the calculation of a Cholesky factorization, and this operation cannot be parallelized. I.e., the large difference in computational time between lme4 and gpboost is not a result of parallelization.

GPBoost is a recently released C++ software library that, among other things, allows for fitting generalized linear mixed effects models in R and Python. As shown above, gpboost is considerably faster than the lme4 R package. Disconcertingly, the statsmodels Python package often results in very inaccurate estimates.
Besides grouped random effects considered in this article, GPBoost also allows for modeling Gaussian processes for, e.g., spatial or temporal random effects, as well as combined grouped random effects and Gaussian process models. Further, GPBoost supports random coefficients such as random slopes or spatially varying coefficients. Finally, apart from LMMs and GLMMs, GPBoost also allows for learning non-linear models without assuming any functional form on the fixed effects using tree-boosting; see these two blog posts on combining tree-boosting with grouped random effects and Gaussian processes or Sigrist (2020) and Sigrist (2021) for more details.
C/C++ program to make a simple calculator - GeeksforGeeks
22 Nov, 2018

Calculators are widely used devices nowadays; they make calculations easier and faster and are used by everyone in daily life. A simple calculator can be made using a C++ program which is able to add, subtract, multiply and divide two operands entered by the user. The switch and break statements are used to create the calculator. Program:

// C++ program to create calculator using
// switch statement
#include <iostream>
using namespace std;

// Main program
int main()
{
    char op;
    float num1, num2;

    // Allow the user to enter the operator, i.e. +, -, *, /
    cin >> op;

    // Allow the user to enter the operands
    cin >> num1 >> num2;

    // Switch statement begins
    switch (op) {

    // If the user enters +
    case '+':
        cout << num1 + num2;
        break;

    // If the user enters -
    case '-':
        cout << num1 - num2;
        break;

    // If the user enters *
    case '*':
        cout << num1 * num2;
        break;

    // If the user enters /
    case '/':
        // Guard against division by zero
        if (num2 == 0)
            cout << "Error! division by zero";
        else
            cout << num1 / num2;
        break;

    // If the operator is other than +, -, * or /,
    // an error message is displayed
    default:
        cout << "Error! operator is not correct";
        break;
    }

    // Switch statement ends
    return 0;
}

Output: enter the arithmetic operator (either +, -, * or /), then the two operands on which the calculation is to be performed; the result is printed.
How do I get the current time zone of MySQL?
The following is the syntax to get the current time zone of MySQL.

mysql> SELECT @@global.time_zone, @@session.time_zone;

The following is the output.

+--------------------+---------------------+
| @@global.time_zone | @@session.time_zone |
+--------------------+---------------------+
| SYSTEM             | SYSTEM              |
+--------------------+---------------------+
1 row in set (0.00 sec)

The above just returns "SYSTEM" because MySQL is set to the system time zone. Alternatively, we can get the current time zone with the help of the now() function. Let us first create a new table for our example.

mysql> create table CurrentTimeZone
   -> (
   -> currenttimeZone datetime
   -> )ENGINE=MYISAM;
Query OK, 0 rows affected (0.19 sec)

Insert a record into the table.

mysql> INSERT INTO CurrentTimeZone values(now());
Query OK, 1 row affected (0.10 sec)

Display the current time zone.

mysql> select *from CurrentTimeZone;

The following is the output.

+---------------------+
| currenttimeZone     |
+---------------------+
| 2018-10-29 17:20:12 |
+---------------------+
1 row in set (0.00 sec)
Create a padded grey box with rounded corners in Bootstrap
Use the .jumbotron class to create a padded grey box with rounded corners. You can try to run the following code to implement the .jumbotron class in Bootstrap.

Live Demo

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
      <script src = "/scripts/jquery.min.js"></script>
      <script src = "/bootstrap/js/bootstrap.min.js"></script>
   </head>
   <body>
      <div class = "container">
         <div class = "jumbotron">
            <h1>Welcome to my website.</h1>
            <p>This is demo text.</p>
            <p>
               <a class = "btn btn-default btn-sm" role = "button">More</a>
            </p>
         </div>
      </div>
   </body>
</html>
How to remove the boxes around legend of a plot created by ggplot2 in R?
When we create a plot with a legend using ggplot2, the legend values are covered with boxes, and that affects the smoothness of the plot. These boxes around the legend values can be removed so that the complete chart becomes more appealing to the viewer; this can be done with the help of the theme function by setting the legend.key element to element_blank. Consider the below data frame:

set.seed(1)
x<-rnorm(20)
y<-rpois(20,2)
Group<-rep(c("A","B","C","D"),times=5)
df<-data.frame(x,y,Group)
df

             x y Group
1  -0.62645381 3     A
2   0.18364332 2     B
3  -0.83562861 3     C
4   1.59528080 2     D
5   0.32950777 2     A
6  -0.82046838 3     B
7   0.48742905 0     C
8   0.73832471 2     D
9   0.57578135 3     A
10 -0.30538839 3     B
11  1.51178117 2     C
12  0.38984324 4     D
13 -0.62124058 2     A
14 -2.21469989 1     B
15  1.12493092 0     C
16 -0.04493361 0     D
17 -0.01619026 1     A
18  0.94383621 2     B
19  0.82122120 2     C
20  0.59390132 2     D

> library(ggplot2)

Creating the scatter plot with different colors of groups:

ggplot(df,aes(x,y,color=Group))+geom_point()

Here, we are getting the legend colors in boxes. If we want to get rid of these boxes then we can use the theme function as shown below:

ggplot(df,aes(x,y,color=Group))+geom_point()+theme(legend.key=element_blank())
Analysis of time and space complexity of C++ STL containers - GeeksforGeeks
04 Apr, 2022

In this article, we will discuss the time and space complexity of some C++ STL classes.

Characteristics of C++ STL: C++ has a low execution time as compared to other programming languages, which makes STL in C++ advantageous and powerful. STL contains a vast variety of classes that are well-optimized implementations of popular and standard algorithms and data structures, which makes it very useful while doing competitive programming or solving problems.

Analysis of functions in STL: Knowing the complexity of the STL classes used is essential, because the complexity of a solution cannot be analyzed without it. Implementation details and complexity analysis of the STL are also common interview questions. Below is the analysis of some STL containers:

Priority Queue is used in many popular algorithms. By default, priority_queue is an implementation of a max heap.

Syntax: priority_queue<data_type> Q

A min heap can also be obtained by using the following syntax.

Syntax: priority_queue<data_type, vector<data_type>, greater<data_type>> Q

The time and space complexity of the main functions is given below:

top(): O(1) time, O(1) auxiliary space
push(): O(log n) time, O(1) auxiliary space
pop(): O(log n) time, O(1) auxiliary space
empty(), size(): O(1) time, O(1) auxiliary space

Below is the C++ program illustrating the priority queue:

C++

// C++ program illustrating the
// priority queue
#include <bits/stdc++.h>
using namespace std;

// Function illustrating the
// priority queue
void priorityQueue()
{
    int Array[5] = { 1, 2, 3, 4, 5 };

    // Max heap
    int i;
    priority_queue<int> Q;

    for (i = 0; i < 5; i++) {

        // Pushes array elements to the
        // priority queue and rearranges
        // them to form a max heap
        Q.push(Array[i]);
    }

    // Maximum element in the
    // priority queue
    cout << "The maximum element is "
         << Q.top() << endl;

    i = 1;
    while (Q.empty() != 1) {
        int peek = Q.top();
        cout << "The " << i++
             << " th max element is "
             << peek << endl;

        // Pops the maximum element
        // out of the priority queue
        Q.pop();
    }

    cout << " Is priority queue "
         << "Q empty() ?" << endl
         << "check -->" << endl;

    // Checks whether the priority
    // queue is empty
    if (Q.empty() == 1)
        cout << "The priority queue"
             << " is empty" << endl;
    else
        cout << "The priority queue"
             << " is not empty." << endl;
}

// Driver Code
int main()
{
    // Function Call
    priorityQueue();
    return 0;
}

The maximum element is 5
The 1 th max element is 5
The 2 th max element is 4
The 3 th max element is 3
The 4 th max element is 2
The 5 th max element is 1
 Is priority queue Q empty() ?
check -->
The priority queue is empty

Map is the famous STL class that stores values as key-value pairs. It maps each value using a key, and identical keys cannot map to different values; the variation multimap makes it work for identical keys with different values. The map can even be used for keys and values of different data types.

Syntax: map<data_type, data_type> M

map<int, int> M is an implementation of self-balancing Red-Black Trees.
unordered_map<int, int> M is an implementation of a Hash Table, which makes the average complexity of operations like insert, delete and search Theta(1).
multimap<int, int> M is an implementation of Red-Black Trees, which are self-balancing trees, making the cost of operations the same as for the map.
unordered_multimap<int, int> M is implemented the same way as the unordered map, i.e. as a Hash Table. The only difference is that it keeps track of one more variable holding the count of occurrences.

Pairs are inserted into the map using pair<int, int>(x, y) and can be accessed through a map iterator via iterator->first (key) and iterator->second (value).
The map is by default kept sorted based on keys; in the case of the unordered map, the elements can be in any order. The time and space complexity of the main functions is given below (n is the size of the map):

insert(), erase(), find(): O(log n) time, O(1) auxiliary space
size(), empty(): O(1) time, O(1) auxiliary space
traversing the map: Theta(n) time, O(1) auxiliary space

Below is the C++ program illustrating map:

C++

// C++ program illustrating the map
#include <bits/stdc++.h>
using namespace std;

// Function illustrating the map
void Map()
{
    int i;

    // Declaring maps
    map<int, int> M;
    unordered_map<int, int> UM;
    multimap<int, int> MM;
    unordered_multimap<int, int> UMM;

    // Inserting pairs of key
    // and value
    for (i = 101; i <= 105; i++) {

        // Inserted the key and
        // value twice
        M.insert(pair<int, int>(i - 100, i));
        UM.insert(pair<int, int>(i - 100, i));
        M.insert(pair<int, int>(i - 100, i));
        UM.insert(pair<int, int>(i - 100, i));
    }

    for (i = 101; i <= 105; i++) {

        // Inserted the key and
        // value twice
        MM.insert(pair<int, int>(i - 100, i));
        UMM.insert(pair<int, int>(i - 100, i));
        MM.insert(pair<int, int>(i - 100, i));
        UMM.insert(pair<int, int>(i - 100, i));
    }

    // Iterators for accessing
    map<int, int>::iterator Mitr;
    unordered_map<int, int>::iterator UMitr;
    multimap<int, int>::iterator MMitr;
    unordered_multimap<int, int>::iterator UMMitr;

    // Sorted output
    cout << "In map" << endl;
    cout << "Key" << " " << "Value" << endl;
    for (Mitr = M.begin(); Mitr != M.end(); Mitr++) {
        cout << Mitr->first << " "
             << Mitr->second << endl;
    }

    // Unsorted and unordered output
    cout << "In unordered_map" << endl;
    cout << "Key" << " " << "Value" << endl;
    for (UMitr = UM.begin(); UMitr != UM.end(); UMitr++) {
        cout << UMitr->first << " "
             << UMitr->second << endl;
    }

    // Sorted output
    cout << "In multimap" << endl;
    cout << "Key" << " " << "Value" << endl;
    for (MMitr = MM.begin(); MMitr != MM.end(); MMitr++) {
        cout << MMitr->first << " "
             << MMitr->second << endl;
    }

    // Unsorted and unordered output
    cout << "In unordered_multimap" << endl;
    cout << "Key" << " " << "Value" << endl;
    for (UMMitr = UMM.begin(); UMMitr != UMM.end(); UMMitr++) {
        cout << UMMitr->first << " "
             << UMMitr->second << endl;
    }

    cout << "The erase() function erases respective key:" << endl;
    M.erase(1);
    cout << "Key" << " " << "Value" << endl;
    for (Mitr = M.begin(); Mitr != M.end(); Mitr++) {
        cout << Mitr->first << " "
             << Mitr->second << endl;
    }

    cout << "The find() function finds the respective key:" << endl;
    if (M.find(1) != M.end()) {
        cout << "Found!" << endl;
    }
    else {
        cout << "Not Found!" << endl;
    }

    cout << "The clear() function clears the map:" << endl;
    M.clear();

    // Returns the size of the map
    cout << "Now the size is :" << M.size();
}

// Driver Code
int main()
{
    // Function Call
    Map();
    return 0;
}

In map
Key Value
1 101
2 102
3 103
4 104
5 105
In unordered_map
Key Value
5 105
4 104
3 103
1 101
2 102
In multimap
Key Value
1 101
1 101
2 102
2 102
3 103
3 103
4 104
4 104
5 105
5 105
In unordered_multimap
Key Value
5 105
5 105
4 104
4 104
1 101
1 101
2 102
2 102
3 103
3 103
The erase() function erases respective key:
Key Value
2 102
3 103
4 104
5 105
The find() function finds the respective key:
Not Found!
The clear() function clears the map:
Now the size is :0

Explanation: m.begin() points the iterator to the starting element; m.end() points the iterator to the (theoretical) element after the last one.

The first useful property of the set is that it contains only distinct elements; the variation multiset can contain repeated elements. A set contains distinct elements in sorted order, an unordered set contains distinct elements in no particular order, and multisets contain repeated elements.

Syntax: set<data_type> S

set<int> S is an implementation of Binary Search Trees.
unordered_set<int> S is an implementation of a Hash Table.
multiset<int> S is an implementation of Red-Black Trees.
unordered_multiset<int> S is implemented the same way as the unordered set but uses an extra variable that keeps track of the count. With the unordered variants, the expected complexity of the basic operations becomes Theta(1) (worst case O(n)), and access becomes easier due to the Hash Table implementation.

The time and space complexity of the main functions is given below (n is the size of the set):

insert(), erase(), find(): O(log n) time, O(1) auxiliary space
size(), empty(): O(1) time, O(1) auxiliary space

Below is the C++ program illustrating set:

C++

// C++ program illustrating the set
#include <bits/stdc++.h>
using namespace std;

// Function illustrating the set
void Set()
{
    // Set declaration
    set<int> s;
    unordered_set<int> us;
    multiset<int> ms;
    unordered_multiset<int> ums;
    int i;

    for (i = 1; i <= 5; i++) {

        // Inserting each element twice;
        // only the multisets keep duplicates
        s.insert(2 * i + 1);
        us.insert(2 * i + 1);
        ms.insert(2 * i + 1);
        ums.insert(2 * i + 1);
        s.insert(2 * i + 1);
        us.insert(2 * i + 1);
        ms.insert(2 * i + 1);
        ums.insert(2 * i + 1);
    }

    // Iterators to access values in the sets
    set<int>::iterator sitr;
    unordered_set<int>::iterator uitr;
    multiset<int>::iterator mitr;
    unordered_multiset<int>::iterator umitr;

    cout << "The difference: " << endl;
    cout << "The output for set " << endl;
    for (sitr = s.begin(); sitr != s.end(); sitr++) {
        cout << *sitr << " ";
    }
    cout << endl;

    cout << "The output for unordered set " << endl;
    for (uitr = us.begin(); uitr != us.end(); uitr++) {
        cout << *uitr << " ";
    }
    cout << endl;

    cout << "The output for multiset " << endl;
    for (mitr = ms.begin(); mitr != ms.end(); mitr++) {
        cout << *mitr << " ";
    }
    cout << endl;

    cout << "The output for unordered multiset " << endl;
    for (umitr = ums.begin(); umitr != ums.end(); umitr++) {
        cout << *umitr << " ";
    }
    cout << endl;
}

// Driver Code
int main()
{
    // Function Call
    Set();
    return 0;
}

The difference: 
The output for set 
3 5 7 9 11 
The output for unordered set 
11 9 7 3 5 
The output for multiset 
3 3 5 5 7 7 9 9 11 11 
The output for unordered multiset 
11 11 9 9 3 3 5 5 7 7
It is a data structure that follows the Last In First Out (LIFO) rule; this class of STL is also used in many algorithms during their implementations. For example, many recursive solutions use the system stack to backtrack the pending calls of recursive functions; the same can be implemented iteratively using the STL stack.

Syntax: stack<data_type> A

std::stack is a container adaptor that, by default, uses std::deque as its underlying container. All of its main functions run in constant time:

push(), pop(), top(), size(), empty(): O(1) time, O(1) auxiliary space

Below is the C++ program illustrating stack:

C++

// C++ program illustrating the stack
#include <bits/stdc++.h>
using namespace std;

// Function illustrating stack
void Stack()
{
    stack<int> s;
    int i;

    for (i = 0; i <= 5; i++) {
        cout << "The pushed element is "
             << i << endl;
        s.push(i);
    }

    // Points to the top element of the stack
    cout << "The top element of the stack is: "
         << s.top() << endl;

    // Returns the size of the stack
    cout << "The size of the stack is: "
         << s.size() << endl;

    // Pops the elements of the
    // stack in the LIFO manner;
    // checks whether the stack
    // is empty or not
    while (s.empty() != 1) {
        cout << "The popped element is "
             << s.top() << endl;
        s.pop();
    }
}

// Driver Code
int main()
{
    // Function Call
    Stack();
    return 0;
}

The pushed element is 0
The pushed element is 1
The pushed element is 2
The pushed element is 3
The pushed element is 4
The pushed element is 5
The top element of the stack is: 5
The size of the stack is: 6
The popped element is 5
The popped element is 4
The popped element is 3
The popped element is 2
The popped element is 1
The popped element is 0

It is a data structure that follows the First In First Out (FIFO) rule. The inclusion of the STL queue class in code reduces the function calls for basic operations. The queue is often used in BFS traversals of trees and graphs and also in many popular algorithms. std::queue is likewise a container adaptor that uses std::deque as its underlying container by default.
Syntax: queue<data_type> Q

All of the main functions run in constant time:

push(), pop(), front(), back(), size(), empty(): O(1) time, O(1) auxiliary space

Below is the C++ program illustrating queue:

C++

// C++ program illustrating the queue
#include <bits/stdc++.h>
using namespace std;

// Function illustrating queue
void Queue()
{
    queue<int> q;
    int i;

    for (i = 101; i <= 105; i++) {

        // Inserts into the queue
        // in the FIFO manner
        q.push(i);
        cout << "The first and last elements of the queue are "
             << q.front() << " " << q.back() << endl;
    }

    // Check whether the queue is
    // empty or not
    while (q.empty() != 1) {

        // Pops the first element
        // of the queue
        cout << "The Element popped following FIFO is "
             << q.front() << endl;
        q.pop();
    }
}

// Driver Code
int main()
{
    // Function Call
    Queue();
    return 0;
}

The first and last elements of the queue are 101 101
The first and last elements of the queue are 101 102
The first and last elements of the queue are 101 103
The first and last elements of the queue are 101 104
The first and last elements of the queue are 101 105
The Element popped following FIFO is 101
The Element popped following FIFO is 102
The Element popped following FIFO is 103
The Element popped following FIFO is 104
The Element popped following FIFO is 105

Vector is the implementation of a dynamic array; its storage is allocated on the heap.

Syntax: vector<int> A

2-dimensional vectors can also be declared using the below syntax:

Syntax: vector<vector<int>> A

The time complexity of the main functions is given below:

operator[] (random access): O(1) time
push_back(): amortized O(1) time
pop_back(): O(1) time
insert(), erase(): O(n) time
size(), empty(): O(1) time
clear(): O(n) time

Below is the C++ program illustrating vector:

C++

// C++ program illustrating vector
#include <bits/stdc++.h>
using namespace std;

// Function displaying values
void display(vector<int> v)
{
    for (int i = 0; i < v.size(); i++) {
        cout << v[i] << " ";
    }
}

// Function illustrating vector
void Vector()
{
    int i;
    vector<int> v;

    for (i = 100; i < 106; i++) {

        // Inserts an element at the
        // back of the vector
        v.push_back(i);
    }
    cout << "The vector after push_back is :"
         << v.size() << endl;
    cout << "The vector now is :";
    display(v);
    cout << endl;

    // Deletes the element at the back
    v.pop_back();
    cout << "The vector after pop_back is :"
         << v.size() << endl;
    cout << "The vector now is :";
    display(v);
    cout << endl;

    // Reverses the vector
    reverse(v.begin(), v.end());
    cout << "The vector after reversing is :"
         << v.size() << endl;
    cout << "The vector now is :";
    display(v);
    cout << endl;

    // Sorts the vector
    sort(v.begin(), v.end());
    cout << "The vector after sorting is :"
         << v.size() << endl;
    cout << "The vector now is :";
    display(v);
    cout << endl;

    // Erases the element at index 2
    v.erase(v.begin() + 2);
    cout << "The size of vector after erasing at position 3 is :"
         << v.size() << endl;
    cout << "The vector now is :";
    display(v);
    cout << endl;

    // Deletes the contents of the
    // vector completely
    v.clear();
    cout << "The size of the vector after clearing is :"
         << v.size() << endl;
    cout << "The vector now is :";
    display(v);
    cout << endl;
}

// Driver Code
int main()
{
    // Function Call
    Vector();
    return 0;
}

The vector after push_back is :6
The vector now is :100 101 102 103 104 105 
The vector after pop_back is :5
The vector now is :100 101 102 103 104 
The vector after reversing is :5
The vector now is :104 103 102 101 100 
The vector after sorting is :5
The vector now is :100 101 102 103 104 
The size of vector after erasing at position 3 is :4
The vector now is :100 101 103 104 
The size of the vector after clearing is :0
The vector now is :
p-value Basics with Python Code. What is p-value? It is the probability... | by Sujeewa Kumaratunga PhD | Towards Data Science
What is p-value? It is the probability that you will obtain a test result given an actual distribution. Or in an A/B test setting, it is the probability that we measure something, like an average order value, given an initial hypothesis, like ‘we believe the average order value is $170’. The p-value is answering the question with a certain confidence level. You hear people often say, “I am 90% confident I will get that job” or “I am 99.99% confident I can not sing like Freddie Mercury” or something like that involving confidence level. This is sort of the same thing, except it will be an actual quantitative number. We saw in the Central Limit Theorem post how, if we draw samples enough times the distribution of those sample means will be a normal distribution with the mean of that normal distribution approaching the population mean. So then a natural question to ask is how likely is it that we would observe a certain sample mean. Let’s look at an example. Let’s assume, like before, the average order value of our company’s customers is $170. That is, our population mean is $170 (and the standard deviation is $5). So our hypothesis is that our entire customer base has an average order value of $170. Now we want to test our hypothesis. In general we can not test our entire population, so we resort to testing many smaller samples of the population. Now if we draw 10,000 samples, then the distribution of those sample means will look like this : So you see that any given sample’s mean could vary anywhere from $150 to $190. Then, let’s say we took a sample group of people and get their mean order value and that turned out to be $183. And we want to know the probability that this could happen. But in statistics we ask, what is the probability that a sample mean could be $183 or above, given the hypothesis that the population mean is $170. 
This can be calculated by counting all the numbers that are above 183 in the above plot and dividing it by the total number of sample draws, which is 10000. Use the code at the end with:

pvalue_101(170.0, 5.0, 10000, 183.0)

Percentage of numbers larger than 183.0 is 0.35%.

It is a tiny percentage, but it is not zero. It would be wrong for you to reject the hypothesis that the population mean is $170, since we clearly derived this sample mean from that population distribution. Similarly, you could ask: what is the probability of getting a sample mean that differs from the population mean of $170 by more than $13? That is, what is the probability of getting a sample mean that is less than $157 or more than $183?

Percentage of numbers further than the population mean of 170.0 by +/-13.0 is 0.77%.

You see this is about double the percentage for the sample mean being only larger than $183. This is because a normal distribution is symmetrical around the mean. It is important to understand this small but non-zero probability. Even if we draw samples from the exact same population, there is a non-zero chance that the sample mean will vary by quite a lot from our population mean. So, if we run the A/B test for just one day, which is the equivalent of drawing one sample, we can not make a decision on the population mean. What we can do is estimate the probability that we would get this sample mean, given this population. Now, instead, assume you wanted to know the reverse: how far from the population mean do 95% of the sample means lie? That is when the 68-95-99.7 rule comes in handy. It says:

We are 68.2% confident that if we draw many random samples, the sample means will be between μ +/- σ; in our case 170 +/- 5, i.e., 68.2% of the time sample means will be between $165 and $175.
95.4% of the time our sample means will be between μ +/- 2σ, i.e., between $160 and $180.
99.7% of the time our sample means will be between μ +/- 3σ, i.e., between $155 and $185.
In business, people often talk about the p-value. The p-value is closely related to the above rule: it measures the probability that a sample mean would be a certain value or more, given the population mean and standard deviation. The p-value gives us the probability of observing what we observed, given that a hypothesis is true. It does not tell us the probability that the null hypothesis is true. In our example:

A p-value of 0.35% gives the probability that we get a sample mean that is more than $183, given the hypothesis that the population mean is $170.
A p-value of 0.77% gives the probability that we get a sample mean that is more than $183 or less than $157, given the same hypothesis.
It does not give us the probability of the hypothesis being true. In fact, it would be dangerous to reject the hypothesis that the population mean is $170, because we clearly got the sample mean from a population whose mean is $170.

Often people start an experiment saying they want a 95% confidence level, which means they are expecting a p-value threshold of 5% (which comes from 100% - 95%).
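The simulated percentages above can be cross-checked analytically. Under the article's assumption that sample means follow a normal distribution with mean $170 and standard deviation $5, the one-sided tail probability beyond $183 is the normal survival function at z = (183 - 170)/5 = 2.6, and the two-sided probability simply doubles it. A standard-library sketch; the analytic values (about 0.47% and 0.93%) differ slightly from the simulated 0.35% and 0.77% because the simulation uses a finite number of draws:

```python
import math

def normal_tail(mu, sigma, x):
    """One-sided P(X > x) for X ~ Normal(mu, sigma),
    computed via the complementary error function."""
    z = (x - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

one_sided = normal_tail(170.0, 5.0, 183.0)   # P(sample mean > 183)
two_sided = 2 * one_sided                    # P(|sample mean - 170| > 13)
print(round(one_sided * 100, 2), '%')        # ~0.47 %
print(round(two_sided * 100, 2), '%')        # ~0.93 %
```

This is only a sanity check on the simulation, not a replacement for it; with 10000 draws the counted tail fraction can easily deviate from the closed-form value by this much.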
Then they would take the above sample mean of $183, take its p-value of 0.35% and say: "Since 0.35% < 5%, our sample mean is further in the tails than the 5% that we have allowed. So we reject the hypothesis that the population mean is $170". We know this is a wrong conclusion, because we used a population mean of $170 to generate some, albeit few, sample means above $183 (see graph above). You can say nothing about whether the hypothesis is true or not. In fact, you can not even reject the hypothesis with just this one data point. It is a difficult lesson to drive home. But in general, in life, we are not able to tell the probability that any hypothesis is true; all we can say is something about the probability of an observation given someone's hypothesis. You can generate the plots and p-values in this post with the following Python code:

import numpy as np
import matplotlib.pyplot as plt

def pvalue_101(mu, sigma, samp_size, samp_mean=0, deltam=0):
    np.random.seed(1234)
    s1 = np.random.normal(mu, sigma, samp_size)
    if samp_mean > 0:
        print(len(s1[s1 > samp_mean]))
        outliers = float(len(s1[s1 > samp_mean]) * 100) / float(len(s1))
        print('Percentage of numbers larger than {} is {}%'.format(samp_mean, outliers))
        if deltam == 0:
            deltam = abs(mu - samp_mean)
    if deltam > 0:
        outliers = (float(len(s1[s1 > (mu + deltam)]))
                    + float(len(s1[s1 < (mu - deltam)]))) * 100.0 / float(len(s1))
        print('Percentage of numbers further than the population mean of {} by +/-{} is {}%'.format(mu, deltam, outliers))
    fig, ax = plt.subplots(figsize=(8, 8))
    fig.suptitle('Normal Distribution: population_mean={}'.format(mu))
    plt.hist(s1)
    plt.axvline(x=mu + deltam, color='red')
    plt.axvline(x=mu - deltam, color='green')
    plt.show()
Java Collections emptyMap() Method with Examples - GeeksforGeeks
03 Jan, 2022

The emptyMap() method of Java Collections is used to return an empty map whose data cannot be changed, i.e., it is immutable.

Syntax:

public static final <Key,Value> Map<Key,Value> emptyMap()

where Key is the key element and Value is the value element.

Parameters: This method does not accept any parameters.
Return Type: This will return an empty map that is immutable.
Exceptions: It will not raise any exception.

Example 1:

Java

// Java program to create an empty map
import java.util.*;

public class GFG {
    // main method
    public static void main(String[] args)
    {
        // create an empty map
        Map<String, String> data = Collections.emptyMap();
        System.out.println(data);
    }
}

{}

Example 2:

Java

// Java program to create an
// empty map and add elements.
// We will get an error because
// the method will work on only
// an empty map
import java.util.*;

public class GFG {
    // main method
    public static void main(String[] args)
    {
        // create an empty map
        Map<String, String> data = Collections.emptyMap();

        // add element
        data.put("1", "python/R");
        System.out.println(data);
    }
}

Output:

Exception in thread "main" java.lang.UnsupportedOperationException
    at java.util.AbstractMap.put(AbstractMap.java:209)
    at GFG.main(GFG.java:8)
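Because the returned map is immutable, read operations are always safe while every write throws UnsupportedOperationException. A small sketch; the key and value strings are made up for illustration:

```java
import java.util.Collections;
import java.util.Map;

public class EmptyMapDemo {
    public static void main(String[] args) {
        Map<String, String> m = Collections.emptyMap();

        // Reads are fine on the immutable empty map
        System.out.println(m.size());                    // 0
        System.out.println(m.getOrDefault("k", "none")); // none

        // Any write throws UnsupportedOperationException
        try {
            m.put("k", "v");
        } catch (UnsupportedOperationException e) {
            System.out.println("put rejected");
        }
    }
}
```

Catching the exception as above (or simply never writing to the map) is how such a shared immutable empty map is normally used, e.g. as a safe default return value.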
How to Fine-Tune BERT Transformer with spaCy 3 | by Walid Amamou | Towards Data Science
Since the seminal paper “Attention is all you need” of Vaswani et al, Transformer models have become by far the state of the art in NLP technology. With applications ranging from NER, Text Classification, Question Answering or text generation, the applications of this amazing technology are limitless. More specifically, BERT — which stands for Bidirectional Encoder Representations from Transformers— leverages the transformer architecture in a novel way. For example, BERT analyses both sides of the sentence with a randomly masked word to make a prediction. In addition to predicting the masked token, BERT predicts the sequence of the sentences by adding a classification token [CLS] at the beginning of the first sentence and tries to predict if the second sentence follows the first one by adding a separation token[SEP] between the two sentences. In this tutorial, I will show you how to fine-tune a BERT model to predict entities such as skills, diploma, diploma major and experience in software job descriptions. If you are interested to go a step further and extract relations between entities, please read our article on how to perform joint entities and relation extraction using transformers. Fine tuning transformers requires a powerful GPU with parallel processing. For this we use Google Colab since it provides freely available servers with GPUs. For this tutorial, we will use the newly released spaCy 3 library to fine tune our transformer. Below is a step-by-step guide on how to fine-tune the BERT model on spaCy 3 (video tutorial here). The code along with the necessary files are available in the Github repo. To fine-tune BERT using spaCy 3, we need to provide training and dev data in the spaCy 3 JSON format (see here) which will be then converted to a .spacy binary file. We will provide the data in IOB format contained in a TSV file then convert to spaCy JSON format. 
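IOB files of the kind used here carry one whitespace-separated token and tag per line, with blank lines marking sentence boundaries. As a quick illustration of the format, here is a minimal parser; the helper name read_iob is made up for this sketch and is not part of spaCy:

```python
def read_iob(lines):
    """Parse IOB lines of the form '<token> <TAG>' into
    (token, tag) pairs; blank lines separate sentences."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:  # sentence boundary
            continue
        token, tag = line.rsplit(None, 1)
        pairs.append((token, tag))
    return pairs

sample = ["MS B-DIPLOMA", "in O", "electrical B-DIPLOMA_MAJOR"]
print(read_iob(sample))
# [('MS', 'B-DIPLOMA'), ('in', 'O'), ('electrical', 'B-DIPLOMA_MAJOR')]
```

In practice the spacy convert command shown below does this parsing for you; the sketch is only to make the file layout concrete.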
I have only labeled 120 job descriptions with entities such as skills, diploma, diploma major, and experience for the training dataset and about 70 job descriptions for the dev dataset. In this tutorial, I used the UBIAI annotation tool because it comes with extensive features such as:

ML auto-annotation
Dictionary, regex, and rule-based auto-annotation
Team collaboration to share annotation tasks
Direct annotation export to IOB format

Using the regular expression feature in UBIAI, I have pre-annotated all the experience mentions that follow the pattern “\d.*\+.*”, such as “5 + years of experience in C++”. I then uploaded a csv dictionary containing all the software languages and assigned the entity skills. The pre-annotation saves a lot of time and will help you minimize manual annotation. For more information about the UBIAI annotation tool, please visit the documentation page and my previous post “Introducing UBIAI: Easy-to-Use Text Annotation for NLP Applications”.

The exported annotation will look like this:

MS B-DIPLOMA
in O
electrical B-DIPLOMA_MAJOR
engineering I-DIPLOMA_MAJOR
or O
computer B-DIPLOMA_MAJOR
engineering I-DIPLOMA_MAJOR
. O
5+ B-EXPERIENCE
years I-EXPERIENCE
of I-EXPERIENCE
industry I-EXPERIENCE
experience I-EXPERIENCE
. I-EXPERIENCE
Familiar O
with O
storage B-SKILLS
server I-SKILLS
architectures I-SKILLS
with O
HDD B-SKILLS

In order to convert from IOB to JSON (see documentation here), we use the spaCy 3 command:

!python -m spacy convert drive/MyDrive/train_set_bert.tsv ./ -t json -n 1 -c iob
!python -m spacy convert drive/MyDrive/dev_set_bert.tsv ./ -t json -n 1 -c iob

After conversion to spaCy 3 JSON, we need to convert both the training and dev JSON files to .spacy binary files using this command (update the file path with your own):

!python -m spacy convert drive/MyDrive/train_set_bert.json ./ -t spacy
!python -m spacy convert drive/MyDrive/dev_set_bert.json ./ -t spacy

Open a new Google Colab project and make sure to select GPU as hardware accelerator in the notebook settings. In order to accelerate the training process, we need to run parallel processing on our GPU. To this end we install the NVIDIA 9.2 cuda library:

!wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub
!apt-get update
!apt-get install cuda-9.2

To check the correct cuda compiler is installed, run:

!nvcc --version

Install the spacy library and spacy transformer pipeline:

pip install -U spacy
!python -m spacy download en_core_web_trf

Next, we install the pytorch machine learning library that is configured for cuda 9.2:

pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

After pytorch install, we need to install spacy transformers tuned for cuda 9.2 and change the CUDA_PATH and LD_LIBRARY_PATH as below.
Finally, install the cupy library which is the equivalent of numpy library but for GPU: !pip install -U spacy[cuda92,transformers]!export CUDA_PATH=”/usr/local/cuda-9.2"!export LD_LIBRARY_PATH=$CUDA_PATH/lib64:$LD_LIBRARY_PATH!pip install cupy SpaCy 3 uses a config file config.cfg that contains all the model training components to train the model. In spaCy training page, you can select the language of the model (English in this tutorial), the component (NER) and hardware (GPU) to use and download the config file template. The only thing we need to do is to fill out the path for the train and dev .spacy files. Once done, we upload the file to Google Colab. Now we need to auto-fill the config file with the rest of the parameters that the BERT model will need; all you have to do is run this command: !python -m spacy init fill-config drive/MyDrive/config.cfg drive/MyDrive/config_spacy.cfg I suggest to debug your config file in case there is an error: !python -m spacy debug data drive/MyDrive/config.cfg We are finally ready to train the BERT model! Just run this command and the training should start: !python -m spacy train -g 0 drive/MyDrive/config.cfg — output ./ P.S: if you get the error cupy_backends.cuda.api.driver.CUDADriverError: CUDA_ERROR_INVALID_PTX: a PTX JIT compilation failed, just uninstall cupy and install it again and it should fix the issue. If everything went correctly, you should start seeing the model scores and losses being updated: At the end of the training, the model will be saved under folder model-best. 
The model scores are located in meta.json file inside the model-best folder: “performance”:{“ents_per_type”:{“DIPLOMA”:{“p”:0.5584415584,“r”:0.6417910448,“f”:0.5972222222},“SKILLS”:{“p”:0.6796805679,“r”:0.6742957746,“f”:0.6769774635},“DIPLOMA_MAJOR”:{“p”:0.8666666667,“r”:0.7844827586,“f”:0.8235294118},“EXPERIENCE”:{“p”:0.4831460674,“r”:0.3233082707,“f”:0.3873873874}},“ents_f”:0.661754386,“ents_p”:0.6745350501,“ents_r”:0.6494490358,“transformer_loss”:1408.9692438675,“ner_loss”:1269.1254348834} The scores are certainly well below a production model level because of the limited training dataset, but it’s worth checking its performance on a sample job description. To test the model on a sample text, we need to load the model and run it on our text: nlp = spacy.load(“./model-best”)text = ['''Qualifications- A thorough understanding of C# and .NET Core- Knowledge of good database design and usage- An understanding of NoSQL principles- Excellent problem solving and critical thinking skills- Curious about new technologies- Experience building cloud hosted, scalable web services- Azure experience is a plusRequirements- Bachelor's degree in Computer Science or related field(Equivalent experience can substitute for earned educational qualifications)- Minimum 4 years experience with C# and .NET- Minimum 4 years overall experience in developing commercial software''']for doc in nlp.pipe(text, disable=["tagger", "parser"]): print([(ent.text, ent.label_) for ent in doc.ents]) Below are the entities extracted from our sample job description: [("C", "SKILLS"),("#", "SKILLS"),(".NET Core", "SKILLS"),("database design", "SKILLS"),("usage", "SKILLS"),("NoSQL", "SKILLS"),("problem solving", "SKILLS"),("critical thinking", "SKILLS"),("Azure", "SKILLS"),("Bachelor", "DIPLOMA"),("'s", "DIPLOMA"),("Computer Science", "DIPLOMA_MAJOR"),("4 years experience with C# and .NET\n-", "EXPERIENCE"),("4 years overall experience in developing commercial software\n\n", "EXPERIENCE")] Pretty impressive 
for only using 120 training documents! We were able to extract most of the skills, diploma, diploma major, and experience correctly. With more training data, the model would certainly improve further and yield higher scores. With only a few lines of code, we have successfully trained a functional NER transformer model thanks to the amazing spaCy 3 library. Go ahead and try it out on your use case and please share your results. Note, you can use UBIAI annotation tool to label your data, we offer free 14 days trial. As always, if you have any comment, please leave a note below or email at admin@ubiai.tools! Follow us on Twitter @UBIAI5
Python | os.environ object - GeeksforGeeks
13 Apr, 2022 OS module in Python provides functions for interacting with the operating system. OS comes under Python’s standard utility modules. This module provides a portable way of using operating system dependent functionality. os.environ in Python is a mapping object that represents the user’s environmental variables. It returns a dictionary having user’s environmental variable as key and their values as value. os.environ behaves like a python dictionary, so all the common dictionary operations like get and set can be performed. We can also modify os.environ but any changes will be effective only for the current process where it was assigned and it will not change the value permanently. Syntax: os.environ Parameter: It is a non-callable object. Hence, no parameter is required Return Type: This returns a dictionary representing the user’s environmental variables Code #1: Use of os.environ to get access of environment variables # Python program to explain os.environ object # importing os module import osimport pprint # Get the list of user's# environment variablesenv_var = os.environ # Print the list of user's# environment variablesprint("User's Environment variable:")pprint.pprint(dict(env_var), width = 1) {'CLUTTER_IM_MODULE': 'xim', 'COLORTERM': 'truecolor', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'DESKTOP_SESSION': 'ubuntu', 'DISPLAY': ':0', 'GDMSESSION': 'ubuntu', 'GJS_DEBUG_OUTPUT': 'stderr', 'GJS_DEBUG_TOPICS': 'JS ' 'ERROR;JS ' 'LOG', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'GTK_IM_MODULE': 'ibus', 'HOME': '/home/ihritik', 'IM_CONFIG_PHASE': '2', 'JAVA_HOME': '/opt/jdk-10.0.1', 'JOURNAL_STREAM': '9:28586', 'JRE_HOME': '/opt/jdk-10.0.1/jre', 'LANG': 'en_IN', 'LANGUAGE': 'en_IN:en', 'LESSCLOSE': '/usr/bin/lesspipe ' '%s ' '%s', 'LESSOPEN': '| ' '/usr/bin/lesspipe ' '%s', 'LOGNAME': 'ihritik', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games: 
/usr/local/games:/snap/bin:/usr/local/java/jdk-10.0.1/bin: /usr/local/java/jdk-10.0.1/jre/bin:/opt/jdk-10.0.1/bin:/opt/jdk-10.0.1/jre/bin', 'PWD': '/home/ihritik', 'QT4_IM_MODULE': 'xim', 'QT_IM_MODULE': 'ibus', 'SESSION_MANAGER': 'local/hritik:@/tmp/.ICE-unix/1127, unix/hritik:/tmp/.ICE-unix/1127', 'SHELL': '/bin/bash', 'SHLVL': '2', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'TERM': 'xterm-256color', 'TEXTDOMAIN': 'im-config', 'TEXTDOMAINDIR': '/usr/share/locale/', 'USER': 'ihritik', 'USERNAME': 'ihritik', 'VTE_VERSION': '4804', 'WAYLAND_DISPLAY': 'wayland-0', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/etc/xdg', 'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME', 'XDG_MENU_PREFIX': 'gnome-', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XDG_SEAT': 'seat0', 'XDG_SESSION_DESKTOP': 'ubuntu', 'XDG_SESSION_ID': '2', 'XDG_SESSION_TYPE': 'wayland', 'XDG_VTNR': '2', 'XMODIFIERS': '@im=ibus', '_': '/usr/bin/python3'} Code #2: Accessing a particular environment variable # Python program to explain os.environ object # importing os module import os # Get the value of# 'HOME' environment variablehome = os.environ['HOME'] # Print the value of# 'HOME' environment variableprint("HOME:", home) # Get the value of# 'JAVA_HOME' environment variable# using get operation of dictionaryjava_home = os.environ.get('JAVA_HOME') # Print the value of# 'JAVA_HOME' environment variableprint("JAVA_HOME:", java_home) HOME: /home/ihritik JAVA_HOME: /opt/jdk-10.0.1 Code #3: Modifying a environment variable # Python program to explain os.environ object # importing os module import os # Print the value of# 'JAVA_HOME' environment variable print("JAVA_HOME:", os.environ['JAVA_HOME']) # Modify the value of# 'JAVA_HOME' environment variable os.environ['JAVA_HOME'] = '/home / ihritik / jdk-10.0.1' # Print the modified value of# 'JAVA_HOME' environment variableprint("Modified JAVA_HOME:", os.environ['JAVA_HOME']) JAVA_HOME: /opt/jdk-10.0.1 Modified JAVA_HOME: /home/ihritik/jdk-10.0.1 Code #4: Adding a new environment 
variable # Python program to explain os.environ object # importing os module import os # Add a new environment variable os.environ['GeeksForGeeks'] = 'www.geeksforgeeks.org' # Get the value of# Added environment variable print("GeeksForGeeks:", os.environ['GeeksForGeeks']) GeeksForGeeks: www.geeksforgeeks.org Code #5: Accessing a environment variable which does not exists # Python program to explain os.environ object # importing os module import os # Print the value of# 'MY_HOME' environment variable print("MY_HOME:", os.environ['MY_HOME'] # If the key does not exists# it will produce an error Traceback (most recent call last): File "osenviron.py", line 8, in print("MY_HOME:", os.environ['MY_HOME']) File "/usr/lib/python3.6/os.py", line 669, in __getitem__ raise KeyError(key) from None KeyError: 'MY_HOME' Code #6: Handling error while Accessing a environment variable which does not exists # Python program to explain os.environ object # importing os module import os # Method 1# Print the value of# 'MY_HOME' environment variable print("MY_HOME:", os.environ.get('MY_HOME', "Environment variable does not exist")) # Method 2try: print("MY_HOME:", os.environ['MY_HOME'])except KeyError: print("Environment variable does not exist") MY_HOME: Environment variable does not exist Environment variable does not exist aiyesbolatova python-os-module Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Python Dictionary Read a file line by line in Python Enumerate() in Python How to Install PIP on Windows ? Iterate over a list in Python Different ways to create Pandas Dataframe Python String | replace() Create a Pandas DataFrame from Lists Python program to convert a list to string Reading and Writing to text files in Python
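The codes above cover reading, modifying, and adding environment variables; removing one uses the same dictionary interface. A short sketch; the variable name GFG_DEMO is made up for illustration:

```python
import os

# Add a temporary variable (name is illustrative)
os.environ['GFG_DEMO'] = 'temp-value'

# pop() unsets the variable and returns its value;
# the default argument avoids a KeyError if it is absent
removed = os.environ.pop('GFG_DEMO', None)
print("Removed value:", removed)
print("Still present:", 'GFG_DEMO' in os.environ)

# del os.environ['GFG_DEMO'] would also work, but it
# raises KeyError if the variable does not exist
```

As with the modifications shown earlier, the removal only affects the current process and any children it spawns afterwards, not the shell that launched Python.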
Google Charts - Stacked bar chart
Following is an example of a stacked bar chart. We've already seen the configuration used to draw this chart in the Google Charts Configuration Syntax chapter. So, let's see the complete example. We've used the isStacked configuration to show a stacked chart.

// Set chart options
var options = { isStacked: true };

googlecharts_bar_stacked.htm

<html>
   <head>
      <title>Google Charts Tutorial</title>
      <script type = "text/javascript" src = "https://www.gstatic.com/charts/loader.js">
      </script>
      <script type = "text/javascript">
         google.charts.load('current', {packages: ['corechart']});
      </script>
   </head>

   <body>
      <div id = "container" style = "width: 550px; height: 400px; margin: 0 auto">
      </div>
      <script language = "JavaScript">
         function drawChart() {
            // Define the chart to be drawn.
            var data = google.visualization.arrayToDataTable([
               ['Year', 'Asia', 'Europe'],
               ['2012', 900, 390],
               ['2013', 1000, 400],
               ['2014', 1170, 440],
               ['2015', 1250, 480],
               ['2016', 1530, 540]
            ]);
            var options = {title: 'Population (in millions)', isStacked: true};

            // Instantiate and draw the chart.
            var chart = new google.visualization.BarChart(document.getElementById('container'));
            chart.draw(data, options);
         }
         google.charts.setOnLoadCallback(drawChart);
      </script>
   </body>
</html>

Verify the result.
C++ Arrays (Sum of array) | Set 1 | Practice | GeeksforGeeks
Given an array of N integers. Your task is to print the sum of all of the integers. Example 1: Input: 4 1 2 3 4 Output: 10 Example 2: Input: 6 5 8 3 10 22 45 Output: 93 Your Task: You don't need to read input or print anything. Your task is to complete the function getSum() which takes the array A[] and its size N as inputs and returns the sum of array in a new line. Expected Time Complexity: O(N) Expected Auxiliary Space: O(1) Constraints: 1 ≤ N ≤ 106 0 ≤ Arr[i] ≤ 200 0 laveshgoyal63in 20 minutes class Solution{ public: int getSum(int a[], int n) { // Your code goes here int sum=0; for(int i=0;i<n;i++) { sum=sum+a[i]; } return sum; } }; 0 visionsameer392 days ago 1) using for loopsclass Solution: def getSum(self, arr, n): s=0 for i in range(n): s+=arr[i] return s 2) using inbuilt function of sum. return sum(arr) 0 tanmeetsinghreel1 week ago //Python3 def getSum(self, arr, n): count=0 for i in range(n): count+=arr[i] return(count) 0 rsbly7300952 weeks ago class Solution { public long getSum(long a[], long n) { long sum = 0; for(int i = 0 ;i<n;i++){ sum = a[i] + sum; } return sum; }} 0 sauarbh1472 This comment was deleted. 0 vinuthah1270 This comment was deleted. -2 vikasarya18893 weeks ago class Solution{ public: int getSum(int a[], int n) { // Your code goes here int sum =0; for(int i =0;i<n;i++){ sum =sum +a[i]; } return sum; } }; 0 kmg9tpqt8rvlpr74o0f2r8la5fgj004mmp0hsy7a1 month ago class solution{ public: int getSum(int a[], int n) { int sum =0; for(int i = 0; i<n; i++) { sum = sum +a[i]; } return sum ; } // Your code goes 0 shashwatchaurasia831 month ago class Solution{ public: int getSum(int a[], int n) { int sum=0; for(int i=0;i<n;i++) { sum=sum+a[i]; } return sum; } }; 0 rishiyvofficial1 month ago int getSum(int a[], int n) { // Your code goes here int sum=0; for(int i=0; i<n; i++) sum+=a[i]; return sum; } We strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial? 
Best yarn Commands to use for being productive - GeeksforGeeks
10 Sep, 2021

Overview :Yarn is an open-source package manager for JavaScript, developed by Facebook as a fast and reliable alternative to npm. It installs and manages a project's dependencies as declared in its package.json file. In this article, we will discuss some popular yarn commands to use for being a productive software developer. Let's discuss them one by one.

Command-1 :yarn Command – Running yarn with no arguments installs all of the dependencies listed in the package.json file into the local node_modules folder.

yarn

Example – Installing dependencies with yarn in the project

Command-2 :yarn add Command – Specifying a package name with yarn add installs that package so that it can be used in our project. yarn add also takes parameters that can be used to specify the exact version of the package to be installed in the current project being developed.

Syntax –
yarn add <package name>

yarn add lodash

Alternative – We can also install lodash globally as follows.

yarn global add lodash

Example – Image shows how to install the lodash package in the project

Command-3 :yarn remove Command – Removes the package given as a parameter from your direct dependencies, updating your package.json and yarn.lock files in the process. Suppose you have the package lodash installed; you can remove it using the following command.

Syntax –
yarn remove <package name>

yarn remove lodash

Example – Image shows the command for removing the lodash package

Command-4 :yarn autoclean Command – This command frees up space by removing unnecessary files and folders from the dependencies.

Syntax –
yarn autoclean <parameters>

yarn autoclean --force

Example – Autoclean command used with yarn

Command-5 :yarn install Command – Installs all the dependencies listed within package.json into the local node_modules folder.
Syntax –
yarn install <parameters>

Example – Suppose we have developed a project, pushed it to GitHub, and are now cloning it on our machine. We can install all of the required dependencies for the project with the following command in the terminal:

yarn install

Yarn install command; this can also update packages that are out of their latest version

Command-6 :yarn help Command – This command lists the variety of commands that are available to be used with yarn, along with a short description of each.

Syntax –
yarn help <parameters>

yarn help

Output :

Yarn help command
More yarn help commands

References :

Learn more about yarn and npm here
Refer to this link to view the official documentation of yarn
Create database view in SAP ABAP
In ABAP, you can make use of the function modules DDIF_VIEW_PUT and DDIF_VIEW_ACTIVATE for view creation and activation. All table parameters should be defined correctly, otherwise the creation process can end in an error.

DDIF_VIEW_PUT − Interface for writing a view in the ABAP Dictionary. You can refer to the below link for more details −

http://www.se80.co.uk/sapfms/d/ddif/ddif_view_put.htm

CALL FUNCTION 'DDIF_VIEW_PUT'    " DD: Interface for writing a view in the ABAP Dictionary
  EXPORTING
    name =             " ddname  Name of the view to be written
*   dd25v_wa = ' '     " dd25v   View header
*   dd09l_wa = ' '     " dd09v   Technical settings of the view
* TABLES
*   dd26v_tab =        " dd26v   Basis tables of the view
*   dd27p_tab =        " dd27p   View fields
*   dd28j_tab =        " dd28j   Join conditions of the view
*   dd28v_tab =        " dd28v   Selection conditions of the view
  EXCEPTIONS
    VIEW_NOT_FOUND    = 1   " Header of the view could not be found
    NAME_INCONSISTENT = 2   " Name in Sources Inconsistent with NAME
    VIEW_INCONSISTENT = 3   " Inconsistent Sources
    PUT_FAILURE       = 4   " Write Error (ROLLBACK Recommended)
    PUT_REFUSED       = 5   " Write not Allowed
  .                         " DDIF_VIEW_PUT

DDIF_VIEW_ACTIVATE − Interface for activating a view

  EXPORTING
    name =             " ddname    Name of view to be activated
*   auth_chk = 'X'     " ddbool_d  'X': Perform authorization check for DB operations
*   prid = -1          " sy-tabix  ID for Log Writer
  IMPORTING
    rc =               " sy-subrc  Result of Activation
  EXCEPTIONS
    NOT_FOUND   = 1    " View not found
    PUT_FAILURE = 2    " View could not be written
  .                    " DDIF_VIEW_ACTIVATE

Both of these function modules can be used in transaction SE80 or SE37. You can use both transactions to display the SAP function module documentation available within your SAP system.

You can also create a view using transaction code SE11 in SAP ABAP as below:

You have to maintain and activate the view. Once activated, you can view the contents using the standard transaction code SE16 as shown below −
Rexx - DataType
This method returns the value of ‘NUM’ if the input is a valid number, else it will return the value of ‘CHAR’. You can also specify whether you want to compare the input value to a NUM or CHAR value. In that case, the value returned will be either 1 or 0 depending on the result.

Syntax –

DATATYPE(String,type)

String − The string value for which the datatype needs to be determined.

Type − Optional type against which the datatype needs to be compared.

Example –

/* Main program */
say DATATYPE(" 12345 ")
say DATATYPE("")
say DATATYPE("12345*")
say DATATYPE("123.4","N")
say DATATYPE("123.4","W")

When we run the above program, we will get the following result.

NUM
CHAR
CHAR
1
0
Distribute N candies among K people - GeeksforGeeks
25 Nov, 2021

Given N candies and K people. In the first turn, the first person gets 1 candy, the second gets 2 candies, and so on till the K-th person. In the next turn, the first person gets K+1 candies, the second person gets K+2 candies, and so on. If the number of candies left is less than the required number of candies at any point, then the person receives all of the remaining candies. The task is to find the total number of candies every person has at the end.

Examples:

Input: N = 7, K = 4
Output: 1 2 3 1
At the first turn, the fourth person has to be given 4 candies, but there is only 1 left, hence he takes one only.

Input: N = 10, K = 3
Output: 5 2 3
At the second turn the first one receives 4 and then we have no more candies left.

A naive approach is to iterate over every turn and distribute candies accordingly till the candies are finished.

Time complexity: O(Number of distributions)

A better approach is to perform every turn in O(1) by calculating the sum of natural numbers up to the last term of the series, which will be (turns*k), and subtracting the sum of natural numbers up to the last term of the previous series, which is (turns-1)*k. Keep doing this till the sum is less than N; once it exceeds N, distribute the remaining candies individually in the given way till possible. We call a turn completed if every person gets the desired number of candies he is to get in that turn.
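The naive simulation described above can be sketched in a few lines of Python (an illustrative helper, not part of the original article; it returns the list instead of printing it):

```python
def candies_naive(n, k):
    """Simulate the distribution turn by turn.

    give is the number of candies the next person should receive;
    it grows by 1 for every person served, across turns.
    """
    arr = [0] * k         # candies held by each of the k people
    give = 1              # candies owed to the next person
    while n > 0:
        for i in range(k):
            take = min(give, n)   # person i gets fewer if candies run short
            arr[i] += take
            n -= take
            give += 1
            if n == 0:
                break
    return arr

print(candies_naive(7, 4))    # [1, 2, 3, 1]
print(candies_naive(10, 3))   # [5, 2, 3]
```

This runs in O(number of distributions) time, which is exactly the cost the better approaches below avoid.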
Below is the implementation of the above approach:

C++

// C++ code for better approach
// to distribute candies
#include <bits/stdc++.h>
using namespace std;

// Function to find out the number of
// candies every person received
void candies(int n, int k)
{
    // Count number of complete turns
    int count = 0;

    // Get the last term
    int ind = 1;

    // Stores the number of candies
    int arr[k];
    memset(arr, 0, sizeof(arr));

    while (n) {

        // Last term of last and
        // current series
        int f1 = (ind - 1) * k;
        int f2 = ind * k;

        // Sum of current and last series
        int sum1 = (f1 * (f1 + 1)) / 2;
        int sum2 = (f2 * (f2 + 1)) / 2;

        // Sum of current series only
        int res = sum2 - sum1;

        // If sum of current is less than N
        if (res <= n) {
            count++;
            n -= res;
            ind++;
        }
        else // Individually distribute
        {
            int i = 0;

            // First term
            int term = ((ind - 1) * k) + 1;

            // Distribute candies till there
            while (n > 0) {

                // Candies available
                if (term <= n) {
                    arr[i++] = term;
                    n -= term;
                    term++;
                }
                else // Not available
                {
                    arr[i++] = n;
                    n = 0;
                }
            }
        }
    }

    // Count the total candies
    for (int i = 0; i < k; i++)
        arr[i] += (count * (i + 1))
                  + (k * (count * (count - 1)) / 2);

    // Print the total candies
    for (int i = 0; i < k; i++)
        cout << arr[i] << " ";
}

// Driver Code
int main()
{
    int n = 10, k = 3;
    candies(n, k);
    return 0;
}

Java

// Java code for better approach
// to distribute candies
class GFG {

    // Function to find out the number of
    // candies every person received
    static void candies(int n, int k)
    {
        int[] arr = new int[k];
        int j = 0;
        while (n > 0) {
            for (int i = 0; i < k; i++) {
                j++;
                if (n <= 0) {
                    break;
                }
                else {
                    if (j < n) {
                        arr[i] = arr[i] + j;
                    }
                    else {
                        arr[i] = arr[i] + n;
                    }
                    n = n - j;
                }
            }
        }
        for (int i : arr) {
            System.out.print(i + " ");
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 10, k = 3;
        candies(n, k);
    }
}

// This code is contributed by ihritik

Python3

# Python3 code for better approach
# to distribute candies

# Function to find out the number of
# candies every person received
def candies(n, k):

    # Count number of complete turns
    count = 0

    # Get the last term
    ind = 1

    # Stores the number of candies
    arr = [0 for i in range(k)]

    while n > 0:

        # Last term of last and
        # current series
        f1 = (ind - 1) * k
        f2 = ind * k

        # Sum of current and last series
        sum1 = (f1 * (f1 + 1)) // 2
        sum2 = (f2 * (f2 + 1)) // 2

        # Sum of current series only
        res = sum2 - sum1

        # If sum of current is less than N
        if (res <= n):
            count += 1
            n -= res
            ind += 1
        else:

            # Individually distribute
            i = 0

            # First term
            term = ((ind - 1) * k) + 1

            # Distribute candies till there
            while (n > 0):

                # Candies available
                if (term <= n):
                    arr[i] = term
                    i += 1
                    n -= term
                    term += 1
                else:
                    arr[i] = n
                    i += 1
                    n = 0

    # Count the total candies
    for i in range(k):
        arr[i] += ((count * (i + 1)) +
                   (k * (count * (count - 1)) // 2))

    # Print the total candies
    for i in range(k):
        print(arr[i], end = " ")

# Driver Code
n, k = 10, 3
candies(n, k)

# This code is contributed by Mohit kumar

C#

// C# code for better approach
// to distribute candies
using System;

class GFG {

    // Function to find out the number of
    // candies every person received
    static void candies(int n, int k)
    {
        // Count number of complete turns
        int count = 0;

        // Get the last term
        int ind = 1;

        // Stores the number of candies
        int[] arr = new int[k];
        for (int i = 0; i < k; i++)
            arr[i] = 0;

        while (n > 0) {

            // Last term of last and
            // current series
            int f1 = (ind - 1) * k;
            int f2 = ind * k;

            // Sum of current and last series
            int sum1 = (f1 * (f1 + 1)) / 2;
            int sum2 = (f2 * (f2 + 1)) / 2;

            // Sum of current series only
            int res = sum2 - sum1;

            // If sum of current is less than N
            if (res <= n) {
                count++;
                n -= res;
                ind++;
            }
            else // Individually distribute
            {
                int i = 0;

                // First term
                int term = ((ind - 1) * k) + 1;

                // Distribute candies till there
                while (n > 0) {

                    // Candies available
                    if (term <= n) {
                        arr[i++] = term;
                        n -= term;
                        term++;
                    }
                    else // Not available
                    {
                        arr[i++] = n;
                        n = 0;
                    }
                }
            }
        }

        // Count the total candies
        for (int i = 0; i < k; i++)
            arr[i] += (count * (i + 1))
                      + (k * (count * (count - 1)) / 2);

        // Print the total candies
        for (int i = 0; i < k; i++)
            Console.Write(arr[i] + " ");
    }

    // Driver Code
    public static void Main()
    {
        int n = 10, k = 3;
        candies(n, k);
    }
}

// This code is contributed by ihritik

PHP

<?php
// PHP code for better approach
// to distribute candies

// Function to find out the number of
// candies every person received
function candies($n, $k)
{
    // Count number of complete turns
    $count = 0;

    // Get the last term
    $ind = 1;

    // Stores the number of candies
    $arr = array_fill(0, $k, 0);

    while ($n) {

        // Last term of last and
        // current series
        $f1 = ($ind - 1) * $k;
        $f2 = $ind * $k;

        // Sum of current and last series
        $sum1 = floor(($f1 * ($f1 + 1)) / 2);
        $sum2 = floor(($f2 * ($f2 + 1)) / 2);

        // Sum of current series only
        $res = $sum2 - $sum1;

        // If sum of current is less than N
        if ($res <= $n) {
            $count++;
            $n -= $res;
            $ind++;
        }
        else // Individually distribute
        {
            $i = 0;

            // First term
            $term = (($ind - 1) * $k) + 1;

            // Distribute candies till there
            while ($n > 0) {

                // Candies available
                if ($term <= $n) {
                    $arr[$i++] = $term;
                    $n -= $term;
                    $term++;
                }
                else // Not available
                {
                    $arr[$i++] = $n;
                    $n = 0;
                }
            }
        }
    }

    // Count the total candies
    for ($i = 0; $i < $k; $i++)
        $arr[$i] += floor(($count * ($i + 1)) +
                    ($k * ($count * ($count - 1)) / 2));

    // Print the total candies
    for ($i = 0; $i < $k; $i++)
        echo $arr[$i], " ";
}

// Driver Code
$n = 10;
$k = 3;
candies($n, $k);

// This code is contributed by Ryuga
?>

Javascript

<script>

// JavaScript code for better approach
// to distribute candies

// Function to find out the number of
// candies every person received
function candies(n, k)
{
    // Count number of complete turns
    var count = 0;

    // Get the last term
    var ind = 1;

    // Stores the number of candies
    var arr = Array(k);
    for (i = 0; i < k; i++)
        arr[i] = 0;

    while (n > 0) {

        // Last term of last and
        // current series
        var f1 = (ind - 1) * k;
        var f2 = ind * k;

        // Sum of current and last series
        var sum1 = (f1 * (f1 + 1)) / 2;
        var sum2 = (f2 * (f2 + 1)) / 2;

        // Sum of current series only
        var res = sum2 - sum1;

        // If sum of current is less than N
        if (res <= n) {
            count++;
            n -= res;
            ind++;
        }
        else // Individually distribute
        {
            var i = 0;

            // First term
            var term = ((ind - 1) * k) + 1;

            // Distribute candies till there
            while (n > 0) {

                // Candies available
                if (term <= n) {
                    arr[i++] = term;
                    n -= term;
                    term++;
                }
                else // Not available
                {
                    arr[i++] = n;
                    n = 0;
                }
            }
        }
    }

    // Count the total candies
    for (i = 0; i < k; i++)
        arr[i] += (count * (i + 1))
                  + (k * (count * (count - 1)) / 2);

    // Print the total candies
    for (i = 0; i < k; i++)
        document.write(arr[i] + " ");
}

// Driver Code
var n = 10, k = 3;
candies(n, k);

// This code contributed by Rajput-Ji

</script>

5 2 3

Time complexity: O(Number of turns + K)

An efficient approach is to find the largest number (say MAXI) whose sum of natural numbers up to it is less than N, using binary search. Since the last number of the complete turns will always be a multiple of K, we get the number of complete turns from it. Subtract the summation till then from N. Distribute the remaining candies by traversing the array.

Below is the implementation of the above approach:

C++

// C++ implementation of the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find out the number of
// candies every person received
void candies(int n, int k)
{
    // Count number of complete turns
    int count = 0;

    // Get the last term
    int ind = 1;

    // Stores the number of candies
    int arr[k];
    memset(arr, 0, sizeof(arr));

    int low = 0, high = n;

    // Do a binary search to find the number
    // whose sum is less than N.
    while (low <= high) {

        // Get mid
        int mid = (low + high) >> 1;
        int sum = (mid * (mid + 1)) >> 1;

        // If sum is below N
        if (sum <= n) {

            // Find number of complete turns
            count = mid / k;

            // Right halve
            low = mid + 1;
        }
        else {

            // Left halve
            high = mid - 1;
        }
    }

    // Last term of last complete series
    int last = (count * k);

    // Subtract the sum till
    n -= (last * (last + 1)) / 2;

    int i = 0;

    // First term of incomplete series
    int term = (count * k) + 1;

    while (n) {
        if (term <= n) {
            arr[i++] = term;
            n -= term;
            term++;
        }
        else {
            arr[i] += n;
            n = 0;
        }
    }

    // Count the total candies
    for (int i = 0; i < k; i++)
        arr[i] += (count * (i + 1))
                  + (k * (count * (count - 1)) / 2);

    // Print the total candies
    for (int i = 0; i < k; i++)
        cout << arr[i] << " ";
}

// Driver Code
int main()
{
    int n = 7, k = 4;
    candies(n, k);
    return 0;
}

Java

// Java implementation of the above approach
class GFG {

    // Function to find out the number of
    // candies every person received
    static void candies(int n, int k)
    {
        // Count number of complete turns
        int count = 0;

        // Get the last term
        int ind = 1;

        // Stores the number of candies
        int[] arr = new int[k];
        for (int i = 0; i < k; i++)
            arr[i] = 0;

        int low = 0, high = n;

        // Do a binary search to find the number
        // whose sum is less than N.
        while (low <= high) {

            // Get mid
            int mid = (low + high) >> 1;
            int sum = (mid * (mid + 1)) >> 1;

            // If sum is below N
            if (sum <= n) {

                // Find number of complete turns
                count = mid / k;

                // Right halve
                low = mid + 1;
            }
            else {

                // Left halve
                high = mid - 1;
            }
        }

        // Last term of last complete series
        int last = (count * k);

        // Subtract the sum till
        n -= (last * (last + 1)) / 2;

        int j = 0;

        // First term of incomplete series
        int term = (count * k) + 1;

        while (n > 0) {
            if (term <= n) {
                arr[j++] = term;
                n -= term;
                term++;
            }
            else {
                arr[j] += n;
                n = 0;
            }
        }

        // Count the total candies
        for (int i = 0; i < k; i++)
            arr[i] += (count * (i + 1))
                      + (k * (count * (count - 1)) / 2);

        // Print the total candies
        for (int i = 0; i < k; i++)
            System.out.print(arr[i] + " ");
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 7, k = 4;
        candies(n, k);
    }
}

// This code is contributed by ihritik

Python3

# Python3 implementation of the above approach

# Function to find out the number of
# candies every person received
def candies(n, k):

    # Count number of complete turns
    count = 0

    # Get the last term
    ind = 1

    # Stores the number of candies
    arr = [0] * k

    low = 0
    high = n

    # Do a binary search to find the
    # number whose sum is less than N.
    while (low <= high):

        # Get mid
        mid = (low + high) >> 1
        sum = (mid * (mid + 1)) >> 1

        # If sum is below N
        if (sum <= n):

            # Find number of complete turns
            count = int(mid / k)

            # Right halve
            low = mid + 1
        else:

            # Left halve
            high = mid - 1

    # Last term of last complete series
    last = (count * k)

    # Subtract the sum till
    n -= int((last * (last + 1)) / 2)

    i = 0

    # First term of incomplete series
    term = (count * k) + 1

    while (n):
        if (term <= n):
            arr[i] = term
            i += 1
            n -= term
            term += 1
        else:
            arr[i] += n
            n = 0

    # Count the total candies
    for i in range(k):
        arr[i] += ((count * (i + 1)) +
                   int(k * (count * (count - 1)) / 2))

    # Print the total candies
    for i in range(k):
        print(arr[i], end = " ")

# Driver Code
n = 7
k = 4
candies(n, k)

# This code is contributed by chandan_jnu

C#

// C# implementation of the above approach
using System;

class GFG {

    // Function to find out the number of
    // candies every person received
    static void candies(int n, int k)
    {
        // Count number of complete turns
        int count = 0;

        // Get the last term
        int ind = 1;

        // Stores the number of candies
        int[] arr = new int[k];
        for (int i = 0; i < k; i++)
            arr[i] = 0;

        int low = 0, high = n;

        // Do a binary search to find the number
        // whose sum is less than N.
        while (low <= high) {

            // Get mid
            int mid = (low + high) >> 1;
            int sum = (mid * (mid + 1)) >> 1;

            // If sum is below N
            if (sum <= n) {

                // Find number of complete turns
                count = mid / k;

                // Right halve
                low = mid + 1;
            }
            else {

                // Left halve
                high = mid - 1;
            }
        }

        // Last term of last complete series
        int last = (count * k);

        // Subtract the sum till
        n -= (last * (last + 1)) / 2;

        int j = 0;

        // First term of incomplete series
        int term = (count * k) + 1;

        while (n > 0) {
            if (term <= n) {
                arr[j++] = term;
                n -= term;
                term++;
            }
            else {
                arr[j] += n;
                n = 0;
            }
        }

        // Count the total candies
        for (int i = 0; i < k; i++)
            arr[i] += (count * (i + 1))
                      + (k * (count * (count - 1)) / 2);

        // Print the total candies
        for (int i = 0; i < k; i++)
            Console.Write(arr[i] + " ");
    }

    // Driver Code
    public static void Main()
    {
        int n = 7, k = 4;
        candies(n, k);
    }
}

// This code is contributed by ihritik

PHP

<?php
// PHP implementation of the above approach

// Function to find out the number of
// candies every person received
function candies($n, $k)
{
    // Count number of complete turns
    $count = 0;

    // Get the last term
    $ind = 1;

    // Stores the number of candies
    $arr = array_fill(0, $k, 0);

    $low = 0;
    $high = $n;

    // Do a binary search to find the
    // number whose sum is less than N.
    while ($low <= $high) {

        // Get mid
        $mid = ($low + $high) >> 1;
        $sum = ($mid * ($mid + 1)) >> 1;

        // If sum is below N
        if ($sum <= $n) {

            // Find number of complete turns
            $count = (int)($mid / $k);

            // Right halve
            $low = $mid + 1;
        }
        else {

            // Left halve
            $high = $mid - 1;
        }
    }

    // Last term of last complete series
    $last = ($count * $k);

    // Subtract the sum till
    $n -= (int)(($last * ($last + 1)) / 2);

    $i = 0;

    // First term of incomplete series
    $term = ($count * $k) + 1;

    while ($n) {
        if ($term <= $n) {
            $arr[$i++] = $term;
            $n -= $term;
            $term++;
        }
        else {
            $arr[$i] += $n;
            $n = 0;
        }
    }

    // Count the total candies
    for ($i = 0; $i < $k; $i++)
        $arr[$i] += ($count * ($i + 1)) +
                    (int)($k * ($count * ($count - 1)) / 2);

    // Print the total candies
    for ($i = 0; $i < $k; $i++)
        echo $arr[$i] . " ";
}

// Driver Code
$n = 7;
$k = 4;
candies($n, $k);

// This code is contributed
// by chandan_jnu
?>

Javascript

<script>

// JavaScript implementation of the above approach

// Function to find out the number of
// candies every person received
function candies(n, k)
{
    // Count number of complete turns
    var count = 0;

    // Get the last term
    var ind = 1;

    // Stores the number of candies
    var arr = Array(k).fill(0);

    var low = 0, high = n;

    // Do a binary search to find the
    // number whose sum is less than N.
    while (low <= high) {

        // Get mid
        var mid = parseInt((low + high) / 2);
        var sum = parseInt((mid * (mid + 1)) / 2);

        // If sum is below N
        if (sum <= n) {

            // Find number of complete turns
            count = parseInt(mid / k);

            // Right halve
            low = mid + 1;
        }
        else {

            // Left halve
            high = mid - 1;
        }
    }

    // Last term of last complete series
    var last = (count * k);

    // Subtract the sum till
    n -= (last * (last + 1)) / 2;

    var j = 0;

    // First term of incomplete series
    var term = (count * k) + 1;

    while (n > 0) {
        if (term <= n) {
            arr[j++] = term;
            n -= term;
            term++;
        }
        else {
            arr[j] += n;
            n = 0;
        }
    }

    // Count the total candies
    for (i = 0; i < k; i++)
        arr[i] += (count * (i + 1))
                  + (k * (count * (count - 1)) / 2);

    // Print the total candies
    for (i = 0; i < k; i++)
        document.write(arr[i] + " ");
}

// Driver Code
var n = 7, k = 4;
candies(n, k);

// This code contributed by aashish1995

</script>

1 2 3 1

Time Complexity: O(log N + K)
Linear programming with Python and Julia | by Himalaya Bir Shrestha | Towards Data Science
I was intrigued by the concept of optimization when I attended the course Operations Research (OR) during my undergraduate studies in Mechanical Engineering half a decade ago. The main reason this course was fascinating to me was that it dealt with solving real-world problems such as optimizing the workflow in a factory, supply chain management, scheduling flights at an airport, the travelling salesman problem, etc.

Operations Research deals with how to make decisions efficiently through the use of different mathematical techniques or algorithms. In a real-world setting, this could mean maximizing (profit, yield) or minimizing (losses, risks) a given expression while satisfying constraints such as costs, time, and resource allocation. There are several applications of OR in domains such as energy system optimization, supply chain management, logistics and inventory management, routing and pathfinding problems, etc. [1].

When I was an undergrad student, solving optimization problems in the OR course by hand with a calculator was a daunting task. A human error in a single step meant all the following steps were wrong, and one had to redo the entire procedure from the start. Thanks to advancements in programming techniques, there are now open-source tools such as Google OR-Tools, and different packages in Python and Julia, which facilitate solving optimization problems in a matter of seconds. One just has to define the problem in a framework understandable to the given package. Besides Google OR-Tools, some open-source packages available for solving optimization problems in Python are scipy.optimize, PuLP, and Pyomo. In Julia, there is a similar package embedded within the language called JuMP.

In this post, I am going to solve a simple linear optimization problem first using the Pyomo package in Python, then replicate it in Julia using JuMP, and share my experience. The code used in this post is available in this GitHub repository.
Let’s get started:

Problem Statement

For this post, I consider the problem of determining an optimal product mix in a factory, based on the processing time of the products in different machines, the availability of the machines, and the unit profit of each product.

A company manufactures two products: X and Y. To manufacture each product, it has to go through three machines: A, B, and C. Manufacturing X requires 3 hours in machine A, 9 hours in machine B, and 2 hours in machine C. Similarly, manufacturing product Y requires 2, 4, and 10 hours in machines A, B, and C respectively. The availability of machines A, B, and C during a manufacturing period is 66, 180, and 200 hours respectively. The profit per product X is USD 90 and that per product Y is USD 75. How many units of X and Y should be produced during a production period to maximize profit?

Assuming x and y as the units of X and Y to be produced respectively, x and y are our decision variables. The problem can be represented in algebraic form as follows:

Profit = 90x + 75y

Objective: maximize 90x + 75y

subject to:
3x + 2y ≤ 66
9x + 4y ≤ 180
2x + 10y ≤ 200
x, y ≥ 0

Solver

Solvers embed powerful algorithms to solve optimization problems and help improve decision-making around planning, allocation, and scheduling of resources under constraints to meet the given objectives. Based on the problem class, one needs to select a suitable solver for the optimization problem. The following table shows examples of some solvers which have both Python and Julia wrappers available. It is important to note that not all solvers are open-access. For example, IPOPT and GLPK are open-access solvers, while CPLEX and GUROBI require commercial licenses.

The problem used in this post is an example of linear programming since both the objective and constraints are linear. In this case, we can use either of the open-access solvers GLPK or IPOPT to solve the problem. If the problem was non-linear (e.g.
with quadratic constraints), then only the IPOPT solver would have been usable, and not GLPK, given the problem class.

Solving in Python with the Pyomo package

Having some basic hands-on experience with the Pyomo package, I find it pretty intuitive to work with in Python. Pyomo allows two strategies for declaring models. When a mathematical model is defined using symbols that rely on unspecified parameter values (e.g. ax + by = c), it is referred to as an Abstract Model. Passing data through an Abstract Model creates an instance of the model (also referred to as a Concrete Model; e.g. 2x + 3y = 5), which can be passed to the solver, as depicted in the flowchart below.

The declaration of modelling components such as variables is quite straightforward in Pyomo. The objective function and constraints are declared in the form of expressions. While declaring an objective, it is important to provide the sense: whether to minimize or maximize the given expression. After declaring the necessary components, the model is passed to a solver (here GLPK) for solving the linear optimization problem. When the solver status is “ok” and the termination condition is “optimal”, it means that the optimization problem has been solved successfully. For the given problem, the model determines that 10 items of X and 18 items of Y are optimal to maximize the profit of the company, which is $2250 in this case.

This problem can also be solved graphically by plotting the model variables and constraints on a graph. I have plotted them using matplotlib, with the code in the following gist:

As shown in the plot above, the red marker at (10, 18) represents the optimal product strategy. I define the space where the red, green, and blue shades all overlap as the “Feasibility Space”, wherein the model meets all the constraints.

Solving in Julia with JuMP

Launched in 2012, Julia is an open-source language whose community has been growing in recent years.
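As a quick sanity check on the solver result just mentioned (x = 10, y = 18, profit $2250), the same optimum can be recovered without any LP package by brute-forcing the feasible integer grid. This is an illustrative cross-check added here, not part of the original Pyomo workflow, and it works only because the LP optimum happens to be integral for this problem:

```python
def best_product_mix():
    # Enumerate integer candidate plans within the machine-hour bounds
    # and keep the feasible plan with the highest profit.
    best = (0, 0, 0)  # (profit, x, y)
    for x in range(0, 23):       # 3x <= 66  =>  x <= 22
        for y in range(0, 34):   # 2y <= 66  =>  y <= 33
            feasible = (3 * x + 2 * y <= 66
                        and 9 * x + 4 * y <= 180
                        and 2 * x + 10 * y <= 200)
            if feasible:
                profit = 90 * x + 75 * y
                best = max(best, (profit, x, y))
    return best

print(best_product_mix())   # (2250, 10, 18)
```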
Thanks to the strong performance and flexibility of the language, Julia has been deemed apt for computationally intensive tasks [3]. At the time of writing, there are over 4000 packages in the General registry of the language. As a newbie with the Julia language, I wanted to give it a try by replicating the same linear problem. In this section, I am going to describe how I did it.

First I opened Julia’s command-line REPL (read-eval-print-loop) and added all the required packages using Pkg, which is the built-in package manager of Julia.

using Pkg                # In-built package manager of Julia
Pkg.add("LinearAlgebra") # Analogous to the numpy package in Python
Pkg.add("Plots")         # Add the Plots.jl framework for plotting
Pkg.add("JuMP")          # Add the mathematical optimization package
Pkg.add("GLPK")          # Add the solver
Pkg.add("IJulia")        # Backend for the Jupyter interactive environment
Pkg.add("PyPlot")        # Julia interface to matplotlib.pyplot in Python

Installing these packages was pretty convenient, and each package was installed on my system within a matter of seconds. After installing the packages, they are imported using the syntax using <package> in Julia, which is similar to import <package> in Python. After the packages are called, the model is declared and the optimizer is set from GLPK.

As shown in the gist above, declaring variables, constraints, and objectives is much easier in Julia because one can declare them directly using the real algebraic expression, which makes the script shorter and more comprehensible. As shown in the screenshot above, I get the same results with JuMP as with Pyomo.

Next, I plot the linear problem graphically in Julia using PyPlot. The PyPlot module in Julia provides a Julia interface to the matplotlib plotting library for Python, and specifically to the matplotlib.pyplot module. Therefore, it is a prerequisite to have the matplotlib library installed on one’s system to use PyPlot [4].
I noticed that the parameters to be passed using PyPlot in Julia are basically the same as for matplotlib.pyplot in Python, and that only the nomenclature is different. Conclusion In this post, I demonstrated an example of solving a simple linear optimization problem first using Pyomo in Python and then using JuMP in Julia. These open-source packages as well as the solvers are a real boon for solving complicated optimization problems in the domain of Operations Research, which otherwise would cost a lot of time and resources. Moreover, they also facilitate improving speed, precision, and flexibility in solving optimization problems (e.g. including sensitivity analysis). Although I am a newbie with the Julia language, my experience in Python made it relatively easy to understand the user-friendly syntax and formulate the given problem in Julia. Julia has been deemed to combine the interactivity and syntax of scripting languages such as Python, and the speed of compiled languages such as C [3]. It might be slow the first time a function is run; however, subsequent runs are supposedly faster. References: [1] Vasegaard, A. (2021). Why Operations Research is awesome — An Introduction? [2] Hart et al., (2021). Pyomo - Optimization Modeling in Python. [3] Perkel, J.M. (2021). Julia: come for the syntax, stay for the speed. [4] Johnson, S.G. (2021). The PyPlot module for Julia.
How to Present the Relationships Amongst Multiple Variables in Python | by Rashida Nasrin Sucky | Towards Data Science
While dealing with a big dataset, it is important to understand the relationships between the features. That is a big part of data analysis. The relationships can be between two variables or amongst several variables. In this article, I will discuss how to present the relationships between multiple variables with some simple techniques. I am going to use Python’s Numpy, Pandas, Matplotlib, and Seaborn libraries. First, import the necessary packages and the dataset.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
df = pd.read_csv("nhanes_2015_2016.csv")

This dataset is very large, at least too large to show a screenshot here. Here are the columns in this dataset.

df.columns
#Output:
Index(['SEQN', 'ALQ101', 'ALQ110', 'ALQ130', 'SMQ020', 'RIAGENDR', 'RIDAGEYR', 'RIDRETH1', 'DMDCITZN', 'DMDEDUC2', 'DMDMARTL', 'DMDHHSIZ', 'WTINT2YR', 'SDMVPSU', 'SDMVSTRA', 'INDFMPIR', 'BPXSY1', 'BPXDI1', 'BPXSY2', 'BPXDI2', 'BMXWT', 'BMXHT', 'BMXBMI', 'BMXLEG', 'BMXARML', 'BMXARMC', 'BMXWAIST', 'HIQ210'], dtype='object')

Now, let’s make the dataset smaller with a few columns, so it’s easier to handle and show in this article.

df = df[['SMQ020', 'RIAGENDR', 'RIDAGEYR', 'DMDCITZN', 'DMDEDUC2', 'DMDMARTL', 'DMDHHSIZ', 'SDMVPSU', 'BPXSY1', 'BPXDI1', 'BPXSY2', 'BPXDI2', 'RIDRETH1']]
df.head()

Column names may look strange to you. I will keep explaining as we keep using them. In this dataset, we have two systolic blood pressure columns (‘BPXSY1’, ‘BPXSY2’) and two diastolic blood pressure columns (‘BPXDI1’, ‘BPXDI2’). It is worth looking at whether there is any relationship between them. 1. Observe the relationship between the first and second systolic blood pressure.
To find out the relation between two variables, scatter plots have been used for a long time. They are the most popular, basic, and easily understandable way of looking at a relationship between two variables.

sns.regplot(x = "BPXSY1", y="BPXSY2", data=df, fit_reg = False, scatter_kws={"alpha": 0.2})

The relationship between the two systolic blood pressures is positively linear. There is a lot of overlapping observed in the plot. 2. To understand the systolic and diastolic blood pressure data and their relationships more, make a joint plot. A joint plot shows the density of the data and the distribution of both the variables at the same time.

sns.jointplot(x = "BPXSY1", y="BPXSY2", data=df, kind = 'kde')

This plot shows very clearly that the densest area is from 115 to 135. Both the first and second systolic blood pressure distributions are right-skewed. Also, both of them have some outliers. 3. Find out if the correlations between the first and second systolic blood pressures are different in the male and female populations.

df["RIAGENDRx"] = df.RIAGENDR.replace({1: "Male", 2: "Female"})
sns.FacetGrid(df, col = "RIAGENDRx").map(plt.scatter, "BPXSY1", "BPXSY2", alpha =0.6).add_legend()

This picture shows that both the correlations are positively linear. Let’s find out the correlation with more clarity.

print(df.loc[df.RIAGENDRx=="Female",["BPXSY1", "BPXSY2"]].dropna().corr())
print(df.loc[df.RIAGENDRx=="Male",["BPXSY1", "BPXSY2"]].dropna().corr())

From the two correlation charts above, the correlation between the two systolic blood pressures is 1% higher in the female population than in the male. If these things are new to you, I encourage you to try understanding the correlation between the two diastolic blood pressures, or between the systolic and diastolic blood pressures. 4. Human behavior can change with so many different factors such as gender, education level, ethnicity, financial situation, and so on. In this dataset, we have ethnicity (“RIDRETH1”) information as well.
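The .corr() comparison in step 3 computes the Pearson correlation coefficient for each group. As a minimal sketch of what that call does under the hood, here is Pearson's r implemented directly and applied to toy blood-pressure readings (the numbers are made up for illustration, not taken from the NHANES data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, the same quantity pandas' .corr() reports."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Toy first/second systolic readings for one group (made-up values):
bp1 = [110, 120, 130, 140]
bp2 = [112, 121, 133, 139]
print(pearson(bp1, bp2))  # close to 1: the two readings move together
```

Running the same function separately on the male and female subsets, as the article does with .loc, yields the two coefficients being compared.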
Check the effect of both ethnicity and gender on the relationship between both the systolic blood pressures.

sns.FacetGrid(df, col="RIDRETH1", row="RIAGENDRx").map(plt.scatter, "BPXSY1", "BPXSY2", alpha = 0.5).add_legend()

With different ethnic origins and gender, the correlations seem to change a little bit but generally stay positively linear as before. 5. Now, focus on some other variables in the dataset. Find the relationship between education and marital status. Both the education column (‘DMDEDUC2’) and the marital status column (‘DMDMARTL’) are categorical. First, replace the numerical values with string values that make sense. We also need to get rid of values that do not add good information to the chart. For example, the education column has some ‘Don’t know’ values and the marital status column has some ‘Refused’ values.

df["DMDEDUC2x"] = df.DMDEDUC2.replace({1: "<9", 2: "9-11", 3: "HS/GED", 4: "Some college/AA", 5: "College", 7: "Refused", 9: "Don't know"})
df["DMDMARTLx"] = df.DMDMARTL.replace({1: "Married", 2: "Widowed", 3: "Divorced", 4: "Separated", 5: "Never married", 6: "Living w/partner", 77: "Refused"})
db = df.loc[(df.DMDEDUC2x != "Don't know") & (df.DMDMARTLx != "Refused"), :]

Finally, we got this DataFrame that is clean and ready for the chart.

x = pd.crosstab(db.DMDEDUC2x, db.DMDMARTLx)
x

Here is the result. The numbers look very simple to understand. But a chart of population proportions will be a more appropriate presentation. Here I compute the population proportion based on marital status.

x.apply(lambda z: z/z.sum(), axis=1)

6. Find the population proportion of marital status segregated by ethnicity (‘RIDRETH1’) and education level. First, replace the numeric values with meaningful strings in the ethnicity column. I found these string values on the Center for Disease Control website.

db.groupby(["RIDRETH1x", "DMDEDUC2x", "DMDMARTLx"]).size().unstack().fillna(0).apply(lambda x: x/x.sum(), axis=1)

7.
Observe the difference in education level with age. Here, education level is a categorical variable and age is a continuous variable. A good way of observing the difference in education levels with age is to make a boxplot.

plt.figure(figsize=(12, 4))
a = sns.boxplot(db.DMDEDUC2x, db.RIDAGEYR)

This plot shows that the rate of college education is higher among younger people. A violin plot may provide a better picture.

plt.figure(figsize=(12, 4))
a = sns.violinplot(db.DMDEDUC2x, db.RIDAGEYR)

So, the violin plot shows a distribution. Most college-educated people are around age 30. At the same time, most people with less than a 9th-grade education are about 68 to 88 years old. 8. Show the marital status distribution segregated by gender.

fig, ax = plt.subplots(figsize = (12,4))
ax = sns.violinplot(x= "DMDMARTLx", y="RIDAGEYR", hue="RIAGENDRx", data= db, scale="count", split=True, ax=ax)

Here, the blue color shows the male population distribution and the orange color represents the female population distribution. Only the ‘never married’ and ‘living with partner’ categories have similar distributions for the male and female populations. Every other category has a notable difference between the male and female populations. I hope it was helpful. Please feel free to follow me on Twitter and like my Facebook page. Here is the dataset I used in this article:
Struts2 - Interview Questions
Dear readers, these Struts2 Interview Questions have been designed especially to get you acquainted with the nature of questions you may encounter during your interview for the subject of Struts2 Programming. As per my experience, good interviewers hardly plan to ask any particular question during your interview; normally questions start with some basic concept of the subject and later they continue based on further discussion and what you answer − Struts2 is a popular and mature web application framework based on the MVC design pattern. Struts2 is not just the next version of Struts 1, but it is a complete rewrite of the Struts architecture. Here are some of the great features that may force you to consider Struts2 − POJO forms and POJO actions − Struts2 has done away with the Action Forms that were an integral part of the Struts framework. With Struts2, you can use any POJO to receive the form input. Similarly, you can now see any POJO as an Action class. Tag support − Struts2 has improved the form tags and the new tags allow the developers to write less code. AJAX support − Struts2 has recognised the takeover by Web2.0 technologies, and has integrated AJAX support into the product by creating AJAX tags that function very similarly to the standard Struts2 tags. Easy Integration − Integration with other frameworks like Spring, Tiles and SiteMesh is now easier with a variety of integrations available with Struts2.
Template Support − Support for generating views using templates. Plugin Support − The core Struts2 behaviour can be enhanced and augmented by the use of plugins. A number of plugins are available for Struts2. The Model-View-Controller pattern in Struts2 is realized with the following five core components − Actions, Interceptors, Value Stack / OGNL, Results / Result types, and View technologies. Following is the life cycle of a request in a Struts2 application − User sends a request to the server, requesting some resource (i.e. pages). The FilterDispatcher looks at the request and then determines the appropriate Action. Configured interceptor functionalities apply, such as validation, file upload etc. The selected action is executed to perform the requested operation. Again, configured interceptors are applied to do any post-processing if required. Finally, the result is prepared by the view and returned to the user. The struts.xml file contains the configuration information that you will be modifying as actions are developed.
This file can be used to override default settings for an application, for example struts.devMode = false and other settings which are defined in a property file. This file can be created under the folder WEB-INF/classes. The constant tag, along with name and value attributes, will be used to override any of the properties defined in default.properties, as we just did with the struts.devMode property. Setting the struts.devMode property allows us to see more debug messages in the log file. We define action tags corresponding to every URL we want to access, and we define a class with an execute() method which will be invoked whenever we access the corresponding URL. Results determine what gets returned to the browser after an action is executed. The string returned from the action should be the name of a result. Results are configured per-action as above, or as a "global" result, available to every action in a package. Results have optional name and type attributes. The default name value is "success". The struts-config.xml configuration file is a link between the View and Model components in the Web Client. This is where you map your ActionForm subclass to a name. You use this name as an alias for your ActionForm throughout the rest of the struts-config.xml file, and even on your JSP pages. This section maps a page on your webapp to a name. You can use this name to refer to the actual page. This avoids hardcoding URLs on your web pages. This is where you declare form handlers, also known as action mappings. This section tells Struts where to find your properties files, which contain prompts and error messages. This configuration file provides a mechanism to change the default behavior of the framework. Actually, all of the properties contained within the struts.properties configuration file can also be configured in the web.xml using the init-param, as well as using the constant tag in the struts.xml configuration file.
But if you like to keep things separate and more Struts-specific, then you can create this file under the folder WEB-INF/classes. The values configured in this file will override the default values configured in default.properties, which is contained in the struts2-core-x.y.z.jar distribution. Interceptors are conceptually the same as servlet filters or the JDK's Proxy class. Interceptors allow for crosscutting functionality to be implemented separately from the action as well as the framework. You can achieve the following using interceptors − Providing preprocessing logic before the action is called. Providing postprocessing logic after the action is called. Catching exceptions so that alternate processing can be performed. Creating a custom interceptor is easy; the interface that needs to be extended is the Interceptor interface. The actual action will be executed by the interceptor through the invocation.invoke() call. So you can do some pre-processing and some post-processing based on your requirement. The framework itself starts the process by making the first call to the ActionInvocation object's invoke(). Each time invoke() is called, ActionInvocation consults its state and executes whichever interceptor comes next. When all of the configured interceptors have been invoked, the invoke() method will cause the action itself to be executed. The Action class manages the application's state, and the Result Type manages the view. The default result type is dispatcher, which is used to dispatch to JSP pages. The dispatcher result type is the default type, and is used if no other result type is specified. It's used to forward to a servlet, JSP, HTML page, and so on, on the server. It uses the RequestDispatcher.forward() method.
The redirect result type calls the standard response.sendRedirect() method, causing the browser to create a new request to the given location. We can provide the location either in the body of the <result...> element or as a <param name = "location"> element. The value stack is a set of several objects which keeps the following objects in the provided order − Temporary Objects − There are various temporary objects which are created during execution of a page, for example the current iteration value for a collection being looped over in a JSP tag. The Model Object − If you are using model objects in your struts application, the current model object is placed before the action on the value stack. The Action Object − This will be the current action object which is being executed. Named Objects − These objects include #application, #session, #request, #attr and #parameters and refer to the corresponding servlet scopes. The Object-Graph Navigation Language (OGNL) is a powerful expression language that is used to reference and manipulate data on the ValueStack. OGNL also helps in data transfer and type conversion. The ActionContext map consists of the following − application − application scoped variables. session − session scoped variables. root / value stack − all your action variables are stored here.
request − request scoped variables. parameters − request parameters. attributes − the attributes stored in page, request, session and application scope. File uploading in Struts is possible through a pre-defined interceptor called the FileUpload interceptor, which is available through the org.apache.struts2.interceptor.FileUploadInterceptor class and included as part of the defaultStack. Following are the Struts2 configuration properties that control the file uploading process − struts.multipart.maxSize − The maximum size (in bytes) of a file to be accepted as a file upload. Default is 250M. struts.multipart.parser − The library used to upload the multipart form. By default it is jakarta. struts.multipart.saveDir − The location to store the temporary file. By default it is javax.servlet.context.tempdir. The fileUpload interceptor uses several default error message keys − struts.messages.error.uploading − A general error that occurs when the file could not be uploaded. struts.messages.error.file.too.large − Occurs when the uploaded file is too large as specified by maximumSize. struts.messages.error.content.type.not.allowed − Occurs when the uploaded file does not match the expected content types specified.
You can override the text of these messages in the WebContent/WEB-INF/classes/messages.properties resource file. At Struts' core, we have the validation framework that assists the application in running the rules to perform validation before the action method is executed. The action class should extend the ActionSupport class in order to get the validate method executed. When the user presses the submit button, Struts 2 will automatically execute the validate method, and if any of the if statements listed inside the method are true, Struts 2 will call its addFieldError method. If any errors have been added, then Struts 2 will not proceed to call the execute method. Rather, the Struts 2 framework will return input as the result of calling the action. So when validation fails and Struts 2 returns input, the Struts 2 framework will redisplay the view file. Since we used Struts 2 form tags, Struts 2 will automatically add the error messages just above the form field. These error messages are the ones we specified in the addFieldError method call. The addFieldError method takes two arguments. The first is the form field name to which the error applies and the second is the error message to display above that form field. The second method of doing validation is by placing an xml file next to the action class. Struts2 XML-based validation provides more options of validation like email validation, integer range validation, form field validation, expression validation, regex validation, required validation, requiredstring validation, stringlength validation, etc. The xml file needs to be named '[action-class]'-validation.xml.
Following is the list of various types of field level and non-field level validation available in Struts2 − date validator, double validator, email validator, expression validator, int validator, regex validator, required validator, requiredstring validator, stringlength validator, and url validator. Internationalization (i18n) is the process of planning and implementing products and services so that they can easily be adapted to specific local languages and cultures, a process called localization. The internationalization process is sometimes called translation or localization enablement. Struts2 provides localization, i.e. internationalization (i18n), support through resource bundles, interceptors and tag libraries in the following places − The UI Tags. Messages and Errors. Within action classes. The simplest naming format for a resource file is − bundlename_language_country.properties Here bundlename could be ActionClass, Interface, SuperClass, Model, Package, or Global resource properties. The next part, language_country, represents the country locale; for example, the Spanish (Spain) locale is represented by es_ES and the English (United States) locale is represented by en_US, etc. You can skip the country part, which is optional.
When you reference a message element by its key, the Struts framework searches for a corresponding message bundle in the following order − ActionClass.properties, Interface.properties, SuperClass.properties, model.properties, package.properties, struts.properties, global.properties. The StrutsTypeConverter class tells Struts how to convert Environment to a String and vice versa by overriding the two methods convertFromString() and convertToString(). Struts 2 comes with three built-in themes − simple theme − A minimal theme with no "bells and whistles". For example, the textfield tag renders the HTML <input/> tag without a label, validation, error reporting, or any other formatting or functionality. xhtml theme − This is the default theme used by Struts 2; it provides all the basics that the simple theme provides and adds several features like a standard two-column table layout for the HTML, labels for each of the HTML elements, validation and error reporting etc. css_xhtml theme − This theme provides all the basics that the simple theme provides and adds several features like a standard two-column CSS-based layout, using <div> for the HTML Struts Tags, and labels for each of the HTML Struts Tags, placed according to the CSS stylesheet.
Struts makes exception handling easy by the use of the "exception" interceptor. The "exception" interceptor is included as part of the default stack, so you don't have to do anything extra to configure it. It is available out-of-the-box, ready for you to use. A @Results annotation is a collection of results. Under the @Results annotation, we can have multiple @Result annotations.

@Results({
   @Result(name = "success", value = "/success.jsp"),
   @Result(name = "error", value = "/error.jsp")
})
public class Employee extends ActionSupport{
   ...
}

The @Result annotations have names that correspond to the outcome of the execute method. They also contain a location as to which view should be served corresponding to the return value from execute().

@Result(name = "success", value = "/success.jsp")
public class Employee extends ActionSupport{
   ...
}

The @Action annotation is used to decorate the execute() method. It also takes in a value, which is the URL on which the action is invoked.

public class Employee extends ActionSupport{
   private String name;
   private int age;

   @Action(value = "/empinfo")
   public String execute() {
      return SUCCESS;
   }
}

The @After annotation marks an action method that needs to be called after the main action method and the result have been executed. The return value is ignored.

public class Employee extends ActionSupport{
   @After
   public void isValid() throws ValidationException {
      // validate model object, throw exception if failed
   }

   public String execute() {
      // perform secure action
      return SUCCESS;
   }
}

The @Before annotation marks an action method that needs to be called before the main action method and the result are executed. The return value is ignored.
public class Employee extends ActionSupport{
   @Before
   public void isAuthorized() throws AuthenticationException {
      // authorize request, throw exception if failed
   }

   public String execute() {
      // perform secure action
      return SUCCESS;
   }
}

The @BeforeResult annotation marks an action method that needs to be executed before the result. The return value is ignored.

public class Employee extends ActionSupport{
   @BeforeResult
   public void isValid() throws ValidationException {
      // validate model object, throw exception if failed
   }

   public String execute() {
      // perform action
      return SUCCESS;
   }
}

This validation annotation checks if there are any conversion errors for a field and applies them if they exist.

public class Employee extends ActionSupport{
   @ConversionErrorFieldValidator(message = "Default message", key = "i18n.key", shortCircuit = true)
   public String getName() {
      return name;
   }
}

This validation annotation checks that a date field has a value within a specified range.

public class Employee extends ActionSupport{
   @DateRangeFieldValidator(message = "Default message", key = "i18n.key", shortCircuit = true, min = "2005/01/01", max = "2005/12/31")
   public String getDOB() {
      return dob;
   }
}

This validation annotation checks that a double field has a value within a specified range. If neither min nor max is set, nothing will be done.

public class Employee extends ActionSupport{
   @DoubleRangeFieldValidator(message = "Default message", key = "i18n.key", shortCircuit = true, minInclusive = "0.123", maxInclusive = "99.987")
   public String getIncome() {
      return income;
   }
}

This validation annotation checks that a field is a valid e-mail address if it contains a non-empty String.

public class Employee extends ActionSupport{
   @EmailValidator(message = "Default message", key = "i18n.key", shortCircuit = true)
   public String getEmail() {
      return email;
   }
}

This non-field level validator validates a supplied regular expression.
@ExpressionValidator(message = "Default message", key = "i18n.key", shortCircuit = true, expression = "an OGNL expression")

This validation annotation checks that a numeric field has a value within a specified range. If neither min nor max is set, nothing will be done.

public class Employee extends ActionSupport{
   @IntRangeFieldValidator(message = "Default message", key = "i18n.key", shortCircuit = true, min = "0", max = "42")
   public String getAge() {
      return age;
   }
}

This annotation validates a string field using a regular expression.

@RegexFieldValidator(key = "regex.field", expression = "yourregexp")

This validation annotation checks that a field is non-null. The annotation must be applied at method level.

public class Employee extends ActionSupport{
   @RequiredFieldValidator(message = "Default message", key = "i18n.key", shortCircuit = true)
   public String getAge() {
      return age;
   }
}

This validation annotation checks that a String field is not empty (i.e. non-null with a length > 0).

public class Employee extends ActionSupport{
   @RequiredStringValidator(message = "Default message", key = "i18n.key", shortCircuit = true, trim = true)
   public String getName() {
      return name;
   }
}

This validator checks that a String field is of the right length. It assumes that the field is a String. If neither minLength nor maxLength is set, nothing will be done.

public class Employee extends ActionSupport{
   @StringLengthFieldValidator(message = "Default message", key = "i18n.key", shortCircuit = true, trim = true, minLength = "5", maxLength = "12")
   public String getName() {
      return name;
   }
}

This validator checks that a field is a valid URL.

public class Employee extends ActionSupport{
   @UrlValidator(message = "Default message", key = "i18n.key", shortCircuit = true)
   public String getURL() {
      return url;
   }
}

If you want to use several annotations of the same type, these annotations must be nested within the @Validations() annotation.
public class Employee extends ActionSupport{ @Validations( requiredFields = {@RequiredFieldValidator(type = ValidatorType.SIMPLE, fieldName = "customfield", message = "You must enter a value for field.")}, requiredStrings = {@RequiredStringValidator(type = ValidatorType.SIMPLE, fieldName = "stringisrequired", message = "You must enter a value for string.")} ) public String getName() { return name; } } This annotation can be used for custom validators. Use the ValidationParameter annotation to supply additional params. @CustomValidator(type ="customValidatorName", fieldName = "myField") This is a marker annotation for type conversions at Type level. The Conversion annotation must be applied at Type level. @Conversion() public class ConversionAction implements Action { } This annotation sets the CreateIfNull for type conversion. The CreateIfNull annotation must be applied at field or method level. @CreateIfNull( value = true ) private List<User> users; This annotation sets the Element for type conversion. The Element annotation must be applied at field or method level. @Element( value = com.acme.User ) private List<User> userList; This annotation sets the Key for type conversion. The Key annotation must be applied at field or method level. @Key( value = java.lang.Long.class ) private Map<Long, User> userMap; This annotation sets the KeyProperty for type conversion. The KeyProperty annotation must be applied at field or method level. @KeyProperty( value = "userName" ) protected List<User> users = null; This annotation annotation is used for class and application wide conversion rules. The TypeConversion annotation can be applied at property and method level. @TypeConversion(rule = ConversionRule.COLLECTION, converter = "java.util.String") public void setUsers( List users ) { this.users = users; } Further, you can go through your past assignments you have done with the subject and make sure you are able to speak confidently on them. 
If you are a fresher, the interviewer does not expect you to answer very complex questions; rather, you have to make your basic concepts very strong. Second, it really doesn't matter much if you could not answer a few questions, but it matters that whatever you answered, you answered with confidence. So just feel confident during your interview. We at tutorialspoint wish you the best of luck with a good interviewer and all the very best for your future endeavors. Cheers :-) Print Add Notes Bookmark this page
[ { "code": null, "e": 2702, "s": 2246, "text": "Dear readers, these Struts2 Interview Questions have been designed especially to get you acquainted with the nature of questions you may encounter during your interview for the subject of Struts2 Programming. As per my experience, good interviewers hard...
Check if a given string is made up of two alternating characters - GeeksforGeeks
06 May, 2021 Given a string str, the task is to check whether the given string is made up of only two alternating characters.Examples: Input: str = “ABABABAB” Output: YesInput: str = “XYZ” Output: No Approach: In order for the string to be made up of only two alternating characters, it must satisfy the following conditions: All the characters at odd indices must be same.All the characters at even indices must be same.str[0] != str[1] (This is because string of type “AAAAA” where a single character is repeated a number of time will also satisfy the above two conditions) All the characters at odd indices must be same. All the characters at even indices must be same. str[0] != str[1] (This is because string of type “AAAAA” where a single character is repeated a number of time will also satisfy the above two conditions) Below is the implementation of the above approach: C++ Java Python 3 C# Javascript // C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function that returns true if the string// is made up of two alternating charactersbool isTwoAlter(string s){ // Check if ith character matches // with the character at index (i + 2) for (int i = 0; i < s.length() - 2; i++) { if (s[i] != s[i + 2]) { return false; } } // If string consists of a single // character repeating itself if (s[0] == s[1]) return false; return true;} // Driver codeint main(){ string str = "ABAB"; if (isTwoAlter(str)) cout << "Yes"; else cout << "No"; return 0;} // Java implementation of the approachimport java.io.*; class GFG{ // Function that returns true if the string// is made up of two alternating charactersstatic boolean isTwoAlter(String s){ // Check if ith character matches // with the character at index (i + 2) for (int i = 0; i < s.length() - 2; i++) { if (s.charAt(i) != s.charAt(i + 2)) { return false; } } // If string consists of a single // character repeating itself if (s.charAt(0) == s.charAt(1)) return false; return true;} // Driver codepublic 
static void main (String[] args){ String str = "ABAB"; if (isTwoAlter(str)) System.out.print( "Yes"); else System.out.print("No");}} // This code is contributed by anuj_67.. # Function that returns true if the string# is made up of two alternating charactersdef isTwoAlter( s): # Check if ith character matches # with the character at index (i + 2) for i in range ( len( s) - 2) : if (s[i] != s[i + 2]) : return False #If string consists of a single #character repeating itself if (s[0] == s[1]): return False return True # Driver codeif __name__ == "__main__": str = "ABAB" if (isTwoAlter(str)): print ( "Yes") else: print ("No") # This code is contributed by ChitraNayal // C# implementation of the approachusing System; class GFG{ // Function that returns true if the string // is made up of two alternating characters static bool isTwoAlter(string s) { // Check if ith character matches // with the character at index (i + 2) for (int i = 0; i < s.Length - 2; i++) { if (s[i] != s[i +2]) { return false; } } // If string consists of a single // character repeating itself if (s[0] == s[1]) return false; return true; } // Driver code public static void Main() { string str = "ABAB"; if (isTwoAlter(str)) Console.WriteLine( "Yes"); else Console.WriteLine("No"); }} // This code is contributed by AnkitRai01 <script> // Javascript implementation of the approach // Function that returns true if the string // is made up of two alternating characters function isTwoAlter(s) { // Check if ith character matches // with the character at index (i + 2) for (let i = 0; i < s.length - 2; i++) { if (s[i] != s[i+2]) { return false; } } // If string consists of a single // character repeating itself if (s[0] == s[1]) return false; return true; } // Driver code let str = "ABAB"; if (isTwoAlter(str)) document.write( "Yes"); else document.write("No"); // This code is contributed by rag2127 </script> Yes Time Complexity : O(N) Auxiliary Space : O(1) vt_m ankthon ukasp muskan_garg rag2127 Constructive 
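As a cross-check of the three conditions listed above, here is a compact alternative formulation in Python (an illustrative sketch, not code from the article) that compares characters via slicing rather than an explicit loop:

```python
def is_two_alternating(s: str) -> bool:
    """True if s consists of exactly two distinct characters, strictly alternating."""
    if len(s) < 2:
        return False  # need at least two characters to alternate
    return (len(set(s[0::2])) == 1      # all even-index characters identical
            and len(set(s[1::2])) == 1  # all odd-index characters identical
            and s[0] != s[1])           # the two characters must differ

print(is_two_alternating("ABABABAB"))  # True
print(is_two_alternating("XYZ"))       # False
print(is_two_alternating("AAAA"))      # False
```

Unlike the loop-based versions above, this variant also guards explicitly against strings shorter than two characters.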
[ { "code": null, "e": 25080, "s": 25052, "text": "\n06 May, 2021" }, { "code": null, "e": 25204, "s": 25080, "text": "Given a string str, the task is to check whether the given string is made up of only two alternating characters.Examples: " }, { "code": null, "e": 25...
Java - String startsWith() Method
This method has two variants and tests if a string starts with the specified prefix, beginning at a specified index or, by default, at the beginning. Here is the syntax of the two variants − public boolean startsWith(String prefix) public boolean startsWith(String prefix, int toffset) Here is the detail of parameters − prefix − the prefix to be matched. toffset − the index at which to begin looking for the prefix (second variant only). It returns true if the character sequence represented by the argument is a prefix of the character sequence represented by this string; false otherwise. import java.io.*; public class Test { public static void main(String args[]) { String Str = new String("Welcome to Tutorialspoint.com"); System.out.print("Return Value :" ); System.out.println(Str.startsWith("Welcome") ); System.out.print("Return Value :" ); System.out.println(Str.startsWith("Tutorials") ); } } This will produce the following result − Return Value :true Return Value :false
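As an aside (not part of the tutorial), Python's built-in str.startswith mirrors both Java variants, including the optional start offset:

```python
s = "Welcome to Tutorialspoint.com"

# Default form: match the prefix at the beginning of the string,
# like Java's startsWith(String prefix).
print(s.startswith("Welcome"))    # True
print(s.startswith("Tutorials"))  # False

# With a start index, the comparison begins at that offset,
# mirroring Java's startsWith(String prefix, int toffset).
print(s.startswith("Tutorialspoint", 11))  # True
```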
[ { "code": null, "e": 2521, "s": 2377, "text": "This method has two variants and tests if a string starts with the specified prefix beginning a specified index or by default at the beginning." }, { "code": null, "e": 2557, "s": 2521, "text": "Here is the syntax of this method −" ...
mplfinance — matplotlib’s relatively unknown library for plotting financial data | by Eryk Lewinson | Towards Data Science
It is a well-known fact that matplotlib is very versatile and can be used to create pretty much any kind of chart you want. It might not be the simplest or prettiest, but after viewing enough questions on StackOverflow it will most likely work out quite well in the end. I knew that it is possible to create financial plots such as a candlestick chart in pure matplotlib, but that is not the most pleasant experience and there are much easier ways to do it with libraries such as plotly or altair (I covered this in another article). However, only recently have I found out that there is a separate library/module of matplotlib dedicated to financial plotting. It is called mplfinance and in this article, I will show some of its nice and quite unique features. The setup is quite standard. First, we import the libraries. Then, we download stock prices to work with — for this article we use Apple’s stock prices from the second half of 2020. If you are interested in more details on downloading the stock prices, you can check out my other article on that. mplfinance offers a few kinds of plots useful for analyzing patterns in asset prices. The first one, also the default one in the library, is the OHLC chart. We can create it by simply using the plot function: mpf.plot(df["2020-12-01":]) where df is a pandas DataFrame containing OHLC data and a DatetimeIndex. We restrict the data to the last month only in order to clearly see the shape of the plot’s elements. The interpretation is pretty similar to candlestick charts. The horizontal line on the left indicates the open price, the one on the right the close price. The vertical line represents the volatility of the prices and we can read the high/low prices from the two extremes. At this point, it is worth mentioning that mplfinance offers an easy way to stack multiple layers of information on one chart. For example, imagine that we wanted to add the high and low prices as lines to the previously created plot. 
We can easily do that using the make_addplot function, as illustrated below. We first define the additional lines and then pass them as an extra argument to the plot function. Running the code generates the following image, which only confirms that the extremes of the vertical lines correspond to the high and low prices of the given day. Naturally, this is a simplified example. In a more complex case, we might be interested in adding some technical indicators, for example, the Bollinger Bands or a Simple Moving Average. We will get back to the latter one soon. We can also use the same functions to create symbols showing where we entered/exited a position. You can find a good example here. The next in line of the available types of plots is the candlestick chart. Generating them with mplfinance is as simple as adding an extra argument to the plot function. mpf.plot(df["2020-12-01":], type="candle") When you look at the candles and the dates, it is clear that there are some missing dates there. That naturally comes from the fact that the markets are closed on the weekends and some special days. If you want to take this into account, you can provide an extra argument to the plot function: mpf.plot(df["2020-12-01":], type="candle", show_nontrading=True) Let’s add even more information to the plot. First, there is a handy argument we can pass into the plot function — mav — which will automatically add any simple moving averages we want. For this plot, let’s take the 10- and 20-day MAs. Secondly, we can also add the traded volume. mpf.plot(df, type="candle", mav=(10, 20), volume=True) To be honest, mplfinance is the first place in which I saw the following two kinds of plots, as they are not as popular as OHLC and candlestick charts. The first one is called a Renko chart and is built using price movement, without taking into account a standardized time interval as most charts do. 
What it means in practice is that a new block is created when the price moves by a specified amount and each subsequent block is added at a 45-degree angle to the prior one, either above it or below. The most common use of the Renko chart is to filter out noise from the price series and to help with identifying trends in the prices. That is because all price movements smaller than the indicated box size are filtered out. We can create the Renko chart by simply specifying the type argument when using the plot function. mpf.plot(df,type="renko") We can also modify the brick’s size to our liking. In the following snippet, we set it to 2. mpf.plot(df, type="renko", renko_params=dict(brick_size=2)) The last type of plot available in the library is the Point and Figure chart. Similarly to the Renko chart, it does not take into account the passage of time. The P&F chart uses columns of stacked X’s and O’s, where each symbol stands for a certain price movement (determined by the box size, which we can tune to our preferences). X represents a rise in the price by a certain amount, while O stands for a drop. The last piece of information we need is the condition for creating new columns of different symbols (O’s following X’s, and vice versa). In order for a new column to be created, the price must change by the reversal amount, which is typically set to be three times the box size (in mplfinance, the default value is 1). mpf.plot(df, type="pnf") We can easily compare this P&F chart to the first Renko plot to see the very same patterns. Plots created using mplfinance are already quite nice to look at for one-liners, so definitely something that does not happen that often in pure matplotlib. However, we can use a few more options available in the plot function to make our plots even prettier. For the next plot, we change the ratio of the figure, add a title, choose the tight layout and apply a style. 
We use the binance style, which makes the plot similar to the ones available at the popular Crypto exchange. Personally, I would say that this is quite an improvement for the amount of extra code we had to write. If you are curious about what styles are available in the library, you can use the following command to view them all: mpf.available_styles() Lastly, we can also easily save the figure to a local file. To do so, we just need to provide the file’s name to the savefig argument of the plot function. The code would look as follows. mplfinance is a library in matplotlib’s portfolio dedicated to plotting asset price data the API is very easy to use and we can often create nice charts with one-liners the library offers some uncommon types of plots such as the Renko chart or the Point and Figure chart You can find the code used for this article on my GitHub. Also, any constructive feedback is welcome. I am also curious if you have heard about Renko/Point and Figure charts and maybe even used them in practice. You can reach out to me on Twitter or in the comments. If you are interested in learning about how to use Python for quantitative finance, you might want to check out Quantra (disclaimer: an affiliate link), which offers a variety of different courses on the topic. Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details. If you liked this article, you might also be interested in one of the following:
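To tie the pieces above together, here is a single self-contained sketch that builds synthetic OHLCV data and renders it with the options discussed in the article (candles, mav, volume, the binance style, savefig). The data is made up purely for illustration, and mplfinance is assumed to be installed; the try/except degrades gracefully if it is not:

```python
import numpy as np
import pandas as pd

# Synthetic OHLCV data on business days (illustrative only, not real prices).
idx = pd.date_range("2020-07-01", periods=120, freq="B")
close = 100 + np.cumsum(np.random.randn(len(idx)))
df = pd.DataFrame({
    "Open": close + np.random.rand(len(idx)),   # always <= High below
    "High": close + 2,
    "Low": close - 2,
    "Close": close,
    "Volume": np.random.randint(1_000, 5_000, len(idx)),
}, index=idx)

try:
    import mplfinance as mpf
    # One call combining the options covered above; savefig writes the
    # chart to a file instead of opening a window.
    mpf.plot(df, type="candle", mav=(10, 20), volume=True,
             style="binance", title="Synthetic OHLCV",
             savefig="candles.png")
except ImportError:
    print("mplfinance not installed; DataFrame prepared:", df.shape)
```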
[ { "code": null, "e": 442, "s": 171, "text": "It is a well-known fact that matplotlib is very versatile and can be used to create pretty much any kind of chart you want. It might not be the simplest or prettiest, but after viewing enough questions on StackOverflow it will most likely work out quite w...
How to access first value of an object using JavaScript ? - GeeksforGeeks
04 Sep, 2019 There are many methods to access the first value of an object in JavaScript some of them are discussed below: Example 1: This example accesses the first value object of GFG_object by using object.keys() method. <!DOCTYPE html> <html> <head> <title> How to access first value of an object using JavaScript ? </title> </head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksForGeeks </h1> <p id = "GFG_UP" style = "color:green;"> </p> <button onclick = "gfg_Fun()"> find First </button> <p id = "GFG_DOWN" style = "color:green; font-size: 20px;"> </p> <script> // Declare an object var GFG_object = {prop_1: "GFG_1", prop_2: "GFG_2", prop_3: "GFG_3"}; var el_up = document.getElementById("GFG_UP"); var el_down = document.getElementById("GFG_DOWN"); // Use SON.stringify() function to take object // or array and create JSON string el_up.innerHTML = JSON.stringify(GFG_object); // Access the first value of an object function gfg_Fun() { el_down.innerHTML = GFG_object[Object.keys(GFG_object)[0]]; } </script> </body> </html> Output: Before clicking on the button: After clicking on the button: Example 2: This example accesses the first value of object GFG_object by looping through the object and breaking the loop on accessing the first value. <!DOCTYPE html> <html> <head> <title> How to access first value of an object using JavaScript ? 
</title> </head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksForGeeks </h1> <p id = "GFG_UP" style = "color:green;"></p> <button onclick = "gfg_Fun()"> find First </button> <p id="GFG_DOWN" style="color:green;font-size:20px;"></p> <script> // Declare an object var GFG_object = {prop_1: "GFG_1", prop_2: "GFG_2", prop_3: "GFG_3"}; var el_up = document.getElementById("GFG_UP"); var el_down = document.getElementById("GFG_DOWN"); // Use SON.stringify() function to take object // or array and create JSON string el_up.innerHTML = JSON.stringify(GFG_object); // Function to access the first value of an object function gfg_Fun() { for (var prop in GFG_object) { el_down.innerHTML = GFG_object[prop] break; } } </script> </body> </html> Output: Before clicking on the button: After clicking on the button: Example 3: This example accesses the first value of objectGFG_object by using object.values() method. <!DOCTYPE html> <html> <head> <title> JavaScript | Access the first value of an object </title> </head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksForGeeks </h1> <p id = "GFG_UP" style = "color:green;"></p> <button onclick = "gfg_Fun()"> find First </button> <p id = "GFG_DOWN" style = "color:green; font-size: 20px;"></p> <script> // Declare an object var GFG_object = {prop_1: "GFG_value", prop_2: "GFG_2", prop_3: "GFG_3"}; var el_up = document.getElementById("GFG_UP"); var el_down = document.getElementById("GFG_DOWN"); // Use SON.stringify() function to take object // or array and create JSON string el_up.innerHTML = JSON.stringify(GFG_object); // Function to access the first value of an object function gfg_Fun() { el_down.innerHTML = Object.values(GFG_object)[0]; } </script> </body> </html> Output: Before clicking on the button: After clicking on the button: JavaScript-Misc JavaScript Web Technologies Web technologies Questions Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. 
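For comparison, the same three ideas translate directly to Python dictionaries, whose keys also preserve insertion order (Python 3.7+). This is an illustrative aside, not part of the article:

```python
GFG_object = {"prop_1": "GFG_1", "prop_2": "GFG_2", "prop_3": "GFG_3"}

# Mirror of GFG_object[Object.keys(GFG_object)[0]]:
first_by_key = GFG_object[list(GFG_object.keys())[0]]

# Mirror of Object.values(GFG_object)[0], without building a full list:
first_by_value = next(iter(GFG_object.values()))

print(first_by_key, first_by_value)  # GFG_1 GFG_1
```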
[ { "code": null, "e": 24722, "s": 24694, "text": "\n04 Sep, 2019" }, { "code": null, "e": 24832, "s": 24722, "text": "There are many methods to access the first value of an object in JavaScript some of them are discussed below:" }, { "code": null, "e": 24933, "s": ...
HEX() function in MySQL - GeeksforGeeks
04 Dec, 2020 HEX() : This function in MySQL is used to return an equivalent hexadecimal string value of a string or numeric Input. If the input is a string then each byte of each character in the string is converted to two hexadecimal digits. This function also returns a hexadecimal string representation of the numeric argument N treated as a longlong (BIGINT) number. Syntax : HEX(string) OR HEX(N) Parameter : This method accepts only one parameter. string – Input string who’s each character is to be converted to two hexadecimal digits. N – Input number which is to be converted to hexadecimal. Returns : It returns an equivalent hexadecimal string representation of a string or numeric Input. Example-1 : Hexadecimal representation of the decimal number 0 using HEX Function as follows. SELECT HEX(0) AS Hex_number ; Output : Example-2 : Hexadecimal representation of the decimal number 2020 using HEX Function as follows. SELECT HEX( 2020 ) AS Hex_number ; Output : Example -3 : Hexadecimal representation of the string ‘geeksforgeeks’ using HEX Function as follows. SELECT HEX( 'geeksforgeeks') AS Hex_string ; Output : Example-4 : Using HEX Function to find a hexadecimal representation of all decimal numbers present in a column as follows. Creating a Player table : CREATE TABLE Player( Player_id INT AUTO_INCREMENT, Player_name VARCHAR(100) NOT NULL, Playing_team VARCHAR(20) NOT NULL, Highest_Run_Scored INT NOT NULL, PRIMARY KEY(Player_id ) ); Inserting data into the Table : INSERT INTO Player(Player_name, Playing_team, Highest_Run_Scored) VALUES ('Virat Kohli', 'RCB', 60 ), ('Rohit Sharma', 'MI', 45), ('Dinesh Karthik', 'KKR', 26 ), ('Shreyash Iyer', 'DC', 40 ), ('David Warner', 'SRH', 65), ('Steve Smith', 'RR', 52 ), ('Andre Russell', 'KKR', 70), ('Jasprit Bumrah', 'MI', 10), ('Risabh Panth', 'DC', 34 ) ; To verify use the following command as follows. 
SELECT * FROM Player; Output : Now, we will find the highest run scored by each player in hexadecimal using the HEX Function. SELECT Player_id, Player_name, Playing_team, HEX(HIGHEST_RUN_SCORED) AS HighestRunInHexaDecimal FROM Player ; Output :
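The outputs above can be reproduced outside MySQL as a sanity check. The small helpers below (illustrative Python, not part of the article) mimic HEX() for numbers and for UTF-8 strings:

```python
def mysql_hex_number(n: int) -> str:
    # MySQL HEX(N) returns the uppercase base-16 representation of N.
    return format(n, "X")

def mysql_hex_string(s: str) -> str:
    # MySQL HEX(str) hex-encodes each byte of the string.
    return s.encode("utf-8").hex().upper()

print(mysql_hex_number(0))                 # 0
print(mysql_hex_number(2020))              # 7E4
print(mysql_hex_string("geeksforgeeks"))   # 6765656B73666F726765656B73
```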
[ { "code": null, "e": 25513, "s": 25485, "text": "\n04 Dec, 2020" }, { "code": null, "e": 25521, "s": 25513, "text": "HEX() :" }, { "code": null, "e": 25873, "s": 25521, "text": "This function in MySQL is used to return an equivalent hexadecimal string value of...
Difference between Procedural and Declarative Knowledge - GeeksforGeeks
28 Jun, 2020 Procedural Knowledge: Procedural Knowledge, also known as Interpretive knowledge, is the type of knowledge that clarifies how a particular thing can be accomplished. It is not as popular as declarative knowledge. It emphasizes how to do something to solve a given problem. Let's see it with an example: var a=[1, 2, 3, 4, 5]; var b=[]; for(var i=0;i<a.length;i++) { b.push(a[i]); } console.log(b); Output is: [1, 2, 3, 4, 5] Declarative Knowledge: Declarative Knowledge, also known as Descriptive knowledge, is the type of knowledge that states what is known about something, and it is more popular than Procedural Knowledge. It emphasizes what to do to solve a given problem. Let's see it with an example: var a=[1, 2, 3, 4, 5]; var b=a.map(function(number) { return number*1}); console.log(b); Output is: [1, 2, 3, 4, 5] In both examples we can see that the output is the same; the only difference lies in the two methods used to achieve it. Difference between Procedural and Declarative Knowledge:
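The article's JavaScript examples translate one-for-one into Python. This sketch (illustrative, not from the article) shows the same procedural/declarative contrast:

```python
a = [1, 2, 3, 4, 5]

# Procedural: spell out *how* to build the copy, step by step.
b = []
for x in a:
    b.append(x)

# Declarative: state *what* you want and let map do the work.
c = list(map(lambda n: n * 1, a))

print(b)  # [1, 2, 3, 4, 5]
print(c)  # [1, 2, 3, 4, 5]
```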
[ { "code": null, "e": 25992, "s": 25964, "text": "\n28 Jun, 2020" }, { "code": null, "e": 26305, "s": 25992, "text": "Procedural Knowledge:Procedural Knowledge also known as Interpretive knowledge, is the type of knowledge in which it clarifies how a particular thing can be accomp...
Build a Grocery Store Web App using PHP with MySQL - GeeksforGeeks
12 Mar, 2021 In this article, we are going to build a Grocery Store Web Application using PHP with MySQL. In this application, we can add grocery items by their name, quantity, status (pending, bought, not available), and date. We can view, delete and update those items. There will be a date filtering feature where we can view the grocery items according to the dates. Prerequisites: XAMPP Server, Basic Concepts of HTML, CSS, Bootstrap, PHP, and MySQL We will follow the following steps to build this application. Step-1: Open XAMPP Control Panel and start Apache and MySQL services. In XAMPP folder, go to htdocs folder and create a folder named project1. We will keep all the files in project1 folder. Inside this folder, there will be five files (add.php, connect.php, delete.php, index.php, update.php) and one folder called css inside which a file called style.css will be there. Step-2: Go to localhost/phpMyAdmin and create a database called grocerydb. Under that, make a table called grocerytb with 5 columns. The columns are Id (primary key), Item_name, Item_Quantity, Item_status, and Date. The auto-increment mode should be on for the Id column. Finally, the table structure should look like shown in the given image. Step-3: Open the editor of your choice. Make a file named connect.php and code the following lines. connect.php <?php $con=mysqli_connect("localhost","root","","grocerydb"); if(!$con) { die("cannot connect to server"); } ?> This page is made to connect our PHP page with the database “grocerydb”. After connecting with this database, the connection object is returned to $con variable. If connection is not established, “cannot connect to server” message will be displayed. Step-4: Create another file named add.php and code the following lines. 
add.php <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Add List</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css"> <link rel="stylesheet" href="css/style.css"></head> <body> <div class="container mt-5"> <h1>Add Grocery List</h1> <form action="add.php" method="POST"> <div class="form-group"> <label>Item name</label> <input type="text" class="form-control" placeholder="Item name" name="iname" /> </div> <div class="form-group"> <label>Item quantity</label> <input type="text" class="form-control" placeholder="Item quantity" name="iqty" /> </div> <div class="form-group"> <label>Item status</label> <select class="form-control" name="istatus"> <option value="0"> PENDING </option> <option value="1"> BOUGHT </option> <option value="2"> NOT AVAILABLE </option> </select> </div> <div class="form-group"> <label>Date</label> <input type="date" class="form-control" placeholder="Date" name="idate"> </div> <div class="form-group"> <input type="submit" value="Add" class="btn btn-danger" name="btn"> </div> </form> </div> <?php if(isset($_POST["btn"])) { include("connect.php"); $item_name=$_POST['iname']; $item_qty=$_POST['iqty']; $item_status=$_POST['istatus']; $date=$_POST['idate']; $q="insert into grocerytb(Item_name, Item_Quantity,Item_status,Date) values('$item_name',$item_qty, '$item_status','$date')"; mysqli_query($con,$q); header("location:index.php"); } // if(!mysqli_query($con,$q)) // { // echo "Value Not Inserted"; // } // else // { // echo "Value Inserted"; // } ?></body> </html> This page is made to insert the grocery items data from HTML form to the “grocerytb” table in the “grocerydb” database. The html form contains the Item name, Item Quantity, Item status, and Date values which are to be entered by the user. We have set the option value as 0, 1, and 2 for Pending, Bought, and Not Available (for item status) respectively. 
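A side note on the INSERT above: concatenating user input directly into the query string, as add.php does, leaves the application open to SQL injection. A safer pattern is a parameterized query; the idea is sketched below with Python's sqlite3, purely for illustration (the table mirrors the article's grocerytb, but nothing here is the article's PHP code):

```python
import sqlite3

# In-memory database standing in for the MySQL grocerydb of the article.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE grocerytb (
    Id INTEGER PRIMARY KEY AUTOINCREMENT,
    Item_name TEXT NOT NULL,
    Item_Quantity INTEGER NOT NULL,
    Item_status INTEGER NOT NULL,
    Date TEXT NOT NULL)""")

# Placeholders (?) keep user input out of the SQL text entirely.
con.execute(
    "INSERT INTO grocerytb (Item_name, Item_Quantity, Item_status, Date) "
    "VALUES (?, ?, ?, ?)",
    ("Apples", 3, 0, "2021-01-14"),
)
row = con.execute("SELECT Item_name, Item_status FROM grocerytb").fetchone()
print(row)  # ('Apples', 0)
```

In add.php, the rough equivalent would be a mysqli prepared statement (prepare() plus bind_param()), leaving the overall flow of the article unchanged.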
When a button is clicked, we include the file “connect.php” to connect the page with the database. Then, we are fetching all the data entered by the user and inserting them into the “grocerytb” table. If the values are entered successfully in the table, the page will move to “index.php” which will enable the user to view the items entered so far (or the items which are in the table as of now). Create a “style.css” file inside css folder and code the following. style.css @import url('https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;700&display=swap'); body { font-family: 'Poppins', sans-serif; font-weight: 300; background-color: beige;} h1, h2, h3, h4, h5 { font-family: 'Poppins', sans-serif; font-weight: 700;} The “add.php” file should look like shown in the given image. Step-5: Make another file named index.php and code the following lines. index.php <?php include("connect.php"); if (isset($_POST['btn'])) { $date=$_POST['idate']; $q="select * from grocerytb where Date='$date'"; $query=mysqli_query($con,$q); } else { $q= "select * from grocerytb"; $query=mysqli_query($con,$q); }?> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>View List</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css"> <link rel="stylesheet" href="css/style.css"></head> <body> <div class="container mt-5"> <!-- top --> <div class="row"> <div class="col-lg-8"> <h1>View Grocery List</h1> <a href="add.php">Add Item</a> </div> <div class="col-lg-4"> <div class="row"> <div class="col-lg-8"> <!-- Date Filtering--> <form method="post" action=""> <input type="date" class="form-control" name="idate"> <div class="col-lg-4" method="post"> <input type="submit" class="btn btn-danger float-right" name="btn" value="filter"> </div> </form> </div> </div> </div> </div> <!-- Grocery Cards --> <div class="row mt-4"> <?php while ($qq=mysqli_fetch_array($query)) { ?> <div class="col-lg-4"> <div class="card"> 
                    <div class="card-body">
                        <h5 class="card-title">
                            <?php echo $qq['Item_name']; ?>
                        </h5>
                        <h6 class="card-subtitle mb-2 text-muted">
                            <?php echo $qq['Item_Quantity']; ?>
                        </h6>
                        <?php if($qq['Item_status'] == 0) { ?>
                            <p class="text-info">PENDING</p>
                        <?php } else if($qq['Item_status'] == 1) { ?>
                            <p class="text-success">BOUGHT</p>
                        <?php } else { ?>
                            <p class="text-danger">NOT AVAILABLE</p>
                        <?php } ?>
                        <a href="delete.php?id=<?php echo $qq['Id']; ?>" class="card-link">Delete</a>
                        <a href="update.php?id=<?php echo $qq['Id']; ?>" class="card-link">Update</a>
                    </div>
                </div><br>
            </div>
            <?php } ?>
        </div>
    </div>
</body>
</html>

We are again including “connect.php” to connect the page with the database. Then, we fetch all the rows from the table using the function mysqli_fetch_array() and display them on the page. Every item has a Delete and an Update link. Using the Add Item link at the top, the page moves back to “add.php”, from where the user can add more grocery items to the database.

We are also adding a date-filtering feature on this page. When a user enters a date and clicks on the filter button, only the grocery items matching the date entered will be displayed.

For now, our “grocerytb” table looks like shown in the given image. After moving to the “index.php” file, the page will look like shown in the given image. After entering the date 01/14/2021, the page will look like shown in the given image.

Step-6: Make another file named update.php and code the following lines.
update.php

<?php
    include("connect.php");
    if(isset($_POST['btn'])) {
        $item_name=$_POST['iname'];
        $item_qty=$_POST['iqty'];
        $istatus=$_POST['istatus'];
        $date=$_POST['idate'];
        $id = $_GET['id'];
        $q= "update grocerytb set Item_name='$item_name', Item_Quantity='$item_qty', Item_status='$istatus', Date='$date' where Id=$id";
        $query=mysqli_query($con,$q);
        header('location:index.php');
    }
    else if(isset($_GET['id'])) {
        $q = "SELECT * FROM grocerytb WHERE Id='".$_GET['id']."'";
        $query=mysqli_query($con,$q);
        $res= mysqli_fetch_array($query);
    }
?>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Update List</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css">
    <link rel="stylesheet" href="css/style.css">
</head>
<body>
    <div class="container mt-5">
        <h1>Update Grocery List</h1>
        <form method="post">
            <div class="form-group">
                <label>Item name</label>
                <input type="text" class="form-control" name="iname" placeholder="Item name" value="<?php echo $res['Item_name'];?>" />
            </div>
            <div class="form-group">
                <label>Item quantity</label>
                <input type="text" class="form-control" name="iqty" placeholder="Item quantity" value="<?php echo $res['Item_Quantity'];?>" />
            </div>
            <div class="form-group">
                <label>Item status</label>
                <select class="form-control" name="istatus">
                    <?php if($res['Item_status'] == 0) { ?>
                        <option value="0" selected>PENDING</option>
                        <option value="1">BOUGHT</option>
                        <option value="2">NOT AVAILABLE</option>
                    <?php } else if($res['Item_status'] == 1) { ?>
                        <option value="0">PENDING</option>
                        <option value="1" selected>BOUGHT</option>
                        <option value="2">NOT AVAILABLE</option>
                    <?php } else if($res['Item_status'] == 2) { ?>
                        <option value="0">PENDING</option>
                        <option value="1">BOUGHT</option>
                        <option value="2" selected>NOT AVAILABLE</option>
                    <?php } ?>
                </select>
            </div>
            <div class="form-group">
                <label>Date</label>
                <input type="date" class="form-control" name="idate" placeholder="Date" value="<?php
echo $res['Date']?>">
            </div>
            <div class="form-group">
                <input type="submit" value="Update" name="btn" class="btn btn-danger">
            </div>
        </form>
    </div>
</body>
</html>

In “index.php”, we passed the Id of every item through the Update link, so in “update.php” the user can edit any record. For the chosen item, we read the id from the query string together with the updated values from the form. Then, we run an update query through which the item gets updated. After the item is updated, the page moves back to “index.php”.

Here, we update the item with Item_name pineapple and Id 6: its Item_Quantity changes from 1 to 2 and its Item_status from Pending to Not available. After that, the page will look like shown in the given image. After updating, index.php will look like this. The updated table will look like this.

Step-7: Make another file named delete.php and code the following lines.

delete.php

<?php
    include("connect.php");
    $id = $_GET['id'];
    $q = "delete from grocerytb where Id = $id ";
    mysqli_query($con,$q);
?>

In “index.php”, we also passed the Id of every item through the Delete link, so that any record can be deleted. For the item that is to be deleted, “delete.php” reads the id from the query string. Then, we run a delete query through which the selected item’s record gets deleted.

We delete the item with Id 6 and Item_name pineapple. After deleting it, the page will look like shown in the given image. And the table will look like this.

Source Code Link – https://github.com/anshu37/grocery-php-project
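One caveat worth noting: every query in this tutorial splices request parameters ($_POST, $_GET) directly into the SQL string, which leaves the app open to SQL injection. As a hedged sketch (not part of the original tutorial), the same select/update/delete logic can be written with mysqli prepared statements. The table and column names below are the ones used throughout this article, and $con is assumed to come from “connect.php”:

```php
<?php
// Sketch: parameterized versions of this tutorial's queries using
// mysqli prepared statements. Assumes $con is created in connect.php
// and the grocerytb schema used throughout this article.
include("connect.php");

// Filtered select (index.php): bind the date instead of interpolating it.
$stmt = mysqli_prepare($con, "SELECT * FROM grocerytb WHERE Date = ?");
mysqli_stmt_bind_param($stmt, "s", $_POST['idate']);
mysqli_stmt_execute($stmt);
$query = mysqli_stmt_get_result($stmt);   // requires the mysqlnd driver

// Update (update.php): the four values are bound as strings, Id as an integer.
$stmt = mysqli_prepare($con,
    "UPDATE grocerytb SET Item_name=?, Item_Quantity=?, Item_status=?, Date=? WHERE Id=?");
mysqli_stmt_bind_param($stmt, "ssssi",
    $_POST['iname'], $_POST['iqty'], $_POST['istatus'], $_POST['idate'], $_GET['id']);
mysqli_stmt_execute($stmt);

// Delete (delete.php).
$stmt = mysqli_prepare($con, "DELETE FROM grocerytb WHERE Id = ?");
mysqli_stmt_bind_param($stmt, "i", $_GET['id']);
mysqli_stmt_execute($stmt);
?>
```

The result of mysqli_stmt_get_result() can be consumed with the same mysqli_fetch_array($query) loop already used in index.php, so the rest of the pages do not need to change. (This sketch needs a live MySQL server and the connect.php from the article to actually run.)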