Method Chaining In Java with Examples
07 Dec, 2021
Method chaining is the practice of calling multiple methods on the same object reference in a single statement instead of calling each one separately. With this technique, we write the object reference once and then call the methods on it, separating them with the dot (.) operator.
Method chaining is a common syntax in object-oriented Java for invoking multiple method calls in sequence. Each method in the chain returns an object, which eliminates the need for intermediate variables. In other words, method chaining means taking an object and calling methods on it one after another in a single expression.
Syntax:
obj.method1().method2().method3();
In the above statement, we have an object (obj) and we call method1(), then method2(), and after that method3(). Invoking methods one after another like this is known as method chaining.
Note: Method chaining in Java is also known as the named parameter idiom. It is sometimes called a train wreck because of the growing number of chained methods, even though line breaks are often added between them.
Let’s examine the example first, and then it will be much easier to explain.
Example 1:
Java
class A {
    private int a;
    private float b;

    A() {
        System.out.println("Calling The Constructor");
    }

    int setint(int a) {
        this.a = a;
        return this.a;
    }

    float setfloat(float b) {
        this.b = b;
        return this.b;
    }

    void display() {
        System.out.println("Display=" + a + " " + b);
    }
}

// Driver code
public class Example {
    public static void main(String[] args) {
        // This will return an error as
        // display() method needs an object but
        // setint(10) is returning an int value
        // instead of an object reference
        new A().setint(10).display();
    }
}
Compilation Error in the Java code:
prog.java:34: error: int cannot be dereferenced
new A().setint(10).display();
^
1 error
Explanation:
When we call the constructor, note that although a constructor declares no return type, the new expression yields a reference to the newly created object.
Since an object reference is returned, we can use that reference to call another method as well.
Thus, using the dot (.) operator, we can call another method on it, such as setint(10). Trying to call the display() method after that, however, is impossible. Why? Check out the next point.
The setint(10) method returns the integer value of the variable, and a further method cannot be called on a primitive value. To solve this, setint(10) must return an object reference instead. How can that be done? Example 2 shows the fix.
Example 2:
Java
class A {
    private int a;
    private float b;

    A() {
        System.out.println("Calling The Constructor");
    }

    public A setint(int a) {
        this.a = a;
        return this;
    }

    public A setfloat(float b) {
        this.b = b;
        return this;
    }

    void display() {
        System.out.println("Display=" + a + " " + b);
    }
}

// Driver code
public class Example {
    public static void main(String[] args) {
        // This is the "method chaining".
        new A().setint(10).setfloat(20).display();
    }
}
Calling The Constructor
Display=10 20.0
In the above example, setint(int a) and setfloat(float b) are declared with the class type A as their return type.
In this case, each method returns “this”, the reference to the current instance.
When the method chaining is invoked in the main method, setint(10) and setfloat(20) each return the object’s reference, which is then used to call the next method in the chain, ending with display().
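This pattern also appears throughout the JDK itself: StringBuilder is a familiar example, since append() returns the builder it was called on. A minimal sketch (the class name ChainingDemo is ours for illustration):

```java
public class ChainingDemo {
    public static void main(String[] args) {
        // Each append() returns the same StringBuilder,
        // so the calls can be chained in one expression.
        String s = new StringBuilder()
                       .append("Display=")
                       .append(10)
                       .append(' ')
                       .append(20.0f)
                       .toString();
        System.out.println(s); // prints "Display=10 20.0"
    }
}
```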
Check if a string is the typed name of the given name - GeeksforGeeks
29 May, 2021
Given a name and a typed name of a person. Sometimes, when typing a vowel [aeiou], the key might get long-pressed, and the character will be typed 1 or more extra times. The task is to examine the typed name and tell whether it could have been produced from the person’s name with some characters (possibly none) being long-pressed. Return ‘True‘ if it could, else ‘False‘.
Note: name and typed-name are separated by a space, with no spaces inside individual names. Each character of the name is unique.
Examples:
Input: str = “geeks”, typed = “geeeeks”
Output: True
The vowel ‘e’ repeats more times in typed and all other characters match.
Input: str = “alice”, typed = “aallicce”
Output: False
Here ‘l’ and ‘c’ are repeated, which are not vowels. Hence name and typed-name represent different names.
Input: str = “alex”, typed = “aaalaeex”
Output: False
A vowel ‘a’ is extra in typed.
Approach: The idea is based on Run Length Encoding. We consider only vowels and count their consecutive occurrences in str and typed. For each vowel, the count of consecutive occurrences in str must not exceed the count in typed.
Below is the implementation of the above approach.
C++
Java
Python3
C#
PHP
Javascript
// CPP program to implement run length encoding
#include <bits/stdc++.h>
using namespace std;

// Check if the character is vowel or not
bool isVowel(char c)
{
    string vowel = "aeiou";
    for (int i = 0; i < vowel.length(); ++i)
        if (vowel[i] == c)
            return true;
    return false;
}

// Returns true if 'typed' is a typed name
// given str
bool printRLE(string str, string typed)
{
    int n = str.length(), m = typed.length();

    // Traverse through all characters of str.
    int j = 0;
    for (int i = 0; i < n; i++) {

        // If current characters do not match
        if (str[i] != typed[j])
            return false;

        // If not vowel, simply move ahead in both
        if (isVowel(str[i]) == false) {
            j++;
            continue;
        }

        // Count occurrences of current vowel in str
        int count1 = 1;
        while (i < n - 1 && str[i] == str[i + 1]) {
            count1++;
            i++;
        }

        // Count occurrences of current vowel in
        // typed.
        int count2 = 1;
        while (j < m - 1 && typed[j] == str[i]) {
            count2++;
            j++;
        }

        if (count1 > count2)
            return false;
    }
    return true;
}

int main()
{
    string name = "alex", typed = "aaalaeex";
    if (printRLE(name, typed))
        cout << "Yes";
    else
        cout << "No";
    return 0;
}
// Java program to implement run length encoding

public class Improve {

    // Check if the character is vowel or not
    static boolean isVowel(char c)
    {
        String vowel = "aeiou";
        for (int i = 0; i < vowel.length(); ++i)
            if (vowel.charAt(i) == c)
                return true;
        return false;
    }

    // Returns true if 'typed' is a typed name
    // given str
    static boolean printRLE(String str, String typed)
    {
        int n = str.length(), m = typed.length();

        // Traverse through all characters of str.
        int j = 0;
        for (int i = 0; i < n; i++) {

            // If current characters do not match
            if (str.charAt(i) != typed.charAt(j))
                return false;

            // If not vowel, simply move ahead in both
            if (isVowel(str.charAt(i)) == false) {
                j++;
                continue;
            }

            // Count occurrences of current vowel in str
            int count1 = 1;
            while (i < n - 1 && str.charAt(i) == str.charAt(i + 1)) {
                count1++;
                i++;
            }

            // Count occurrences of current vowel in
            // typed.
            int count2 = 1;
            while (j < m - 1 && typed.charAt(j) == str.charAt(i)) {
                count2++;
                j++;
            }

            if (count1 > count2)
                return false;
        }
        return true;
    }

    public static void main(String args[])
    {
        String name = "alex", typed = "aaalaeex";
        if (printRLE(name, typed))
            System.out.println("Yes");
        else
            System.out.println("No");
    }
    // This code is contributed by ANKITRAI1
}
# Python3 program to implement run
# length encoding

# Check if the character is
# vowel or not
def isVowel(c):
    vowel = "aeiou"
    for i in range(len(vowel)):
        if(vowel[i] == c):
            return True
    return False

# Returns true if 'typed' is a
# typed name given str
def printRLE(str, typed):
    n = len(str)
    m = len(typed)

    # Traverse through all
    # characters of str
    j = 0
    for i in range(n):

        # If current characters do
        # not match
        if str[i] != typed[j]:
            return False

        # If not vowel, simply move
        # ahead in both
        if isVowel(str[i]) == False:
            j = j + 1
            continue

        # Count occurrences of current
        # vowel in str
        count1 = 1
        while (i < n - 1 and (str[i] == str[i + 1])):
            count1 = count1 + 1
            i = i + 1

        # Count occurrence of current
        # vowel in typed
        count2 = 1
        while(j < m - 1 and typed[j] == str[i]):
            count2 = count2 + 1
            j = j + 1

        if count1 > count2:
            return False
    return True

# Driver code
name = "alex"
typed = "aaalaeex"
if (printRLE(name, typed)):
    print("Yes")
else:
    print("No")

# This code is contributed
# by Shashank_Sharma
// C# program to implement run
// length encoding
using System;

class GFG {

    // Check if the character is
    // vowel or not
    public static bool isVowel(char c)
    {
        string vowel = "aeiou";
        for (int i = 0; i < vowel.Length; ++i) {
            if (vowel[i] == c) {
                return true;
            }
        }
        return false;
    }

    // Returns true if 'typed' is
    // a typed name given str
    public static bool printRLE(string str, string typed)
    {
        int n = str.Length, m = typed.Length;

        // Traverse through all
        // characters of str.
        int j = 0;
        for (int i = 0; i < n; i++) {

            // If current characters
            // do not match
            if (str[i] != typed[j]) {
                return false;
            }

            // If not vowel, simply move
            // ahead in both
            if (isVowel(str[i]) == false) {
                j++;
                continue;
            }

            // Count occurrences of current
            // vowel in str
            int count1 = 1;
            while (i < n - 1 && str[i] == str[i + 1]) {
                count1++;
                i++;
            }

            // Count occurrences of current
            // vowel in typed
            int count2 = 1;
            while (j < m - 1 && typed[j] == str[i]) {
                count2++;
                j++;
            }

            if (count1 > count2) {
                return false;
            }
        }
        return true;
    }

    // Driver Code
    public static void Main(string[] args)
    {
        string name = "alex", typed = "aaalaeex";
        if (printRLE(name, typed)) {
            Console.WriteLine("Yes");
        }
        else {
            Console.WriteLine("No");
        }
    }
}

// This code is contributed
// by Shrikant13
<?php
// PHP program to implement
// run length encoding

// Check if the character is vowel or not
function isVowel($c)
{
    $vowel = "aeiou";
    for ($i = 0; $i < strlen($vowel); ++$i)
        if ($vowel[$i] == $c)
            return true;
    return false;
}

// Returns true if 'typed'
// is a typed name
// given str
function printRLE($str, $typed)
{
    $n = strlen($str);
    $m = strlen($typed);

    // Traverse through all
    // characters of str.
    $j = 0;
    for ($i = 0; $i < $n; $i++) {

        // If current characters
        // do not match
        if ($str[$i] != $typed[$j])
            return false;

        // If not vowel, simply
        // move ahead in both
        if (isVowel($str[$i]) == false) {
            $j++;
            continue;
        }

        // Count occurrences of
        // current vowel in str
        $count1 = 1;
        while ($i < $n - 1 && $str[$i] == $str[$i + 1]) {
            $count1++;
            $i++;
        }

        // Count occurrences of
        // current vowel in typed.
        $count2 = 1;
        while ($j < $m - 1 && $typed[$j] == $str[$i]) {
            $count2++;
            $j++;
        }

        if ($count1 > $count2)
            return false;
    }
    return true;
}

// Driver code
$name = "alex";
$typed = "aaalaeex";
if (printRLE($name, $typed))
    echo "Yes";
else
    echo "No";

// This code is contributed
// by Shivi_Aggarwal
?>
<script>

// Javascript program to implement
// run length encoding

// Check if the character is vowel or not
function isVowel(c)
{
    let vowel = "aeiou";
    for(let i = 0; i < vowel.length; ++i)
    {
        if (vowel[i] == c)
        {
            return true;
        }
    }
    return false;
}

// Returns true if 'typed' is
// a typed name given str
function printRLE(str, typed)
{
    let n = str.length, m = typed.length;

    // Traverse through all
    // characters of str.
    let j = 0;
    for(let i = 0; i < n; i++)
    {

        // If current characters
        // do not match
        if (str[i] != typed[j])
        {
            return false;
        }

        // If not vowel, simply move
        // ahead in both
        if (isVowel(str[i]) == false)
        {
            j++;
            continue;
        }

        // Count occurrences of current
        // vowel in str
        let count1 = 1;
        while (i < n - 1 && str[i] == str[i + 1])
        {
            count1++;
            i++;
        }

        // Count occurrences of current
        // vowel in typed
        let count2 = 1;
        while (j < m - 1 && typed[j] == str[i])
        {
            count2++;
            j++;
        }

        if (count1 > count2)
        {
            return false;
        }
    }
    return true;
}

// Driver code
let name = "alex", typed = "aaalaeex";
if (printRLE(name, typed))
{
    document.write("Yes");
}
else
{
    document.write("No");
}

// This code is contributed by decode2207

</script>
No
Time Complexity: O(m + n)
Auxiliary Space: O(1)
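The same vowel-run comparison can also be sketched more compactly in Python using itertools.groupby to run-length encode both strings up front (an alternative formulation for illustration, not the code above; the function name is ours):

```python
from itertools import groupby

def is_typed_name(name, typed):
    # Run-length encode a string into [(char, run_length), ...]
    rle = lambda s: [(c, len(list(g))) for c, g in groupby(s)]
    a, b = rle(name), rle(typed)
    # Both strings must have the same sequence of character runs
    if len(a) != len(b):
        return False
    for (c1, n1), (c2, n2) in zip(a, b):
        if c1 != c2:
            return False
        # Only vowels may be long-pressed, and a run may only grow
        if n1 != n2 and (c1 not in "aeiou" or n2 < n1):
            return False
    return True

print(is_typed_name("geeks", "geeeeks"))   # True
print(is_typed_name("alice", "aallicce"))  # False
print(is_typed_name("alex", "aaalaeex"))   # False
```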
How to repeat column values of a data.table object in R by number of values in another column?
To repeat column values of a data.table object by the values in another column, we can follow the steps below −
First of all, create a data.table object.
Then, use rep function along with cbind function to repeat column values in the matrix by values in another column.
Let’s create a data.table object as shown below −
library(data.table)
x<-1:10
y<-sample(1:5,10,replace=TRUE)
DT<-data.table(x,y)
DT
On executing, the above script generates the below output (this output will vary on your system due to randomization) −
x y
1: 1 2
2: 2 4
3: 3 1
4: 4 5
5: 5 3
6: 6 4
7: 7 2
8: 8 5
9: 9 1
10: 10 1
Repeat column values by values in another column
Use the rep function along with the cbind function to repeat the values of column x in the data.table object DT by the values in column y −
library(data.table)
x<-1:10
y<-sample(1:5,10,replace=TRUE)
DT<-data.table(x,y)
cbind(rep(DT$x,times=DT$y),rep(DT$y,times=DT$y))
[,1] [,2]
[1,] 1 2
[2,] 1 2
[3,] 2 4
[4,] 2 4
[5,] 2 4
[6,] 2 4
[7,] 3 1
[8,] 4 5
[9,] 4 5
[10,] 4 5
[11,] 4 5
[12,] 4 5
[13,] 5 3
[14,] 5 3
[15,] 5 3
[16,] 6 4
[17,] 6 4
[18,] 6 4
[19,] 6 4
[20,] 7 2
[21,] 7 2
[22,] 8 5
[23,] 8 5
[24,] 8 5
[25,] 8 5
[26,] 8 5
[27,] 9 1
[28,] 10 1
JavaScript - Array splice() Method
The JavaScript array splice() method changes the content of an array, adding new elements while removing old ones.
Its syntax is as follows −
array.splice(index, howMany, [element1][, ..., elementN]);
index − Index at which to start changing the array.

howMany − An integer indicating the number of old array elements to remove. If howMany is 0, no elements are removed.

element1, ..., elementN − The elements to add to the array. If you don't specify any elements, splice simply removes the elements from the array.
Returns an array containing the removed elements.
Try the following example.
<html>
<head>
<title>JavaScript Array splice Method</title>
</head>
<body>
<script type = "text/javascript">
var arr = ["orange", "mango", "banana", "sugar", "tea"];
var removed = arr.splice(2, 0, "water");
document.write("After adding 1: " + arr );
document.write("<br />removed is: " + removed);
removed = arr.splice(3, 1);
document.write("<br />After adding 1: " + arr );
document.write("<br />removed is: " + removed);
</script>
</body>
</html>
After adding 1: orange,mango,water,banana,sugar,tea
removed is:
After adding 1: orange,mango,water,sugar,tea
removed is: banana
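As a cross-language illustration (a sketch, not part of the original JavaScript tutorial), the same two splice calls can be mimicked in Python with slice assignment and deletion:

```python
arr = ["orange", "mango", "banana", "sugar", "tea"]

# splice(2, 0, "water"): insert "water" at index 2, remove nothing
removed = arr[2:2]          # nothing is removed, so this is []
arr[2:2] = ["water"]
print(arr)                  # ['orange', 'mango', 'water', 'banana', 'sugar', 'tea']

# splice(3, 1): remove one element starting at index 3
removed = arr[3:4]
del arr[3:4]
print(arr)                  # ['orange', 'mango', 'water', 'sugar', 'tea']
print(removed)              # ['banana']
```

This mirrors the output of the HTML example above: the first call only inserts, the second removes "banana".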
KnockoutJS - Disable Binding
This binding is the negation of enable binding. This binding disables the associated DOM element when the parameter evaluates to true.
disable: <binding-value>
Parameter consists of Boolean like value, which decides whether the element should be disabled or not. If the parameter is true or true-like value, then the element is disabled.

Non-Boolean values are considered as loosely Boolean values. Meaning 0 and null are considered as false-like value and Integer and non-null objects are considered as true-like value.

If the condition in parameter contains an observable value, then the condition is re-evaluated whenever the observable value changes. Correspondingly, related markup will be disabled based on the condition result.
Let us take a look at the following example which demonstrates the use of disable binding.
<!DOCTYPE html>
   <head>
      <title>KnockoutJS Disable Binding</title>
      <script src = "https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"
         type = "text/javascript"></script>
   </head>

   <body>
      <p> Enter your feedback here:<br><br>
         <textarea rows = 5 data-bind = "value: hasFeedback,
            valueUpdate: 'afterkeydown'" ></textarea>
      </p>

      <p><button data-bind = "disable: !(hasFeedback())">Save Feedback</button></p>

      <script type = "text/javascript">
         function ViewModel () {
            hasFeedback = ko.observable('');
         };

         var vm = new ViewModel();
         ko.applyBindings(vm);
      </script>

   </body>
</html>
Let's carry out the following steps to see how the above code works −
Save the above code in disable-bind.htm file.

Open this HTML file in a browser.

The save button is disabled when the user has not entered any feedback.
You can also use an arbitrary expression to decide whether the element should be disabled or not.
C++ Program to Perform Complex Number Multiplication
Complex numbers are numbers that are expressed as a+bi, where i is an imaginary number and a and b are real numbers. Some examples of complex numbers are −
2+3i
5+9i
4+2i
A program to perform complex number multiplication is as follows −
#include<iostream>
using namespace std;
int main(){
   int x1, y1, x2, y2, x3, y3;
   cout<<"Enter the first complex number : "<<endl;
   cin>> x1 >> y1;

   cout<<"\nEnter second complex number : "<<endl;
   cin>> x2 >> y2;
   x3 = x1 * x2 - y1 * y2;
   y3 = x1 * y2 + y1 * x2;
   cout<<"The value after multiplication is: "<<x3<<" + "<<y3<<" i ";
   return 0;
}
The output of the above program is as follows
Enter the first complex number : 2 1
Enter second complex number : 3 4
The value after multiplication is: 2 + 11 i
In the above program, the user inputs both the complex numbers. This is given as follows −
cout<<"Enter the first complex number : "<<endl;
cin>> x1 >> y1;
cout<<"\nEnter second complex number : "<<endl;
cin>> x2 >> y2;
The product of the two complex numbers is found by the required formula. This is given as follows −
x3 = x1 * x2 - y1 * y2;
y3 = x1 * y2 + y1 * x2;
Finally, the product is displayed. This is given below −
cout<<"The value after multiplication is: "<<x3<<" + "<<y3<<" i ";
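As a quick cross-check (a sketch outside the original C++ program), Python's built-in complex type applies the same formula, since (x1 + y1·i)(x2 + y2·i) = (x1·x2 − y1·y2) + (x1·y2 + y1·x2)·i:

```python
a = complex(2, 1)   # the first complex number from the sample run: 2 + 1i
b = complex(3, 4)   # the second complex number: 3 + 4i

product = a * b
print(product)      # (2+11j), matching "2 + 11 i" from the C++ output
```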
SciPy Tutorial
SciPy is a scientific computation library that uses NumPy underneath.
SciPy stands for Scientific Python.
We have created 10 tutorial pages for you to learn the fundamentals of SciPy:
Apache HttpClient - Http Post Request
A POST request is used to send data to the server; for example, customer information, file
upload, etc., using HTML forms.
The HttpClient API provides a class named HttpPost which represents the POST request.
Follow the steps given below to send an HTTP POST request using the HttpClient library.
The createDefault() method of the HttpClients class returns an object of the class CloseableHttpClient, which is the base implementation of the HttpClient interface.
Using this method, create an HttpClient object.
CloseableHttpClient httpClient = HttpClients.createDefault();
The HttpPost class represents the HTTP POST request. This sends required data and retrieves the information of the given server using a URI.
Create this request by instantiating the HttpPost class and pass a string value representing the URI, as a parameter to its constructor.
HttpPost httpPost = new HttpPost("http://www.tutorialspoint.com/");
The execute() method of the CloseableHttpClient object accepts a HttpUriRequest (interface) object (i.e. HttpGet, HttpPost, HttpPut, HttpHead etc.) and returns a response object.
HttpResponse httpResponse = httpClient.execute(httpPost);
Following is an example which demonstrates the execution of the HTTP POST request using
HttpClient library.
import java.util.Scanner;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpPostExample {

   public static void main(String args[]) throws Exception{

      //Creating a HttpClient object
      CloseableHttpClient httpclient = HttpClients.createDefault();

      //Creating a HttpPost object
      HttpPost httppost = new HttpPost("https://www.tutorialspoint.com/");

      //Printing the method used
      System.out.println("Request Type: "+httppost.getMethod());

      //Executing the POST request
      HttpResponse httpresponse = httpclient.execute(httppost);

      Scanner sc = new Scanner(httpresponse.getEntity().getContent());

      //Printing the status line
      System.out.println(httpresponse.getStatusLine());
      while(sc.hasNext()) {
         System.out.println(sc.nextLine());
      }
   }
}
The above program generates the following output.
Request Type: POST
<!DOCTYPE html>
<!--[if IE 8]><html class = "ie ie8"> <![endif]-->
<!--[if IE 9]><html class = "ie ie9"> <![endif]-->
<!--[if gt IE 9]><!-->
<html lang = "en-US"> <!--<![endif]-->
<head>
<!-- Basic -->
<meta charset = "utf-8">
<title>Parallax Scrolling, Java Cryptography, YAML, Python Data Science, Java
i18n, GitLab, TestRail, VersionOne, DBUtils, Common CLI, Seaborn, Ansible,
LOLCODE, Current Affairs 2018, Apache Commons Collections</title>
<meta name = "Description" content = "Parallax Scrolling, Java Cryptography, YAML,
Python Data Science, Java i18n, GitLab, TestRail, VersionOne, DBUtils, Common
CLI, Seaborn, Ansible, LOLCODE, Current Affairs 2018, Intellij Idea, Apache
Commons Collections, Java 9, GSON, TestLink, Inter Process Communication (IPC),
Logo, PySpark, Google Tag Manager, Free IFSC Code, SAP Workflow"/>
<meta name = "Keywords" content="Python Data Science, Java i18n, GitLab,
TestRail, VersionOne, DBUtils, Common CLI, Seaborn, Ansible, LOLCODE, Gson,
TestLink, Inter Process Communication (IPC), Logo"/>
<meta http-equiv = "X-UA-Compatible" content = "IE = edge">
<meta name = "viewport" conten t= "width = device-width,initial-scale = 1.0,userscalable = yes">
<link href = "https://cdn.muicss.com/mui-0.9.39/extra/mui-rem.min.css"
rel = "stylesheet" type = "text/css" />
<link rel = "stylesheet" href = "/questions/css/home.css?v = 3" />
<script src = "/questions/js/jquery.min.js"></script>
<script src = "/questions/js/fontawesome.js"></script>
<script src = "https://cdn.muicss.com/mui-0.9.39/js/mui.min.js"></script>
</head>
. . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . .
</script>
</body>
</html>
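For comparison only (an analogy using Python's standard library, not part of the HttpClient tutorial), the same POST flow can be sketched with urllib; note that supplying a data payload is what switches the request method to POST:

```python
import urllib.request

# Build (but do not send) a POST request to the same URL as the Java example.
req = urllib.request.Request("https://www.tutorialspoint.com/", data=b"")

print("Request Type: " + req.get_method())  # Request Type: POST
```

To actually execute the request, urllib.request.urlopen(req) would return the response object, analogous to httpclient.execute(httppost).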
Implementing word2vec in PyTorch (skip-gram model) | by Mateusz Bednarski | Towards Data Science
You have probably heard about the word2vec embedding. But do you really understand how it works? I thought I did. But I had not, until I implemented it.
This is why I’m creating this guide.
2021 Update: For a more detailed article see: https://neptune.ai/blog/word-embeddings-guide
I assume you know more or less what word2vec is.
Corpus
In order to be able to track every single step, I'm using the following nano corpus:

The very first step in word2vec is to create the vocabulary. It has to be built at the beginning, as extending it is not supported.
Vocabulary is basically a list of unique words with assigned indices.
The corpus is very simple and short. In a real implementation we would have to perform case normalization, remove some punctuation, etc., but for simplicity let's use this nice and clean data. Anyway, we have to tokenize it:
This will give us a list of tokens:
[['he', 'is', 'a', 'king'], ['she', 'is', 'a', 'queen'], ['he', 'is', 'a', 'man'], ['she', 'is', 'a', 'woman'], ['warsaw', 'is', 'poland', 'capital'], ['berlin', 'is', 'germany', 'capital'], ['paris', 'is', 'france', 'capital']]
We iterate over the tokens in the corpus and generate a list of unique words (tokens). Next we create two dictionaries for mapping between word and index.
Which gives us:
0: 'he', 1: 'is', 2: 'a', 3: 'king', 4: 'she', 5: 'queen', 6: 'man', 7: 'woman', 8: 'warsaw', 9: 'poland', 10: 'capital', 11: 'berlin', 12: 'germany', 13: 'paris', 14: 'france'
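The vocabulary-building step described above can be sketched in plain Python (a minimal reconstruction consistent with the token list and the mapping shown, not necessarily the article's exact code):

```python
corpus = [
    'he is a king', 'she is a queen', 'he is a man', 'she is a woman',
    'warsaw is poland capital', 'berlin is germany capital',
    'paris is france capital',
]
tokenized_corpus = [sentence.split() for sentence in corpus]

# Collect unique tokens in order of first appearance
vocabulary = []
for sentence in tokenized_corpus:
    for token in sentence:
        if token not in vocabulary:
            vocabulary.append(token)

# Two dictionaries for mapping between word and index
word2idx = {w: idx for idx, w in enumerate(vocabulary)}
idx2word = {idx: w for w, idx in word2idx.items()}

print(word2idx['he'], word2idx['king'])  # 0 3
```

Running this reproduces exactly the 15-word mapping listed above.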
We can now generate (center word, context word) pairs. Let's assume the context window is symmetric and equal to 2.

It gives us a list of (center, context) index pairs:
array([[ 0, 1], [ 0, 2], ...
Which can be easily translated to words:
he is
he a
is he
is a
is king
a he
a is
a king
Which makes perfect sense.
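The pair-generation step can be sketched as follows (a minimal reconstruction; shown for the first sentence only, with indices matching the vocabulary above):

```python
tokenized_corpus = [['he', 'is', 'a', 'king']]      # first sentence only
word2idx = {'he': 0, 'is': 1, 'a': 2, 'king': 3}

window_size = 2
idx_pairs = []
for sentence in tokenized_corpus:
    indices = [word2idx[word] for word in sentence]
    for center_pos, center_idx in enumerate(indices):
        for offset in range(-window_size, window_size + 1):
            context_pos = center_pos + offset
            # skip the center itself and positions outside the sentence
            if offset == 0 or context_pos < 0 or context_pos >= len(indices):
                continue
            idx_pairs.append((center_idx, indices[context_pos]))

print(idx_pairs[:2])  # [(0, 1), (0, 2)] -> ('he', 'is'), ('he', 'a')
```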
Now, we are going through details from very first equation to working implementation.
For skip-gram we are interested in predicting the context, given the center word and some parametrization. This is our probability distribution for a single pair.

Now, we want to maximize it through all word/context pairs.
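The maximization objective (the equation image here did not survive extraction; the formula below is the standard skip-gram likelihood being described, with T tokens and window size m):

```latex
\max_{\theta} \; \prod_{t=1}^{T} \; \prod_{\substack{-m \le j \le m \\ j \ne 0}} P(w_{t+j} \mid w_t ; \theta)
```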
Wait, why?
As we are interested in predicting context given center word, we want to maximize P(context|center) for each (context, center) pair. As the probabilities sum up to 1 — we are implicitly making P(context|center) close to 0 for all non-existing (context, center) pairs. By multiplying those probabilities we make this function close to 1 if our model is good and close to 0 if it is bad. Of course we are pursuing a good one — so there is a max operator at the beginning.
This expression is not very suitable to compute. That’s why we are going to perform some very common transformations.
Step 1 — Replace probability with negative log likelihood.
Recall that neural nets are about minimizing a loss function. We could simply multiply P by minus one, but applying log gives us better computational properties. This does not change the position of the function's extrema (as log is a strictly monotonic function). So the expression is changed to:
Step 2 — Replace products with sums
The next step is to replace products with sums. We can do it because:
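The identity being used here (reconstructed, as the original equation image is missing) is that the log of a product equals the sum of the logs:

```latex
\log \prod_{i} p_i = \sum_{i} \log p_i
```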
Step 3 — Transform to a proper loss function
And after dividing by the number of pairs (T) we have our final loss term:
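The final loss term (reconstructed from the surrounding derivation; the original equation image is missing) is the averaged negative log likelihood:

```latex
J(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \; \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log P(w_{t+j} \mid w_t ; \theta)
```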
Great, but how do we define P(context|center)? For now, let's assume that each word actually has two vectors: one if it is present as a center word (v), and a second one if it is a context word (u). Given that, the definition of P looks as follows:
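The definition of P (reconstructed; this is the standard skip-gram softmax that the following paragraphs dissect, with context vector u_o, center vector v_c, and a vocabulary of size V):

```latex
P(o \mid c) = \frac{\exp(u_o^{\top} v_c)}{\sum_{w=1}^{V} \exp(u_w^{\top} v_c)}
```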
That’s scary!
Let me break it down to smaller parts. See following structure:
This is just a softmax function. Now a closer look at the numerator:
Both u and v are vectors. This expression is just the scalar product of a given (center, context) pair. It gets bigger as the two vectors become more similar to each other.
Now, denominator:
We’re iterating over all words in vocabulary.
And computing “similarity” for given center word and each word in vocabulary treated as context word.
To sum up:
For each existing (center, context) pair in the corpus we compute their “similarity score”, and divide it by the sum over every theoretically possible context — to know whether the score is relatively high or low. As softmax is guaranteed to take a value between 0 and 1, it defines a valid probability distribution.
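To make the numerator/denominator story concrete, here is a sketch computing P(context|center) by hand; the tiny u and v vectors are made up purely for illustration:

```python
import math

# Made-up vectors for a 3-word vocabulary (pure illustration)
v_center = [0.5, -0.2]                               # v vector of the center word
u_vectors = [[0.4, 0.1], [-0.3, 0.9], [0.2, -0.5]]   # one u vector per vocabulary word

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Numerator: similarity score of the center word with each candidate context
scores = [dot(u, v_center) for u in u_vectors]
# Denominator: sum over every word in the vocabulary
denom = sum(math.exp(s) for s in scores)
probs = [math.exp(s) / denom for s in scores]

# Softmax yields a valid probability distribution
assert abs(sum(probs) - 1.0) < 1e-9
assert all(0.0 < p < 1.0 for p in probs)
```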
Neural net implementing this concept consists of three layers: input, hidden and output.
The input layer is just the center word encoded in a one-hot manner. Its dimensions are [1, vocabulary_size].
The hidden layer makes our v vectors, therefore it has to have embedding_dims neurons. To compute its value we have to define the W1 weight matrix. Of course its shape has to be [embedding_dims, vocabulary_size]. There is no activation function — just plain matrix multiplication.
What’s important — each column of W1 stores the v vector for a single word. Why? Because x is one-hot, and if you multiply a one-hot vector by a matrix, the result is the same as selecting a single column from it. Try it on your own using a piece of paper ;)
Last layer must have vocabulary_size neurons — because it generates probabilities for each word. Therefore, W2 is [vocabulary_size, embedding_dims] in terms of shape.
On top of that we have to use a softmax layer. PyTorch provides an optimized version of this, combined with log — because regular softmax is not really numerically stable:
log_softmax = F.log_softmax(a2, dim=0)
This is equivalent to computing softmax and then applying log.
Now we can compute loss. As usual PyTorch provides everything we need:
loss = F.nll_loss(log_softmax.view(1,-1), y_true)
The nll_loss computes the negative log likelihood on the log-softmax. y_true is the context word — we want to make its probability as high as possible — because the pair x, y_true comes from the training data, so they are indeed a (center, context) pair.
As we have finished the forward pass, it’s time to perform the backward pass. Simply:
loss.backward()
For optimization, SGD is used. It is so simple that it was faster to write it by hand instead of creating an optimizer object:
W1.data -= 0.01 * W1.grad.data
W2.data -= 0.01 * W2.grad.data
Last step is to zero gradients to make next pass clear:
W1.grad.data.zero_()
W2.grad.data.zero_()
Time to compile it into training loop. It can look like:
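The full loop was embedded in the original post as a gist; the following is a minimal sketch assembled from the snippets above. The toy idx_pairs and the vocabulary/embedding sizes are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

# Toy (center, context) index pairs and sizes; assumptions for illustration
idx_pairs = [(0, 1), (0, 2), (1, 0), (1, 2), (1, 3), (2, 0), (2, 1), (2, 3)]
vocabulary_size = 15
embedding_dims = 5

def get_input_layer(word_idx):
    # One-hot encoding of the center word
    x = torch.zeros(vocabulary_size)
    x[word_idx] = 1.0
    return x

W1 = torch.randn(embedding_dims, vocabulary_size, requires_grad=True)
W2 = torch.randn(vocabulary_size, embedding_dims, requires_grad=True)

for epo in range(101):
    loss_val = 0.0
    for center, context in idx_pairs:
        x = get_input_layer(center)
        y_true = torch.tensor([context], dtype=torch.long)

        a1 = torch.matmul(W1, x)               # hidden layer: v vector of the center word
        a2 = torch.matmul(W2, a1)              # scores for every word in the vocabulary
        log_softmax = F.log_softmax(a2, dim=0)

        loss = F.nll_loss(log_softmax.view(1, -1), y_true)
        loss_val += loss.item()
        loss.backward()

        # Hand-written SGD step, then zero the gradients for the next pass
        W1.data -= 0.01 * W1.grad.data
        W2.data -= 0.01 * W2.grad.data
        W1.grad.data.zero_()
        W2.grad.data.zero_()

    if epo % 10 == 0:
        print(f'Loss at epo {epo}: {loss_val / len(idx_pairs)}')
```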
One potentially tricky thing is the y_true definition. We do not create the one-hot target explicitly — nll_loss handles the class index by itself.
Loss at epo 0: 4.241989389487675
Loss at epo 10: 3.8398486052240646
Loss at epo 20: 3.5548086541039603
Loss at epo 30: 3.343840673991612
Loss at epo 40: 3.183084646293095
Loss at epo 50: 3.05673006943294
Loss at epo 60: 2.953996729850769
Loss at epo 70: 2.867735825266157
Loss at epo 80: 2.79331214427948
Loss at epo 90: 2.727727291413716
Loss at epo 100: 2.6690095041479385
Ok, we have trained the network. One last thing is to extract vectors for the words. It is possible in three ways:
Use vector v from W1
Use vector u from W2
Use average v and u
Try to think on your own when to use which one ;)
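A sketch of the three options; the weights here are random stand-ins for the trained W1 and W2:

```python
import torch

# Stand-ins for trained weights: columns of W1 are v vectors, rows of W2 are u vectors
embedding_dims, vocabulary_size = 5, 15
W1 = torch.randn(embedding_dims, vocabulary_size)
W2 = torch.randn(vocabulary_size, embedding_dims)

idx = 3  # hypothetical index of 'king' in the vocabulary

v = W1[:, idx]        # option 1: v vector from W1
u = W2[idx, :]        # option 2: u vector from W2
emb = (v + u) / 2     # option 3: average of both
```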
I’m working on an online interactive demonstration of this. It should be available soon. Stay tuned ;)
You can download code from here.
How Floating Point Numbers Work. With Applications to Deep Learning and... | by Ravi Charan | Towards Data Science
It is a pesky fact that computers work in binary approximations while humans tend to think in terms of exact values. This is why, in your high school physics class, you may have experienced “rounding error” when computing intermediate numerical values in your solutions and why, if you open a python terminal and compute 0.1 * 3, you will get a weird result.[1]
>>> 0.1 + 0.1 + 0.1
0.30000000000000004
This makes floating point numbers an example of a leaky abstraction. Normally, python and numerical computing libraries like numpy or PyTorch handle this behind the scenes. But understanding the details can help you avoid otherwise unexpected errors and speed up many machine learning computations. For example, Google’s Tensor Processing Units (TPUs) use a modified floating point format to substantially improve computational efficiency while trying to maintain good results.
In this article we’ll dig into the nuts and bolts of floating point numbers, cover the edge cases (numerical underflow and overflow), and close with applications: TPU’s bfloat16 format and HDR imaging. The main background assumed is that you understand how to count in binary, as well as how binary fractions work.
Let’s briefly review counting to 5 in binary: 0, 1, 10, 11, 100, 101. Got it? This is great for an unsigned integer; one which is never negative. For example, if we have an 8 bit unsigned integer, we can represent numbers between 00000000 and 11111111. In decimal, that’s between 0 and 2^8 − 1 = 255. For example, most standard image formats are 8-bit color, which is why the “RGB” values go from 0 to 255.
Note also that we would typically abbreviate this with a hexadecimal (base 16) representation: 0x00 to 0xFF. The 0x prefix means “this is a hex number”. The hexadecimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F; so F is essentially short for the four bits “1111” (both 0xF and 1111 are 15 in base-10). Also 8 bits are a byte, so our number is a measly 1 byte. But we won’t focus too much on hexadecimal in this article.
Now, you will notice that with an unsigned int, we can’t represent simple numbers like -2. One way you could try to solve this is to make the first bit represent the sign. Say “0” means negative and “1” means positive. Thinking about 4-bit numbers, 0111 would be -7, while 1111 would be +7. However, this has some weird features. For example, 0000 is “-0” while 1000 is “+0”. This is not great: comparing two numbers for equality would get tricky; plus we are wasting space.
The standard solution to this is to use Two’s Complement, which is what most implementations use for signed integers. (There is also a little-used One’s Complement). However, this isn’t what we are going to need for floating point numbers, so we won’t delve into it.
Let’s consider instead a biased representation of a signed 8-bit integer. It’s biased because, well it’s off by a bit. Instead of letting 00000000 represent 0, we will instead use 01111111 to represent 0. This would normally represent 127 in base 10. But we have biased our representation by 127. That means that 00000000 represents –127, while 11111111 represents 128.
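A quick sketch of this biased scheme (for illustration only; as noted above, real signed integers use two's complement, not a bias):

```python
# 8-bit biased integer with bias 127: stored bits = value + 127
BIAS = 127

def encode(value):
    # value in [-127, 128] maps to stored bits in [0, 255]
    assert -127 <= value <= 128
    return value + BIAS

def decode(stored):
    return stored - BIAS

assert encode(0) == 0b01111111      # 01111111 represents 0
assert decode(0b00000000) == -127   # all zeros  -> -127
assert decode(0b11111111) == 128    # all ones   -> 128
```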
Since most recently produced personal computers use a 64 bit processor, it’s pretty common for the default floating-point implementation to be 64 bit. This is called “double precision” because it is double of the previous-standard 32-bit precision (common computers switched to 64 bit processors sometime in the last decade).
For context, the basic idea of a floating point number is to use the binary-equivalent of scientific notation. Your high-school science teachers hopefully drilled into you exactly how to do this (along with a whole bunch about those dreaded significant figures – sigfigs). For example, the scientific representation of 8191.31 is:

8.19131 × 10^3
You should notice three key elements. First, a sign (is the number + or -?). Second, we always write the number with a single digit (between 1 and 9 inclusive), followed by a decimal point, followed by a number of digits. Compare that to representations like 81.9131 × 10^2 or 0.819131 × 10^4, which are not in scientific notation even though they are true mathematical facts.
With that in mind, let’s think about what will change when we go to binary. First of all, instead of using 10 as the base of the exponent (also called the radix), we’ll want to use 2. Secondly, instead of decimal fractions, we’ll want to use binary fractions.
Please note that I have chosen to write the radix (2 or 10) and their exponents (1 or 0 respectively) in their decimal forms while the numbers on the left hand side and the significands are in binary or decimal respectively.
The binary number 1101 is 13 in base 10. And 13/16 is 0.8125. This is a binary fraction. If you haven’t played with these yet, you should convince yourself of the following:

0.1101 (binary) = 1101/10000 (binary) = 13/16 = 0.8125
This is the binary version of the fact that 0.3 is 3/10 and 0.55 is 55/100 (which can be further simplified, of course).
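A tiny helper makes it easy to check binary fractions like these (the function is made up for illustration):

```python
def binary_fraction(bits):
    # Convert a string like '0.1101' to its decimal value:
    # each digit after the point contributes digit * 2^-position
    whole, frac = bits.split('.')
    value = float(int(whole, 2)) if whole else 0.0
    for i, b in enumerate(frac, start=1):
        value += int(b) * 2.0 ** -i
    return value

assert binary_fraction('0.1101') == 0.8125   # 13/16
assert binary_fraction('1.1') == 1.5
```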
Great. We are now ready to dig into the details of floating point numbers.
Here is the diagram for the “IEEE 754” standard commonly implemented. The first bit is the sign. 0 is positive and 1 is negative (the opposite of what we naïvely suggested above). There are 11 bits for the exponent and 52 or 53 (depending how you count) bits for the fraction, also called the “mantissa” or “significand”. The sign just works like the flag we saw above, so we’ll go into each of the last two in some depth.
The exponent is an 11-bit biased (signed) integer like we saw before, but with some caveats. The bias is 2^10 − 1 = 1023, so that the 11 bits 01111111111 represent 0.
This would normally mean that the largest possible exponent is represented by the 11 bits 11111111111 (representing 2^11 − 1 − 1023 = 1024) and the smallest possible exponent is represented by the 11 bits 00000000000 (representing −1023).
However, as we will discuss:
The exponent represented by 11111111111 is reserved for infinities and NaNs.
The 00000000000 exponent is reserved for representing 0 and something else we’ll get to.
This means that the exponent can, in normal circumstances, be between –1022 and 1023 (2046 possible values).
The 52-bit significand represents a binary fraction. If you review the scientific notation section above, you’ll see that whenever we write a binary number in “binary scientific notation,” the leading digit is always 1. (In base 10 it could be between 1 and 9, but 2–9 aren’t binary digits). Since we know the leading digit will always be 1 (with some caveats to be discussed), we don’t need to actually store it on the computer (this would be wasteful). This is why I said the significand is 53 bits “depending on how you count.”
In other words, the 52 bits stored on the computer represent the 52 bits that come after the decimal point (or maybe we should call it a “binary point”). A leading 1 is always assumed.
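One way to see the three fields of a concrete double is to unpack its raw bits; a sketch using Python's struct module:

```python
import struct

def float_bits(x):
    # Pack the double into 8 big-endian bytes, reinterpret as a 64-bit integer,
    # then slice off sign (1 bit), biased exponent (11 bits), fraction (52 bits)
    (bits,) = struct.unpack('>Q', struct.pack('>d', x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

# 1.0 = +1.0 x 2^0: biased exponent 1023, fraction bits all zero
assert float_bits(1.0) == (0, 1023, 0)
# 3.0 = +1.1 (binary) x 2^1: biased exponent 1024, top fraction bit set
assert float_bits(3.0) == (0, 1024, 1 << 51)
# Negative numbers only flip the sign bit
assert float_bits(-1.0) == (1, 1023, 0)
```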
I keep mentioning some caveats, and I intend to put them off for as long as possible. A “normal number” is a non-zero number that doesn’t use any of these caveats, and we are in a position to give some examples.
Recall the three components:
1 bit for the sign
11 bits for the exponent, which is (in decimal) between –1022 and +1023. It is represented as a biased integer in the binary encoding.
52 bits for the significand.
How would we represent the decimal number 1?
Well, the sign is positive, so the sign bit is 0. (Think of 1 as a flag for “negative”). The exponent is 0. Remembering that the biased representation means we add 1023, we get the binary representation 01111111111. Finally, all the fraction bits are 0. Easy:
I’ve written the binary floating-point representation with a space separating the three parts. As usual, the radix and exponent in the “binary scientific” representation are actually in base 10.
What about a harder example, like 3? 3 is 1.5 times 2 (in decimal), so turning that into a binary fraction, we have 1.1. The exponent 2^1 is represented as 10000000000, accounting for bias.
What’s the largest (normal) number we can get? We should make the exponent 11111111110 (we can’t make it all ones, that’s reserved), which in decimal is 1023.
We can compute this as (2 − 2^−52) × 2^1023 = 2^1024 − 2^971,
but we can also take advantage of the fact that Python has native arbitrary-precision integer arithmetic to gratuitously write out all 309 digits in base 10:
>>> 2 ** 1024 - 2 ** 971
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
The smallest possible float is just the negative of this. But what is the smallest positive (normal) float? We already said the smallest positive exponent is –1022. Make the significand all 0s, and that means the smallest positive normal floating point number is:
Again, arbitrary precision integer arithmetic means we can exploit the middle fraction to easily get an exact decimal value in all its glory.
>>> numerator = 5 ** 1022
>>> print('0', str(numerator).rjust(1022, '0'), sep='.')
0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002225073858507201383090232717332404064219215980462331830553327416887204434813918195854283159012511020564067339731035811005152434161553460108856012385377718821130777993532002330479610147442583636071921565046942503734208375250806650616658158948720491179968591639648500635908770118304874799780887753749949451580451605050915399856582470818645113537935804992115981085766051992433352114352390148795699609591288891602992641511063466313393663477586513029371762047325631781485664350872122828637642044846811407613911477062801689853244110024161447421618567166150540154285084716752901903161322778896729707373123334086988983175067838846926092773977972858659654941091369095406136467568702398678315290680984617210924625396728515625
You know, just in case you were curious. By the way, you can check all of this on your python + hardware setup with:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
and essentially every other programming language has a similar feature.
Okay, here’s where things get weird. If all of the exponent bits are 1, then the number represented is either infinite or not a number (NaN):
If the fraction bits are all 0, the number is infinite. The sign bit controls whether it is –∞ or +∞.
If the fraction bits are not all 0, the “number” is not a number (NaN). Depending on the first bit it can be either a quiet NaN or a signaling NaN. A quiet NaN propagates (add another number to it and you just get NaN). A signaling NaN is supposed to “throw an error”, roughly speaking.[2] The remaining bits are typically not used.
The thing I initially found surprising about this is that this is a hardware implementation on commonly used chips. This means, for example, you can use it on a GPU. Why would you want to do that? Well, consider the convenient fact that e to the power of –∞ is 0.
>>> from math import exp
>>> minus_infinity = float('-inf')
>>> exp(minus_infinity)
0.0
In the paper that introduced the transformer architecture for NLP tasks (the one used by BERT, GPT-2, and their more recent cousins), the training was autoregressive which meant that in the attention module’s softmax layers, certain outputs were required to be 0. But if you look at the formula for the softmax and recall that your high school math teacher told you that “there is no number such that exponentiating to it is 0,” you will see it’s tricky to make a softmax return 0. Unless of course, you make (minus) infinity a number!
And, crucially, this is a hardware implementation. If it was a gimmicky Python (or PyTorch, or Numpy) workaround that represented numbers as an object which might sometimes contain a floating point number, this would substantially slow down numerical computations.
Also, the unending complexity of computer hardware is always impressive.
But wait, there’s more! We haven’t even described how to represent 0 yet. Using our exponents and our fraction bits, we were only able to make a very small positive number, not actually 0. The solution of course is that if the exponent bits are all 0 and so is the fraction, then the number is 0. In other words, if the exponent bits are 00000000000 and the fraction bits are also all zeros. Note this means that 0 is “signed” – there is both +0 and –0. In Python, they are stored differently, but they are equal to each other.
>>> zero = 0.0
>>> print(zero, -zero)
0.0 -0.0
>>> zero == -zero
True
There are a few edge cases where things get weird though. When trying to compute an angle with atan2, you will see that they are in fact represented differently:
>>> from math import atan2
>>> zero = 0.0
>>> print(atan2(zero, zero), atan2(zero, -zero))
0.0 3.141592653589793
The final case to cover is when all the exponent bits are 0, but the fraction bits are not 0. If we have a representation that doesn’t use some possible bit sequences, we are wasting space. So why not use it to represent even smaller numbers? These numbers are called subnormal (or denormal) numbers.
Basically, the rule is that the exponent is still considered to have its minimal value (−1022) and instead of our “binary scientific” notation always starting with a 1 (as in 1.001), we assume instead that it starts with a 0. So we can have 0.001 times 2 to the power of −1022. This lets us represent numbers with an effective exponent up to 52 smaller (as small as −1074). Thus:
>>> 2 ** -1074
5e-324
>>> 2 ** -1075
0.0
>>> 2 ** -1075 == 0
True
The benefits of subnormal numbers are that, when you subtract two different normal floats, you are guaranteed to get a non-zero result. The cost is lost precision (there is no precision stored in the leading 0s – remember how sigfigs work?). This is called gradual underflow. As floats get smaller and smaller, they gradually lose precision.
Without subnormal numbers you would have to flush to zero, losing all your precision at once and significantly increasing the chance that you’ll accidentally end up dividing by 0. However, subnormal numbers significantly slow down calculations.
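You can observe gradual underflow directly from Python; `sys.float_info.min` is the smallest positive normal double:

```python
import sys

smallest_normal = sys.float_info.min   # 2.2250738585072014e-308
# Halving the smallest normal float gives a subnormal, not zero:
subnormal = smallest_normal / 2
assert subnormal > 0

# Gradual underflow: values shrink (losing precision) until they finally hit 0
assert 2.0 ** -1074 > 0        # smallest positive subnormal
assert 2.0 ** -1075 == 0.0     # one power of two further underflows to zero
```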
Okay, we spent all this time talking about floating point numbers. Besides some weird edge case about 0.1 * 3 that never really comes up, who cares?
Besides the 64-bit float we explored at length, there are also 32-bit floats (single precision) and 16-bit floats (half-precision) commonly available. PyTorch and other numerical computing libraries tend to stick to 32-bit floats by default. Half the size means the computations can be done faster (half as many bits to crunch).
But lower precision comes with a cost. With a standard half-precision float (5 exponent bits, 10 significand bits), the smallest number bigger than 1 is about 1.001. You can’t represent the integer 2049 (you have to pick either 2050 or 2048; and no decimals in between either). 65504 is the largest possible finite number.
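A sketch of these half-precision limits using NumPy's float16, which implements the standard IEEE half format:

```python
import numpy as np

# float16 has a 10-bit significand, so above 2048 not every integer is representable
assert float(np.float16(2048)) == 2048.0
assert float(np.float16(2049)) == 2048.0   # rounds to the nearest representable value
assert float(np.float16(2050)) == 2050.0

# The largest finite float16 value:
assert float(np.finfo(np.float16).max) == 65504.0
```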
Google’s Tensor Processing Units instead use a modified 16-bit format for multiplication as part of their many optimizations for deep-learning tasks. The 8-bit exponent with 7-bit significand has just as many exponent bits as a 32-bit floating point number. And it turns out that in deep learning applications, this matters more than the significand bits. Also, when multiplying, the exponents can be added (easy) while the significand bits have to be multiplied (harder). Making the significand smaller makes the silicon that multiplies floats about 8 times smaller.
Plus, the TPU float format flushes to zero instead of using subnormal numbers to boost speed.
If you read the Google blog post about their custom 16-bit float format, you’ll see they talk about “dynamic range.” In fact, something similar is going on with HDR images (like the ones you can capture on your phone).
A standard image uses an 8-bit RGB encoding. Those 8 bits represent an unsigned integer between 0 and 255. The problem with this is that the relative precision (% jump between consecutive values) is much worse when it’s darker. For example, between a (decimal) pixel value of 10 and 11, there is a 10% jump! But for bright values, the relative difference between 250 and 251 is just 0.4%.
Now the human eye is more sensitive to changes in brightness with dark tones than with bright ones. Meaning the fixed-precision representation is the opposite of what we’d want. Thus, a standard digital or phone camera shooting a JPEG or similar adjusts its sensitivity by recording relatively more precision in the darker tones using a gamma encoding.
The downside to this is that, even if you add bits (say with a 16-bit RGB image), you don’t necessarily gain as much precision in the parts of your image that are bright.
So, an HDR image uses floating point numbers to represent the pixels! This allows a high “dynamic” range (the exponent can be high or low) while still maintaining relative precision across all brightness scales. Perfect for keeping the data from scenes with high contrast. For example in the Radiance HDR format, the exponent is shared across the three colors (channels) in each pixel.
This might be more than you ever wanted to know about floating point numbers. With any luck, you won’t encounter too much numerical under-flow or over-flow that can’t be solved with a simple log-sum-exp or arbitrary-precision integers. But if you do, you’ll be well-prepared! Hopefully, you are also well-positioned to think about just how much precision you need in your machine-learning models as well.
[1] Note: this article assumes a relatively standard setup. It is possible (though uncommon) your results could differ depending on your hardware and software implementation.
[2] I mean, your Python interpreter is allowed to throw an error, crash, and then stop doing things. Your CPU can’t do that exactly: it has to stay alive.
Machine learning hardware (FPGAs, GPUs, CUDA) | Towards Data Science
There are options outside of GPUs (Graphics Processing Units) when it comes to deploying a neural network, namely the FPGA (Field Programmable Gate Array). Before delving into FPGAs and their implementation though, it’s good to understand a bit about GPU architecture and why GPUs are a staple for neural networks.
Popular libraries such as Tensorflow run using CUDA (Compute Unified Device Architecture) to process data on GPUs, harnessing their parallel computing power. This work is called GPGPU (General Purpose GPU) programming. It has been adapted to deep learning models which require at least thousands of arithmetic operations.
The deep convolutional neural network, as shown below, requires filters to be slid across pixel regions while outputting a weighted sum at each iteration. For each layer this process gets repeated thousands of times with varying filters of the same size. Logically, deep models get computationally heavy and GPUs come in handy.
Tensorflow can be built on the back of CUDA, which saves the end user from implementing parallel code and understanding the architecture of their chip. Its convenience and high optimization makes it perfect for widespread use.
FPGAs did not offer such a convenient solution earlier, using them required a deep understanding of how hardware works. But recent progress has made them more accessible and there’s more on that to come later.
Contents of this article assume little to no knowledge of how different hardware models function. It goes over the following:
GPUs, CPUs and CUDA
Advantages and design of FPGAs
HDL as a method for FPGA deployment
HLS as a method for FPGA deployment
FPGA deployment using LeFlow
Features of LeFlow for optimisation
CPUs (Central Processing Units) are designed for serial operations and supporting advanced logic. This is reflected in their design, which contains fewer cores and more cache memory to quickly fetch complex instructions.
GPUs, however, have hundreds of smaller cores for simple computation, and thus a higher throughput than CPUs.
CUDA accesses a GPU’s many cores by abstracting them into blocks. Each block contains up to 512 accessible threads, with potentially 65,535 blocks able to run at once. Every thread executes a short program, and the catch is that it can run in parallel with other threads. Tensorflow takes advantage of this pattern to improve processing power, often running hundreds to thousands of threads simultaneously.
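The block/thread scheme can be mimicked in plain Python (the 512-thread and block figures are the ones quoted above; the index formula is the standard CUDA flat-indexing idiom):

```python
# Plain-Python mimic of CUDA's standard flat-index idiom:
# global_id = blockIdx.x * blockDim.x + threadIdx.x
def global_thread_ids(num_blocks, threads_per_block):
    for block in range(num_blocks):
        for thread in range(threads_per_block):
            yield block * threads_per_block + thread

# 4 blocks of 512 threads cover array elements 0..2047:
ids = list(global_thread_ids(num_blocks=4, threads_per_block=512))
print(len(ids), ids[0], ids[-1])  # 2048 0 2047
```

On a GPU the inner loop does not exist: each (block, thread) pair runs concurrently and uses its computed id to pick which array element to process.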
To learn more about using CUDA visit Nvidia’s Developer Blog or check out the book CUDA By Example.
Tensorflow is divided into two sections: library and runtime.
Library is the creation of a computational graph (neural network) and runtime is the execution of it on some hardware platform.
The preferred platform is a GPU, however there is an alternative: FPGAs.
FPGAs can produce circuits with thousands of memory units for computation, so they work similarly to GPUs and their threads in CUDA. FPGAs have adaptable architecture, enabling additional optimisations for an increase in throughput. Thus the possible volume of calculations makes FPGAs a viable alternative to GPUs.
Comparatively FPGAs have lower power consumption and can be optimal for embedded applications. They are also an accepted standard in safety-critical operations such as ADAS (Advanced Driver Assistance Systems) in automotive.
Furthermore, FPGAs can implement custom data types, whereas GPUs are limited by their architecture. With neural networks transforming in many ways and reaching into more industries, it is useful to have the adaptability FPGAs offer.
An FPGA (Field Programmable Gate Array) is a customisable hardware device. It can be thought of as a sea of floating logic gates. A designer comes along and writes down a program using a hardware description language (HDL), such as Verilog or VHDL. That program dictates what connections are made and how they are implemented using digital components. Another word for HDL is RTL (register-transfer level) language.
FPGAs are easy to spot, look for an oversized Arduino.
Just kidding, they come in all shapes and sizes.
Using software analogous to a compiler, HDL is synthesized (figure out what gates to use), then routed (connect parts together) to form an optimized digital circuit. These tools (HDL, synthesis, routing, timing analysis, testing) are all encompassed in a software suite, some include Xilinx Design Tools and Quartus Prime.
Currently, models get trained on a GPU but are then deployed on an FPGA for real-time processing.
For FPGAs, the tricky part is implementing ML frameworks which are written in higher level languages such as Python. HDL isn’t inherently a programming platform; it is code written to define hardware components such as registers and counters. HDL languages include Verilog and VHDL.
Shown below is a snippet of some code used to create a serial bit detector.
If you’re unfamiliar with it, try guessing what it does.
Done? Even if you stare at it for a while, it isn’t obvious.
Most of the time FSMs (Finite State Machines) are used to split the task up into states with input-dependent transitions. All of this is done before programming, to figure out how the circuit will work per each clock cycle. Then this diagram, as shown below, gets converted into blocks of HDL code.
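As a software analogy (my own toy example, not the article’s detector), here is an FSM that scans a serial bit stream for the pattern 1-0-1, consuming one bit per “clock cycle”:

```python
# Toy finite state machine: detect the serial pattern "101".
def detect_101(bits):
    state = 0                 # 0: nothing, 1: seen "1", 2: seen "10"
    hits = []
    for i, b in enumerate(bits):
        if state == 0:
            state = 1 if b == 1 else 0
        elif state == 1:
            state = 2 if b == 0 else 1
        else:                 # state == 2, seen "10" so far
            if b == 1:
                hits.append(i)  # full "101" just completed
                state = 1       # the trailing 1 may start a new match
            else:
                state = 0
    return hits

print(detect_101([1, 0, 1, 0, 1, 1, 0, 1]))  # [2, 4, 7]
```

In HDL, each branch of this state logic becomes combinational logic feeding a state register that updates once per clock edge.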
Back to the topic: the main point is that there’s no direct translation from a loop in Python to a bunch of wires in Verilog.
Given the possible complexity of a design, it can be very difficult to debug it for further optimisation. There are no abstractions to simplify the process as there would be in CUDA, where a thread can be selected and modified.
Well no, FPGAs aren’t useless.
One way to work around the programming problem is to use HLS (high level synthesis) tools such as LegUp to generate programs in Verilog for deployment. HLS tools allow designers to avoid writing HDL from scratch and instead use a more intuitive, algorithmic programming language (C).
HLS tools abstract away hardware-level design; similar to how CUDA automatically sets up concurrent blocks and threads when the model is run.
HLS tools require C code as an input which gets mapped to an LLVM IR (intermediate representation) for execution. The tools are used to convert procedural descriptions to a hardware implementation.
Their role in FPGA design is shown below.
LLVM is not an acronym; it is a compiler library that produces assembly-like instructions (IRs). These programs are easier for HLS tools to process and can be used to create synthesizable code for an FPGA.
IRs are used to describe source code in a general format, allowing use by various programs.
To learn more about LLVM and IRs, refer to Dr. Chisnall’s slides.
The main issue is converting programs and porting libraries written in Python to C so that HLS tools can function. Currently there is no support for Tensorflow in C, so this solution is very difficult. Evidently the requirement for laying out and creating hardware is a large barrier to the use of FPGAs in deep learning.
The LeFlow Toolkit allows engineers to design, train and test their models using Python, then deploy it directly to an FPGA for use. LeFlow simplifies the design process by allowing HLS tools to be compatible with Python and Tensorflow, acting as an adapter.
The software was designed by researchers within the ECE (Electrical and Computer Engineering) department at the University of British Columbia: Daniel H. Noronha, Bahar Salehpour, and Steven J.E. Wilson.
The following sections detail how LeFlow is integrated with Tensorflow and an FPGA, if you’re only interested in implementation skip to: Tuning Time.
An XLA (Accelerated Linear Algebra) compiler made for Tensorflow outputs an LLVM IR. LeFlow restructures the IR and optimizes it to be used with HLS tools.
From thereon, HLS tools do all the leg work to convert that IR to a program deployed onto FPGAs as described in the section: Should We Stick to GPUs?
LeFlow takes the IR as an input. Algorithm 1 is an IR of Tensorflow loading two floats.
The program is difficult to follow and looks messy. LeFlow will clean it up and change it.
Its goal is to create global variables, then map them as inputs and outputs of a hardware interface. The diagram shown below contains a general overview of the synthesized circuit after LeFlow re-formats and passes the IR through LegUp.
Evidently a hardware interface requires additional modules and signals such as clock, reset, memory, and memory controller. LegUp handles the creation of these parts including timing specifications for the clock.
LeFlow sets up a volatile load for FPGA registers. This allows variable access, and automatic circuit modification when high-level code is changed. Changes are evident in lines 1, 6, and 7 mainly.
LeFlow finishes its job as shown by the revamped IR of Algorithm 2, and the rest is handled by an HLS tool!
An interesting feature of LeFlow is the ability to change hardware. LeFlow offers unrolling, and memory partitioning parameters which turbocharge computations when used correctly. This combined with the low latency inherent to FPGAs allows them to work with exceptional efficiency.
The great thing is that these parameters can be specified in Python and the instructions get passed on directly down to the circuit level.
Unrolling is used for looping and is a careful balancing act. The idea is to do multiple calculations (or replications) per iteration and take larger steps.
This looks the same as a regular for loop, but there are more components added at the hardware level to perform more calculations per clock cycle (or loop iteration). More on unrolling is available at Keil’s User Guide.
int sum = 0;
for (int i = 0; i < 10; i += 2) {
    sum += a[i];
    sum += a[i + 1];
}
Looking above, we can unroll by a factor of two, meaning two iterations’ worth of work is done at once and the loop counter advances by two steps.
This takes advantage of an FPGA’s parallelism to work more like a GPU; in this example, cycles were reduced by 13%.
Keep in mind that the additional hardware can cause inefficiency or run into size constraints.
The HLS used in LeFlow’s pipeline requires a dual-port RAM (Random Access Memory) for storing values.
This is problematic, as RAM is designed to store lots of data at the cost of being very slow. It can take ten times as many clock cycles to fetch values from it.
Luckily FPGAs contain many other independent memory units, so LeFlow can partition its data into multiple unique storage points. In a sense it’s analogous to adding more cores to a processor: it reduces clock cycles by allowing more instructions to execute concurrently.
Imagine the task is to multiply elements of two arrays of size eight together. In parallel computing, tasks get split up into groups to be executed simultaneously. A cyclic decomposition means a certain set of steps get repeated at the same time.
FPGAs can run more efficiently using memory partitions. The following schedule has been run over eight clock cycles.
In (a) there are no memory partitions, so one element from each array gets loaded, multiplied, and stored per cycle. The process continues and takes eight cycles until completion.
In (b), arrays are cyclically partitioned into two separate memories. Two elements from each array are loaded, multiplied, and stored per cycle. Larger chunks indicate that processes occur simultaneously albeit in different sections of hardware. This reduces it down to six cycles.
In (c), arrays are cyclically partitioned into four separate memories and the schedule is reduced to five cycles. Four elements from each array are loaded, multiplied, and stored per cycle.
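The cycle counts above come from LegUp's pipelined schedule; under a simplified model that ignores pipeline fill, cyclic partitioning can be sketched as follows. The class name and the model are illustrative only, not part of LeFlow's API:

```java
// Sketch of cyclic memory partitioning: element i of an array lives in
// bank (i % partitions), so `partitions` elements can be fetched per cycle.
// Real HLS schedules add pipeline fill cycles on top of the ideal count,
// which is why the text reports 8, 6, and 5 cycles rather than 8, 4, and 2.
public class CyclicPartitionSketch {

    // Which memory bank holds element i when split into p partitions
    public static int bankOf(int i, int p) {
        return i % p;
    }

    // Ideal cycles to process n elements when p are handled per cycle
    public static int idealCycles(int n, int p) {
        return (n + p - 1) / p; // ceil(n / p)
    }

    public static void main(String[] args) {
        int n = 8; // two arrays of size eight, multiplied elementwise
        for (int p : new int[] { 1, 2, 4 }) {
            System.out.println("partitions=" + p
                    + " ideal cycles=" + idealCycles(n, p));
        }
        // For p = 2, elements 0..7 alternate between banks 0 and 1
        StringBuilder banks = new StringBuilder();
        for (int i = 0; i < n; i++) {
            banks.append(bankOf(i, 2));
        }
        System.out.println(banks); // 01010101
    }
}
```

The gap between the ideal counts (8, 4, 2) and the scheduled counts (8, 6, 5) is the price of the load, multiply, and store stages occupying separate pipeline steps.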
Once LeFlow is set up correctly, all it needs to run without any additional configurations (such as unrolling) is a device selection line:
with tf.device("device:XLA_CPU:0")
It indicates the XLA-compiler will be used to generate LLVM and start the LeFlow conversion process.
Now you’re an expert on understanding how LeFlow works and maybe FPGAs are right for you.
There are loads of examples of LeFlow and its specific installation on Github. Daniel Holanda (one of the co-authors) has source code up for MNIST digit recognition among other things, so grab an FPGA and give it a go!
How to Convert a String value to Short value in Java with Examples
29 Jan, 2020
Given a String “str” in Java, the task is to convert this string to short type.
Examples:
Input: str = "1"
Output: 1
Input: str = "3"
Output: 3
Approach 1 (Naive Method): One method is to traverse the string and build the value digit by digit into the short type. This method is not an efficient approach.
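A minimal sketch of that digit-by-digit approach could look like the following; the class name and the error handling are illustrative, not from the article:

```java
// Naive conversion: walk the digits and accumulate the value manually.
// Short.parseShort() does essentially this internally, with more thorough
// error handling, so it remains the better choice in practice.
public class NaiveShortParser {

    public static short convertStringToShort(String str) {
        boolean negative = str.startsWith("-");
        int result = 0;

        for (int i = negative ? 1 : 0; i < str.length(); i++) {
            char c = str.charAt(i);
            if (c < '0' || c > '9') {
                throw new NumberFormatException("Not a digit: " + c);
            }
            // Shift the accumulated value one decimal place, add the digit
            result = result * 10 + (c - '0');
        }
        if (negative) {
            result = -result;
        }
        if (result < Short.MIN_VALUE || result > Short.MAX_VALUE) {
            throw new NumberFormatException("Out of short range: " + str);
        }
        return (short) result;
    }

    public static void main(String[] args) {
        System.out.println(convertStringToShort("123"));  // 123
        System.out.println(convertStringToShort("-42"));  // -42
    }
}
```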
Approach 2 (Using the Short.parseShort() method): The simplest way to do so is to use the parseShort() method of the Short class in the java.lang package. This method takes the string to be parsed and returns the short value from it. If the string is not convertible, this method throws a NumberFormatException.
Syntax:
Short.parseShort(str);
Below is the implementation of the above approach:
Example 1: To show successful conversion
// Java Program to convert string to short

class GFG {

    // Function to convert String to Short
    public static short convertStringToShort(String str)
    {
        // Convert string to short
        // using parseShort() method
        return Short.parseShort(str);
    }

    // Driver code
    public static void main(String[] args)
    {
        // The string value
        String stringValue = "1";

        // The expected short value
        short shortValue;

        // Convert string to short
        shortValue = convertStringToShort(stringValue);

        // Print the expected short value
        System.out.println(
            stringValue
            + " after converting into short = "
            + shortValue);
    }
}
1 after converting into short = 1
Approach 3 (Using the Short.valueOf() method): The valueOf() method of the Short class parses the string and returns a Short object holding its value. It also throws a NumberFormatException if the string is not convertible.
Syntax:
Short.valueOf(str);
Below is the implementation of the above approach:
Example 2: To show successful conversion
// Java Program to convert string to short

class GFG {

    // Function to convert String to Short
    public static short convertStringToShort(String str)
    {
        // Convert string to short
        // using valueOf() method
        return Short.valueOf(str);
    }

    // Driver code
    public static void main(String[] args)
    {
        // The string value
        String stringValue = "1";

        // The expected short value
        short shortValue;

        // Convert string to short
        shortValue = convertStringToShort(stringValue);

        // Print the expected short value
        System.out.println(
            stringValue
            + " after converting into short = "
            + shortValue);
    }
}
1 after converting into short = 1
JavaScript - Dialog Boxes
JavaScript supports three important types of dialog boxes. These dialog boxes can be used to raise an alert, to get confirmation on any input, or to collect input from the users. Here we will discuss each dialog box one by one.
An alert dialog box is mostly used to give a warning message to the users. For example, if an input field requires some text but the user does not provide any input, then as a part of validation you can use an alert box to give a warning message.
Nonetheless, an alert box can still be used for friendlier messages. An alert box gives only one button, "OK", to select and proceed.
<html>
<head>
<script type = "text/javascript">
<!--
function Warn() {
alert ("This is a warning message!");
document.write ("This is a warning message!");
}
//-->
</script>
</head>
<body>
<p>Click the following button to see the result: </p>
<form>
<input type = "button" value = "Click Me" onclick = "Warn();" />
</form>
</body>
</html>
A confirmation dialog box is mostly used to take the user's consent on any option. It displays a dialog box with two buttons: OK and Cancel.
If the user clicks on the OK button, the window method confirm() will return true. If the user clicks on the Cancel button, then confirm() returns false. You can use a confirmation dialog box as follows.
<html>
<head>
<script type = "text/javascript">
<!--
function getConfirmation() {
var retVal = confirm("Do you want to continue ?");
if( retVal == true ) {
document.write ("User wants to continue!");
return true;
} else {
document.write ("User does not want to continue!");
return false;
}
}
//-->
</script>
</head>
<body>
<p>Click the following button to see the result: </p>
<form>
<input type = "button" value = "Click Me" onclick = "getConfirmation();" />
</form>
</body>
</html>
The prompt dialog box is very useful when you want to pop up a text box to get user input. Thus, it enables you to interact with the user. The user needs to fill in the field and then click OK.
This dialog box is displayed using a method called prompt() which takes two parameters: (i) a label which you want to display in the text box and (ii) a default string to display in the text box.
This dialog box has two buttons: OK and Cancel. If the user clicks the OK button, the window method prompt() will return the entered value from the text box. If the user clicks the Cancel button, the window method prompt() returns null.
The following example shows how to use a prompt dialog box −
<html>
<head>
<script type = "text/javascript">
<!--
function getValue() {
var retVal = prompt("Enter your name : ", "your name here");
document.write("You have entered : " + retVal);
}
//-->
</script>
</head>
<body>
<p>Click the following button to see the result: </p>
<form>
<input type = "button" value = "Click Me" onclick = "getValue();" />
</form>
</body>
</html>
}
] |
Rust - Data Types
|
The Type System represents the different types of values supported by the language. The Type System checks validity of the supplied values, before they are stored or manipulated by the program. This ensures that the code behaves as expected. The Type System further allows for richer code hinting and automated documentation too.
Rust is a statically typed language. Every value in Rust is of a certain data type. The compiler can automatically infer data type of the variable based on the value assigned to it.
Use the let keyword to declare a variable.
fn main() {
let company_string = "TutorialsPoint"; // string type
let rating_float = 4.5; // float type
let is_growing_boolean = true; // boolean type
let icon_char = '♥'; //unicode character type
println!("company name is:{}",company_string);
println!("company rating on 5 is:{}",rating_float);
println!("company is growing :{}",is_growing_boolean);
println!("company icon is:{}",icon_char);
}
In the above example, data type of the variables will be inferred from the values assigned to them. For example, Rust will assign string data type to the variable company_string, float data type to rating_float, etc.
The println! macro takes two arguments −
A special syntax { }, which is the placeholder
The variable name or a constant
The placeholder will be replaced by the variable’s value
The output of the above code snippet will be −
company name is: TutorialsPoint
company rating on 5 is:4.5
company is growing: true
company icon is: ♥
A scalar type represents a single value, for example 10, 3.14, or 'c'. Rust has four primary scalar types.
Integer
Floating-point
Booleans
Characters
We will learn about each type in our subsequent sections.
An integer is a number without a fractional component. Simply put, the integer data type is used to represent whole numbers.
Integers can be further classified as Signed and Unsigned. Signed integers can store both negative and positive values. Unsigned integers can only store positive values. A detailed description of the integer types is given below −
The size of an integer can also be arch (architecture-dependent). This means the width of the type is derived from the architecture of the machine: an arch-sized integer is 32 bits on an x86 machine and 64 bits on an x64 machine. Arch-sized integers (isize and usize) are primarily used when indexing some sort of collection.
fn main() {
let result = 10; // i32 by default
let age:u32 = 20;
let sum:i32 = 5-15;
let mark:isize = 10;
let count:usize = 30;
println!("result value is {}",result);
println!("sum is {} and age is {}",sum,age);
println!("mark is {} and count is {}",mark,count);
}
The output will be as given below −
result value is 10
sum is -10 and age is 20
mark is 10 and count is 30
The above code will return a compilation error if you replace the value of age with a floating-point value.
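Since arch-sized integers come up most often when indexing collections (Vec and slice indexing in Rust requires a usize), here is a minimal sketch, not from the original tutorial:

```rust
fn main() {
    let values = vec![10, 20, 30];
    // Indexing requires usize, whose width matches the target architecture
    let i: usize = 2;
    let last = values[i];
    println!("last value is {}", last);
    assert_eq!(last, 30);
}
```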
Each signed variant can store numbers from -(2^(n-1)) to 2^(n-1) - 1, where n is the number of bits that variant uses. For example, i8 can store numbers from -(2^7) to 2^7 - 1 (here we replaced n with 8).
Each unsigned variant can store numbers from 0 to (2^n)-1. For example, u8 can store numbers from 0 to (2^8)-1, which is equal to 0 to 255.
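These bounds are exposed as associated constants on each integer type, so the formulas above can be checked directly. A small sketch (assuming Rust 1.43+, where the MIN/MAX associated constants exist):

```rust
fn main() {
    // i8: -(2^7) to 2^7 - 1
    assert_eq!(i8::MIN, -128);
    assert_eq!(i8::MAX, 127);
    // u8: 0 to 2^8 - 1
    assert_eq!(u8::MIN, 0);
    assert_eq!(u8::MAX, 255);
    println!("i8 range: {} to {}", i8::MIN, i8::MAX);
}
```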
An integer overflow occurs when the value assigned to an integer variable exceeds the Rust defined range for the data type. Let us understand this with an example −
fn main() {
let age:u8 = 255;
// 0 to 255 only allowed for u8
let weight:u8 = 256; //overflow value is 0
let height:u8 = 257; //overflow value is 1
let score:u8 = 258; //overflow value is 2
println!("age is {} ",age);
println!("weight is {}",weight);
println!("height is {}",height);
println!("score is {}",score);
}
The valid range of an unsigned u8 variable is 0 to 255. In the above example, the variables are assigned values greater than 255 (the upper limit for u8). On execution, the above code returns the warning literal out of range for u8 for the weight, height and score variables. The overflow values after 255 start again from 0, 1, 2, etc. The final output, excluding the warning, is as shown below −
age is 255
weight is 0
height is 1
score is 2
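Note that in current Rust the out-of-range literals above are rejected at compile time (the overflowing_literals lint is deny-by-default), so wrap-around behaviour is requested explicitly with the wrapping_* methods. A minimal sketch, not from the original tutorial:

```rust
fn main() {
    let age: u8 = 255;
    // 255 + 1 wraps around to 0 for u8
    let weight = age.wrapping_add(1);
    println!("weight is {}", weight);
    assert_eq!(weight, 0);
    // 255 + 3 wraps around to 2
    assert_eq!(age.wrapping_add(3), 2);
}
```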
Float data type in Rust can be classified as f32 and f64. The f32 type is a single-precision float, and f64 has double precision. The default type is f64. Consider the following example to understand more about the float data type.
fn main() {
let result = 10.00; //f64 by default
let interest:f32 = 8.35;
let cost:f64 = 15000.600; //double precision
println!("result value is {}",result);
println!("interest is {}",interest);
println!("cost is {}",cost);
}
The output will be as shown below −
result value is 10
interest is 8.35
cost is 15000.6
Automatic type casting is not allowed in Rust. Consider the following code snippet. An integer value is assigned to the float variable interest.
fn main() {
let interest:f32 = 8; // integer assigned to float variable
println!("interest is {}",interest);
}
The compiler throws a mismatched types error as given below.
error[E0308]: mismatched types
--> main.rs:2:22
|
2 | let interest:f32=8;
| ^ expected f32, found integral variable
|
= note: expected type `f32`
found type `{integer}`
error: aborting due to previous error(s)
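The conversion has to be requested explicitly, for example with the `as` operator. A minimal sketch, not from the original tutorial:

```rust
fn main() {
    let n = 8; // i32 by default
    // Explicit cast; Rust performs no implicit numeric coercion
    let interest: f32 = n as f32;
    println!("interest is {}", interest);
    assert_eq!(interest, 8.0);
}
```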
For easy readability of large numbers, we can use the visual separator _ (underscore) to separate digits; that is, 50,000 can be written as 50_000. This is shown in the example below.
fn main() {
let float_with_separator = 11_000.555_001;
println!("float value {}",float_with_separator);
let int_with_separator = 50_000;
println!("int value {}",int_with_separator);
}
The output is given below −
float value 11000.555001
int value 50000
Boolean types have two possible values – true or false. Use the bool keyword to declare a boolean variable.
fn main() {
let isfun:bool = true;
println!("Is Rust Programming Fun ? {}",isfun);
}
The output of the above code will be −
Is Rust Programming Fun ? true
The character data type in Rust supports numbers, alphabets, Unicode and special characters. Use the char keyword to declare a variable of character data type. Rust’s char type represents a Unicode Scalar Value, which means it can represent a lot more than just ASCII. Unicode Scalar Values range from U+0000 to U+D7FF and U+E000 to U+10FFFF inclusive.
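The scalar-value restriction can be observed with std::char::from_u32, which returns None for surrogate code points. A small sketch, not from the original tutorial:

```rust
fn main() {
    // A valid scalar value converts to a char
    assert_eq!(std::char::from_u32(0x41), Some('A'));
    // 0xD800..=0xDFFF are surrogates, not Unicode scalar values
    assert_eq!(std::char::from_u32(0xD800), None);
    // U+10FFFF is the highest Unicode scalar value
    assert_eq!(std::char::from_u32(0x10FFFF), Some('\u{10FFFF}'));
    println!("highest scalar value: {:?}", std::char::from_u32(0x10FFFF));
}
```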
Let us consider an example to understand more about the Character data type.
fn main() {
let special_character = '@'; //default
let alphabet:char = 'A';
let emoji:char = '😁';
println!("special character is {}",special_character);
println!("alphabet is {}",alphabet);
println!("emoji is {}",emoji);
}
The output of the above code will be −
special character is @
alphabet is A
emoji is 😁
|
[
{
"code": null,
"e": 2417,
"s": 2087,
"text": "The Type System represents the different types of values supported by the language. The Type System checks validity of the supplied values, before they are stored or manipulated by the program. This ensures that the code behaves as expected. The Type System further allows for richer code hinting and automated documentation too."
},
{
"code": null,
"e": 2599,
"s": 2417,
"text": "Rust is a statically typed language. Every value in Rust is of a certain data type. The compiler can automatically infer data type of the variable based on the value assigned to it."
},
{
"code": null,
"e": 2642,
"s": 2599,
"text": "Use the let keyword to declare a variable."
},
{
"code": null,
"e": 3107,
"s": 2642,
"text": "fn main() {\n let company_string = \"TutorialsPoint\"; // string type\n let rating_float = 4.5; // float type\n let is_growing_boolean = true; // boolean type\n let icon_char = '♥'; //unicode character type\n\n println!(\"company name is:{}\",company_string);\n println!(\"company rating on 5 is:{}\",rating_float);\n println!(\"company is growing :{}\",is_growing_boolean);\n println!(\"company icon is:{}\",icon_char);\n}"
},
{
"code": null,
"e": 3324,
"s": 3107,
"text": "In the above example, data type of the variables will be inferred from the values assigned to them. For example, Rust will assign string data type to the variable company_string, float data type to rating_float, etc."
},
{
"code": null,
"e": 3365,
"s": 3324,
"text": "The println! macro takes two arguments −"
},
{
"code": null,
"e": 3412,
"s": 3365,
"text": "A special syntax { }, which is the placeholder"
},
{
"code": null,
"e": 3444,
"s": 3412,
"text": "The variable name or a constant"
},
{
"code": null,
"e": 3501,
"s": 3444,
"text": "The placeholder will be replaced by the variable’s value"
},
{
"code": null,
"e": 3548,
"s": 3501,
"text": "The output of the above code snippet will be −"
},
{
"code": null,
"e": 3652,
"s": 3548,
"text": "company name is: TutorialsPoint\ncompany rating on 5 is:4.5\ncompany is growing: true\ncompany icon is: ♥\n"
},
{
"code": null,
"e": 3755,
"s": 3652,
"text": "A scalar type represents a single value. For example, 10,3.14,'c'. Rust has four primary scalar types."
},
{
"code": null,
"e": 3763,
"s": 3755,
"text": "Integer"
},
{
"code": null,
"e": 3778,
"s": 3763,
"text": "Floating-point"
},
{
"code": null,
"e": 3787,
"s": 3778,
"text": "Booleans"
},
{
"code": null,
"e": 3798,
"s": 3787,
"text": "Characters"
},
{
"code": null,
"e": 3856,
"s": 3798,
"text": "We will learn about each type in our subsequent sections."
},
{
"code": null,
"e": 3981,
"s": 3856,
"text": "An integer is a number without a fractional component. Simply put, the integer data type is used to represent whole numbers."
},
{
"code": null,
"e": 4208,
"s": 3981,
"text": "Integers can be further classified as Signed and Unsigned. Signed integers can store both negative and positive values. Unsigned integers can only store positive values. A detailed description if integer types is given below −"
},
{
"code": null,
"e": 4510,
"s": 4208,
"text": "The size of an integer can be arch. This means the size of the data type will be derived from the architecture of the machine. An integer the size of which is arch will be 32 bits on an x86 machine and 64 bits on an x64 machine. An arch integer is primarily used when indexing some sort of collection."
},
{
"code": null,
"e": 4802,
"s": 4510,
"text": "fn main() {\n let result = 10; // i32 by default\n let age:u32 = 20;\n let sum:i32 = 5-15;\n let mark:isize = 10;\n let count:usize = 30;\n println!(\"result value is {}\",result);\n println!(\"sum is {} and age is {}\",sum,age);\n println!(\"mark is {} and count is {}\",mark,count);\n}"
},
{
"code": null,
"e": 4838,
"s": 4802,
"text": "The output will be as given below −"
},
{
"code": null,
"e": 4910,
"s": 4838,
"text": "result value is 10\nsum is -10 and age is 20\nmark is 10 and count is 30\n"
},
{
"code": null,
"e": 5018,
"s": 4910,
"text": "The above code will return a compilation error if you replace the value of age with a floating-point value."
},
{
"code": null,
"e": 5220,
"s": 5018,
"text": "Each signed variant can store numbers from -(2^(n-1) to 2^(n-1) -1, where n is the number of bits that variant uses. For example, i8 can store numbers from -(2^7) to 2^7 -1 − here we replaced n with 8."
},
{
"code": null,
"e": 5360,
"s": 5220,
"text": "Each unsigned variant can store numbers from 0 to (2^n)-1. For example, u8 can store numbers from 0 to (2^8)-1, which is equal to 0 to 255."
},
{
"code": null,
"e": 5525,
"s": 5360,
"text": "An integer overflow occurs when the value assigned to an integer variable exceeds the Rust defined range for the data type. Let us understand this with an example −"
},
{
"code": null,
"e": 5878,
"s": 5525,
"text": "fn main() {\n let age:u8 = 255;\n\n // 0 to 255 only allowed for u8\n let weight:u8 = 256; //overflow value is 0\n let height:u8 = 257; //overflow value is 1\n let score:u8 = 258; //overflow value is 2\n\n println!(\"age is {} \",age);\n println!(\"weight is {}\",weight);\n println!(\"height is {}\",height);\n println!(\"score is {}\",score);\n}"
},
{
"code": null,
"e": 6295,
"s": 5878,
"text": "The valid range of unsigned u8 variable is 0 to 255. In the above example, the variables are assigned values greater than 255 (upper limit for an integer variable in Rust). On execution, the above code will return a warning − warning − literal out of range for u8 for weight, height and score variables. The overflow values after 255 will start from 0, 1, 2, etc. The final output without warning is as shown below −"
},
{
"code": null,
"e": 6342,
"s": 6295,
"text": "age is 255\nweight is 0\nheight is 1\nscore is 2\n"
},
{
"code": null,
"e": 6574,
"s": 6342,
"text": "Float data type in Rust can be classified as f32 and f64. The f32 type is a single-precision float, and f64 has double precision. The default type is f64. Consider the following example to understand more about the float data type."
},
{
"code": null,
"e": 6830,
"s": 6574,
"text": "fn main() {\n let result = 10.00; //f64 by default\n let interest:f32 = 8.35;\n let cost:f64 = 15000.600; //double precision\n \n println!(\"result value is {}\",result);\n println!(\"interest is {}\",interest);\n println!(\"cost is {}\",cost);\n}"
},
{
"code": null,
"e": 6866,
"s": 6830,
"text": "The output will be as shown below −"
},
{
"code": null,
"e": 6900,
"s": 6866,
"text": "interest is 8.35\ncost is 15000.6\n"
},
{
"code": null,
"e": 7045,
"s": 6900,
"text": "Automatic type casting is not allowed in Rust. Consider the following code snippet. An integer value is assigned to the float variable interest."
},
{
"code": null,
"e": 7164,
"s": 7045,
"text": "fn main() {\n let interest:f32 = 8; // integer assigned to float variable\n println!(\"interest is {}\",interest);\n}"
},
{
"code": null,
"e": 7225,
"s": 7164,
"text": "The compiler throws a mismatched types error as given below."
},
{
"code": null,
"e": 7461,
"s": 7225,
"text": "error[E0308]: mismatched types\n --> main.rs:2:22\n |\n 2 | let interest:f32=8;\n | ^ expected f32, found integral variable\n |\n = note: expected type `f32`\n found type `{integer}`\nerror: aborting due to previous error(s)\n"
},
{
"code": null,
"e": 7641,
"s": 7461,
"text": "For easy readability of large numbers, we can use a visual separator _ underscore to separate digits. That is 50,000 can be written as 50_000 . This is shown in the below example."
},
{
"code": null,
"e": 7841,
"s": 7641,
"text": "fn main() {\n let float_with_separator = 11_000.555_001;\n println!(\"float value {}\",float_with_separator);\n \n let int_with_separator = 50_000;\n println!(\"int value {}\",int_with_separator);\n}"
},
{
"code": null,
"e": 7869,
"s": 7841,
"text": "The output is given below −"
},
{
"code": null,
"e": 7911,
"s": 7869,
"text": "float value 11000.555001\nint value 50000\n"
},
{
"code": null,
"e": 8019,
"s": 7911,
"text": "Boolean types have two possible values – true or false. Use the bool keyword to declare a boolean variable."
},
{
"code": null,
"e": 8110,
"s": 8019,
"text": "fn main() {\n let isfun:bool = true;\n println!(\"Is Rust Programming Fun ? {}\",isfun);\n}"
},
{
"code": null,
"e": 8149,
"s": 8110,
"text": "The output of the above code will be −"
},
{
"code": null,
"e": 8181,
"s": 8149,
"text": "Is Rust Programming Fun ? true\n"
},
{
"code": null,
"e": 8534,
"s": 8181,
"text": "The character data type in Rust supports numbers, alphabets, Unicode and special characters. Use the char keyword to declare a variable of character data type. Rust’s char type represents a Unicode Scalar Value, which means it can represent a lot more than just ASCII. Unicode Scalar Values range from U+0000 to U+D7FF and U+E000 to U+10FFFF inclusive."
},
{
"code": null,
"e": 8611,
"s": 8534,
"text": "Let us consider an example to understand more about the Character data type."
},
{
"code": null,
"e": 8856,
"s": 8611,
"text": "fn main() {\n let special_character = '@'; //default\n let alphabet:char = 'A';\n let emoji:char = '😁';\n \n println!(\"special character is {}\",special_character);\n println!(\"alphabet is {}\",alphabet);\n println!(\"emoji is {}\",emoji);\n}"
},
{
"code": null,
"e": 8895,
"s": 8856,
"text": "The output of the above code will be −"
},
{
"code": null,
"e": 8944,
"s": 8895,
"text": "special character is @\nalphabet is A\nemoji is 😁\n"
},
{
"code": null,
"e": 8979,
"s": 8944,
"text": "\n 45 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 9002,
"s": 8979,
"text": " Stone River ELearning"
},
{
"code": null,
"e": 9034,
"s": 9002,
"text": "\n 10 Lectures \n 33 mins\n"
},
{
"code": null,
"e": 9045,
"s": 9034,
"text": " Ken Burke"
},
{
"code": null,
"e": 9052,
"s": 9045,
"text": " Print"
},
{
"code": null,
"e": 9063,
"s": 9052,
"text": " Add Notes"
}
] |
cosh() function for complex number working with C++
|
Given the task is to show the working of cosh() function for complex numbers in C++.
The cosh() function is a part of the C++ standard template library. It is a little different from the standard cosh() function. Instead of calculating the hyperbolic cosines of angles that are in radians, it calculates the complex hyperbolic cosine values of complex numbers.
The mathematical formula for calculating complex hyperbolic cosine is −
cosh(z) = (e^z + e^(-z))/2
Here, “z” represents the complex number.
The complex number should be declared as follows −
complex<double> name(a,b)
Here, the <double> that is attached to the “complex” data type describes an object that stores an ordered pair of objects, both of type “double”. The two objects are the real part and the imaginary part of the complex number that we want to enter. The <complex> header file should be included to call the function for complex numbers.
The syntax is as follows −
cosh(complexnumber)
Input: complexnumber(5,5)
Output: <-27.0349,-3.85115>
Explanation − The following example shows how we use the cosh() function for calculating the complex hyperbolic cosine values of a complex number. Here 5 is the real part and the other 5 is the imaginary part of the complex number, as shown in the input, and we get the hyperbolic cosine values in the output when we pass the complex number into the cosh() function.
Approach used in the below program as follows −
First declare a complex number, let’s say complexnumber(a,b), and then assign it a complex value.
Two values should be assigned to the variable complexnumber(a,b). The first value will be the real part of the complex number and the second value will be the imaginary part.
For example, complexnumber(1, 3) represents the complex number 1+3i.
Now pass the complexnumber(1, 3) we created into the cosh() function.
#include<iostream>
#include<complex>
using namespace std;
int main() {
complex<double> cno(1,3);
cout<<cosh(cno);
return 0;
}
If we run the above code it will generate the following output −
<-1.52764,0.165844>
Here 1 is the real part and 3 is the imaginary part of the complex number, as we pass our complex number into the cosh() function, we get the hyperbolic cosine values in the output as shown.
|
[
{
"code": null,
"e": 1147,
"s": 1062,
"text": "Given the task is to show the working of cosh() function for complex numbers in C++."
},
{
"code": null,
"e": 1423,
"s": 1147,
"text": "The cosh() function is a part of the C++ standard template library. It is a little different from the standard cosh() function. Instead of calculating the hyperbolic cosines of angles that are in radians, it calculates the complex hyperbolic cosine values of complex numbers."
},
{
"code": null,
"e": 1495,
"s": 1423,
"text": "The mathematical formula for calculating complex hyperbolic cosine is −"
},
{
"code": null,
"e": 1524,
"s": 1495,
"text": "cosh(z) = (e^(z) + e^(-z))/z"
},
{
"code": null,
"e": 1594,
"s": 1524,
"text": "Where, “z” represents the complex number and “i” represents the iota."
},
{
"code": null,
"e": 1645,
"s": 1594,
"text": "The complex number should be declared as follows −"
},
{
"code": null,
"e": 1671,
"s": 1645,
"text": "complex<double> name(a,b)"
},
{
"code": null,
"e": 2000,
"s": 1671,
"text": "Here, <double> that is attached to the “complex” data type describes an object that stores ordered pair of objects, both of type “double’. Here the two objects are the real part and the imaginary part of the complex number that we want to enter. <complex> header file should be included to call the function for complex numbers."
},
{
"code": null,
"e": 2027,
"s": 2000,
"text": "The syntax is as follows −"
},
{
"code": null,
"e": 2047,
"s": 2027,
"text": "cosh(complexnumber)"
},
{
"code": null,
"e": 2101,
"s": 2047,
"text": "Input: complexnumber(5,5)\nOutput: <-27.0349,-3.85115>"
},
{
"code": null,
"e": 2467,
"s": 2101,
"text": "Explanation − The following example shows how we use the cosh() function for calculating the complex hyperbolic cosine values of a complex number. Here 5 is the real part and another 5 is the imaginary part of the complex number as shown in the input, and the we get the hyperbolic cosine values in the output as we pass the complex number into the cosh() function."
},
{
"code": null,
"e": 2515,
"s": 2467,
"text": "Approach used in the below program as follows −"
},
{
"code": null,
"e": 2611,
"s": 2515,
"text": "First declare a complex number, let’s say complexnumber(a,b)and then assign it a complex value."
},
{
"code": null,
"e": 2886,
"s": 2611,
"text": "Two values should be assigned to the variable complexnumber(a,b). The first value will be the real part of the complex number and the second value will be the imaginary part of the complex number.Let us say complexnumber(1, 3) so this will represent the complex number 1+3i."
},
{
"code": null,
"e": 3083,
"s": 2886,
"text": "Two values should be assigned to the variable complexnumber(a,b). The first value will be the real part of the complex number and the second value will be the imaginary part of the complex number."
},
{
"code": null,
"e": 3162,
"s": 3083,
"text": "Let us say complexnumber(1, 3) so this will represent the complex number 1+3i."
},
{
"code": null,
"e": 3230,
"s": 3162,
"text": "Now pass the complexnumber(1, 3)we created into the cosh() function"
},
{
"code": null,
"e": 3241,
"s": 3230,
"text": " Live Demo"
},
{
"code": null,
"e": 3376,
"s": 3241,
"text": "#include<iostream>\n#include<complex>\nusing namespace std;\nint main() {\n complex<double> cno(1,3);\n cout<<cosh(cno);\n return 0;\n}"
},
{
"code": null,
"e": 3441,
"s": 3376,
"text": "If we run the above code it will generate the following output −"
},
{
"code": null,
"e": 3461,
"s": 3441,
"text": "<-1.52764,0.165844>"
},
{
"code": null,
"e": 3652,
"s": 3461,
"text": "Here 1 is the real part and 3 is the imaginary part of the complex number, as we pass our complex number into the cosh() function, we get the hyperbolic cosine values in the output as shown."
}
] |
How to draw a line in Android using Kotlin?
|
This example demonstrates how to draw a line in Android using Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/relativeLayout"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ImageView
android:id="@+id/imageView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true" />
<Button
android:id="@+id/button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:layout_marginBottom="70dp"
android:text="Draw Line" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.os.Bundle
import android.widget.Button
import android.widget.ImageView
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
lateinit var button: Button
lateinit var imageView: ImageView
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
button = findViewById(R.id.button)
imageView = findViewById(R.id.imageView)
button.setOnClickListener {
            // Use width 700, height 10 so the horizontal line fits on the bitmap
            val bitmap = Bitmap.createBitmap(700, 10, Bitmap.Config.ARGB_8888)
            val canvas = Canvas(bitmap)
            // White background so the red line is visible
            canvas.drawColor(Color.WHITE)
            val paint = Paint()
            paint.color = Color.RED
paint.style = Paint.Style.STROKE
paint.strokeWidth = 8F
paint.isAntiAlias = true
val offset = 50
canvas.drawLine(
offset.toFloat(), (canvas.height / 2).toFloat(), (canvas.width - offset).toFloat(), (canvas.height /
2).toFloat(), paint)
imageView.setImageBitmap(bitmap)
}
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
|
[
{
"code": null,
"e": 1132,
"s": 1062,
"text": "This example demonstrates how to draw a line in Android using Kotlin."
},
{
"code": null,
"e": 1261,
"s": 1132,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1326,
"s": 1261,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2129,
"s": 1326,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:id=\"@+id/relativeLayout\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <ImageView\n android:id=\"@+id/imageView\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerInParent=\"true\" />\n <Button\n android:id=\"@+id/button\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_alignParentBottom=\"true\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginBottom=\"70dp\"\n android:text=\"Draw Line\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2184,
"s": 2129,
"text": "Step 3 − Add the following code to src/MainActivity.kt"
},
{
"code": null,
"e": 3418,
"s": 2184,
"text": "import android.graphics.Bitmap\nimport android.graphics.Canvas\nimport android.graphics.Color\nimport android.graphics.Paint\nimport android.os.Bundle\nimport android.widget.Button\nimport android.widget.ImageView\nimport androidx.appcompat.app.AppCompatActivity\nclass MainActivity : AppCompatActivity() {\n lateinit var button: Button\n lateinit var imageView: ImageView\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n button = findViewById(R.id.button)\n imageView = findViewById(R.id.imageView)\n button.setOnClickListener {\n val bitmap = Bitmap.createBitmap(10, 700, Bitmap.Config.ARGB_8888)\n val canvas = Canvas(bitmap)\n canvas.drawColor(Color.RED)\n val paint = Paint()\n paint.color = Color.RED\n paint.style = Paint.Style.STROKE\n paint.strokeWidth = 8F\n paint.isAntiAlias = true\n val offset = 50\n canvas.drawLine(\n offset.toFloat(), (canvas.height / 2).toFloat(), (canvas.width - offset).toFloat(), (canvas.height /\n 2).toFloat(), paint)\n imageView.setImageBitmap(bitmap)\n }\n }\n}"
},
{
"code": null,
"e": 3473,
"s": 3418,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 4144,
"s": 3473,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"com.example.q11\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 4492,
"s": 4144,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen"
}
] |
Diameter of Binary Tree in Python
|
Suppose we have a binary tree; we have to compute the length of the diameter of the tree. The diameter of a binary tree is the length of the longest path between any two nodes in the tree. This path does not necessarily pass through the root. So if the tree is like below, then the diameter will be 3, as the length of the path [4,2,1,3] or [5,2,1,3] is 3.
To solve this, we will follow these steps −
We will use the dfs to find the diameter, set answer := 0
call the dfs function with the root: dfs(root)
dfs will work like below: dfs(node)
if node is not present, then return 0
left := dfs(left subtree of root), and right := dfs(right subtree of root)
answer := max of answer and left + right
return max of left + 1 and right + 1
Let us see the following implementation to get a better understanding −
class TreeNode:
   def __init__(self, data, left = None, right = None):
      self.data = data
      self.left = left
      self.right = right
def insert(temp, data):
   que = []
   que.append(temp)
   while (len(que)):
      temp = que[0]
      que.pop(0)
      if (not temp.left):
         temp.left = TreeNode(data)
         break
      else:
         que.append(temp.left)
      if (not temp.right):
         temp.right = TreeNode(data)
         break
      else:
         que.append(temp.right)
def make_tree(elements):
   Tree = TreeNode(elements[0])
   for element in elements[1:]:
      insert(Tree, element)
   return Tree
class Solution(object):
   def diameterOfBinaryTree(self, root):
      """
      :type root: TreeNode
      :rtype: int
      """
      self.ans = 0
      self.dfs(root)
      return self.ans
   def dfs(self, node):
      if not node:
         return 0
      left = self.dfs(node.left)
      right = self.dfs(node.right)
      self.ans = max(self.ans, right + left)
      return max(left + 1, right + 1)
root = make_tree([1,2,3,4,5])
ob1 = Solution()
print(ob1.diameterOfBinaryTree(root))
[1,2,3,4,5]
3
|
[
{
"code": null,
"e": 1419,
"s": 1062,
"text": "Suppose we have a binary tree; we have to compute the length of the diameter of the tree. The diameter of a binary tree is actually the length of the longest path between any two nodes in a tree. This path not necessarily pass through the root. So if the tree is like below, then the diameter will be 3.as the length of the path [4,2,1,3] or [5,2,1,3] is 3"
},
{
"code": null,
"e": 1463,
"s": 1419,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1521,
"s": 1463,
"text": "We will use the dfs to find the diameter, set answer := 0"
},
{
"code": null,
"e": 1567,
"s": 1521,
"text": "call the dfs function with the root dfs(root)"
},
{
"code": null,
"e": 1602,
"s": 1567,
"text": "dfs will work like below dfs(node)"
},
{
"code": null,
"e": 1640,
"s": 1602,
"text": "if node is not present, then return 0"
},
{
"code": null,
"e": 1715,
"s": 1640,
"text": "left := dfs(left subtree of root), and right := dfs(right subtree of root)"
},
{
"code": null,
"e": 1756,
"s": 1715,
"text": "answer := max of answer and left + right"
},
{
"code": null,
"e": 1793,
"s": 1756,
"text": "return max of left + 1 and right + 1"
},
{
"code": null,
"e": 1863,
"s": 1793,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 1874,
"s": 1863,
"text": " Live Demo"
},
{
"code": null,
"e": 2985,
"s": 1874,
"text": "class TreeNode:\n def __init__(self, data, left = None, right = None):\n self.data = data\n self.left = left\n self.right = right\ndef insert(temp,data):\n que = []\n que.append(temp)\n while (len(que)):\n temp = que[0]\n que.pop(0)\n if (not temp.left):\n temp.left = TreeNode(data)\n break\n else:\n que.append(temp.left)\n if (not temp.right):\n temp.right = TreeNode(data)\n break\n else:\n que.append(temp.right)\ndef make_tree(elements):\n Tree = TreeNode(elements[0])\n for element in elements[1:]:\n insert(Tree, element)\n return Tree\nclass Solution(object):\n def diameterOfBinaryTree(self, root):\n \"\"\"\n :type root: TreeNode\n :rtype: int\n \"\"\"\n self.ans = 0\n self.dfs(root)\n return self.ans\n def dfs(self, node):\n if not node:\n return 0\n left = self.dfs(node.left)\n right = self.dfs(node.right)\n self.ans =max(self.ans,right+left)\n return max(left+1,right+1)\nroot = make_tree([1,2,3,4,5])\nob1 = Solution()\nprint(ob1.diameterOfBinaryTree(root))"
},
{
"code": null,
"e": 2997,
"s": 2985,
"text": "[1,2,3,4,5]"
},
{
"code": null,
"e": 2999,
"s": 2997,
"text": "3"
}
] |
Getting MIN and MAX dates in table in SAP Web Intelligence while using a Break
|
This can be achieved by creating an Indicator per the condition you are looking for: minimum drawn date time for POST-Test and maximum drawn date time for PRE-Test.
Once you create this indicator, it will show “Y” for the rows highlighted in yellow as per condition and “N” for other rows.
=If ([Drawn date] = Min([Drawn date]) In ([Patient ABO/RN]) Where ([PrePost] = "POST") )
Or ([Drawn date] = Max([Drawn date]) In ([Patient ABO/RN]) Where ([PrePost] = "PRE") )
Then "Y" Else "N"
You need to apply a filter for rows with indicator value- “Y”.
Another option is to create 3 variables as below −
Max Accession: =Max([Accession]) Where ([Variables].[Pre/Post] = "PRE") In ([Patient Birth Date])
Min Accession: =Min([Accession]) Where ([Variables].[Pre/Post] = "POST") In ([Patient Birth Date])
Accession Min/Max= If ([Accession]=[Min accession])Then 1 ElseIf ([Accession] = [Max accession]) Then 2 Else 0 (by using these, the minimum accession gets 1, the maximum gets 2, and the rest of the values in the report get 0)
Then apply a filter with a condition to select values which are greater than 0.
|
[
{
"code": null,
"e": 1226,
"s": 1062,
"text": "This can be achieved by creating an Indicator as per condition you are looking for- minimum drawn date time for POST-Test and Maximum drawn date time for PRE-Test."
},
{
"code": null,
"e": 1352,
"s": 1226,
"text": " Once you create this indicator, it will show “Y” for the rows highlighted in yellow as per condition and “N” for other rows."
},
{
"code": null,
"e": 1551,
"s": 1352,
"text": "=If ([Drawn date] = Min([Drawn date]) In ([Patient ABO/RN]) Where ([PrePost] = \"POST\") )\n Or ([Drawn date] = Max([Drawn date]) In ([Patient ABO/RN]) Where ([PrePost] = \"PRE\") )\n Then \"Y\" Else \"N\""
},
{
"code": null,
"e": 1614,
"s": 1551,
"text": "You need to apply a filter for rows with indicator value- “Y”."
},
{
"code": null,
"e": 1668,
"s": 1614,
"text": "Other option is you can create 3 variables as below −"
},
{
"code": null,
"e": 2086,
"s": 1668,
"text": "Max Accession: =Max([Accession]) Where ([Variables].[Pre/Post] = \"PRE\") In ([Patient Birth Date])\nMin Accession: =Min([Accession]) Where ([Variables].[Pre/Post] = \"POST\") In ([Patient Birth Date])\nAccession Min/Max= If ([Accession]=[Min accession])Then 1 ElseIf ([Accession] = [Max accession]) Then 2 Else 0 (By using these, you will get 1 to the min accession, 2 to maximum and 0 to the rest of values in the report)"
},
{
"code": null,
"e": 2166,
"s": 2086,
"text": "Then apply a filter with a condition to select values which are greater than 0."
}
] |
Interoperable Python and SQL in Jupyter Notebooks | by Kevin Kho | Towards Data Science
|
Note: Most of the code snippets are images because that was the only way to preserve SQL syntax highlighting. For an interactive code example, check out this Kaggle notebook.
The goal of FugueSQL is to provide an enhanced SQL interface (and experience) for data professionals to perform end-to-end data compute workflows in a SQL-like language. With FugueSQL, SQL users can perform full Extract, Transform, Load (ETL) workflows on DataFrames inside Python code and Jupyter notebooks. The SQL is parsed and mapped to the corresponding Pandas, Spark, or Dask code.
This empowers heavy SQL users to harness the power of Spark and Dask, while using their language of choice to express logic. Additionally, distributed compute keywords have been added, such as PREPARTITION and PERSIST, in order to extend the capabilities beyond standard SQL.
In this article we’ll go over the basic FugueSQL features, and how to use it on top of Spark or Dask by specifying the execution engine.
The first changes, as seen in the GIF above, are the LOAD and SAVE keywords. Beyond these, there are some other enhancements that provide a friendlier syntax. Users can also use Python functions inside FugueSQL, creating a powerful combination.
FugueSQL users can have SQL cells in notebooks (more examples later) by using the %%fsql cell magic. This also provides syntax highlighting in Jupyter notebooks. Although not demonstrated here, these SQL cells can be used in Python code with the fsql() function.
Variable Assignment
DataFrames can be assigned to variables. This is similar to SQL temp tables or Common Table Expressions (CTE). Although not shown in this tutorial, these DataFrames can also be brought out of the SQL cells and used in Python cells. The example below shows two new DataFrames that came from modifying df. df was created by using Pandas in a Python cell (this is the same df as in the first image). The two new DataFrames are joined together to create a DataFrame named final.
Jinja Templating
FugueSQL can interact with Python variables through Jinja templating. This allows Python logic to alter SQL queries similar to parameters in SQL.
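Since the original snippet here was an image, the following is a minimal plain-Python analogue of the idea: a Python variable alters the text of a SQL query. The standard library's string.Template stands in for Jinja's {{ ... }} syntax, and the table and column names are assumptions, not from the article.

```python
from string import Template

# Hypothetical analogue: a Python variable parameterizes a SQL string,
# the way Jinja templating lets Python logic alter FugueSQL queries.
threshold = 10
query = Template("SELECT * FROM df WHERE value > $threshold")
rendered = query.substitute(threshold=threshold)
print(rendered)  # SELECT * FROM df WHERE value > 10
```

In actual FugueSQL cells, the templated value is substituted into the query before it is parsed and run.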
Python Functions
FugueSQL also supports using Python functions inside SQL code blocks. In the example below, we use seaborn to plot two columns of our DataFrame. We invoke the function using the OUTPUT keyword in SQL.
FugueSQL is meant to operate on data that is already loaded into memory (although there are ways to use FugueSQL to bring in data from storage). There is a project called ipython-sql that provides the %%sql cell magic command. This command is meant to use SQL to load data into the Python environment from a database.
FugueSQL’s guarantee is that the same SQL code will work on Pandas, Spark, and Dask without any code change. The focus of FugueSQL is in-memory computation, as opposed to loading data from a database.
As the volume of data we work with continues to increase, distributed compute engines such as Spark and Dask are becoming more widely adopted by data teams. FugueSQL allows users to use these more performant engines with the same FugueSQL code.
In the code snippet below, we just changed the cell magic from %%fsql to %%fsql spark and now the SQL code will run on the Spark execution engine. Similarly, %%fsql dask will run the SQL code on the Dask execution engine.
One of the common operations that can benefit from moving to a distributed compute environment is getting the median of each group. In this example, we’ll show the PREPARTITION keyword and how to apply a function on each partition of data.
First, we define a Python function that takes in a DataFrame and outputs the user_id and the median measurement. This function is meant to operate on only one user_id at a time. Even if the function is defined in Pandas, it will work on Spark and Dask.
We can then use the PREPARTITION keyword to partition our data by the user_id and apply the get_median function.
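Because the article's snippets are images, here is a hedged plain-pandas sketch of what PREPARTITION plus get_median computes. The data and the column names (user_id, measurement) are assumptions for illustration; FugueSQL would run the same per-partition function on Spark or Dask.

```python
import pandas as pd

# Hypothetical data; user_id and measurement column names are assumptions.
df = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "b"],
    "measurement": [1.0, 2.0, 3.0, 10.0, 20.0],
})

def get_median(part: pd.DataFrame) -> pd.DataFrame:
    # Receives the rows of a single user_id partition, as the article describes.
    return pd.DataFrame({
        "user_id": [part["user_id"].iloc[0]],
        "median": [part["measurement"].median()],
    })

# Pandas stand-in for: PREPARTITION BY user_id, then apply get_median per group.
result = pd.concat(
    [get_median(group) for _, group in df.groupby("user_id")],
    ignore_index=True,
)
print(result)
```

The distributed engines parallelize exactly this per-group work, which is where the speedups quoted below come from.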
In this example, we get the median measurement of each user. As the data size gets larger, more benefits will be seen from the parallelization. In an example notebook we have, the Pandas engine took around 520 seconds for this operation. Using the Spark engine (parallelized on 4 cores) took around 70 seconds for a dataset with 320 million rows.
The difference in execution time is expected. What FugueSQL allows SQL users to do is extend their workflows to Spark and Dask when the data becomes too big for Pandas to effectively handle.
Another benefit is that Dask handles memory spillover by writing data to disk. This means users can process more data before hitting out-of-memory issues.
In this article, we explored the basic features of FugueSQL that allow users to work on top of Pandas, Spark, and Dask DataFrames through SQL cells in Jupyter notebooks.
Fugue decouples logic and execution, making it easy for users to specify the execution engine during runtime. This empowers heavy SQL users by allowing them to express their logic independent of a compute framework. They can easily migrate workflows to Spark or Dask when the situation calls for it.
There are a lot of details and features that can’t be covered in one blog post. For an end-to-end example, visit the Kaggle notebook that we prepared for Thinkful data analyst bootcamp students.
Fugue (and FugueSQL) are available through PyPI. They can be installed using pip (installation of Dask and Spark are separate).
pip install fugue
Inside a notebook, the FugueSQL cell magic %%fsql can be used after running the setup function. This also provides syntax highlighting for SQL commands.
from fugue_notebook import setup
setup()
If you’re interested in using FugueSQL, want to give us feedback, or have any questions, we’d be happy to chat on Slack! We are also giving workshops to data teams interested in applying FugueSQL (or Fugue) in their data workflows.
Project Repo
Slack channel
FugueSQL is just one part of the broader Fugue ecosystem. Fugue is an abstraction layer that allows users to write code in native Python, and then execute the code on Pandas, Spark, or Dask without code changes during runtime. More information can be found in the repo above.
|
[
{
"code": null,
"e": 347,
"s": 172,
"text": "Note: Most of the code snippets are images because that was the only way to preserve SQL syntax highlighting. For an interactive code example, check out this Kaggle notebook."
},
{
"code": null,
"e": 735,
"s": 347,
"text": "The goal of FugueSQL is to provide an enhanced SQL interface (and experience) for data professionals to perform end-to-end data compute workflows in a SQL-like language. With FugueSQL, SQL users can perform full Extract, Transform, Load (ETL) workflows on DataFrames inside Python code and Jupyter notebooks. The SQL is parsed and mapped to the corresponding Pandas, Spark, or Dask code."
},
{
"code": null,
"e": 1008,
"s": 735,
"text": "This empowers heavy SQL users to harness the power of Spark and Dask, while using their language of choice to express logic. Additionally, distributed compute keywords have been added such as PREPARTITIONandPERSIST, in order to extend the capabilities beyond standard SQL."
},
{
"code": null,
"e": 1145,
"s": 1008,
"text": "In this article we’ll go over the basic FugueSQL features, and how to use it on top of Spark or Dask by specifying the execution engine."
},
{
"code": null,
"e": 1387,
"s": 1145,
"text": "The first changes as seen in the GIF above are the LOADand SAVE keywords. Beyond these, there are some other enhancements that provide a friendlier syntax. Users can also use Python functions inside FugueSQL, creating a powerful combination."
},
{
"code": null,
"e": 1648,
"s": 1387,
"text": "FugueSQL users can have SQL cells in notebooks (more examples later) by using the %%fsqlcell magic. This also provides syntax highlighting in Jupyter notebooks. Although not demonstrated here, these SQL cells can be used in Python code with thefsql() function."
},
{
"code": null,
"e": 1668,
"s": 1648,
"text": "Variable Assignment"
},
{
"code": null,
"e": 2139,
"s": 1668,
"text": "DataFrames can be assigned to variables. This is similar to SQL temp tables or Common Table Expressions (CTE). Although not shows in this tutorial, these DataFrames can also be brought out of the SQL cells and used in Python cells. The example below shows two new DataFrames that came from modifying df . dfwas created by using Pandas in a Python cell (this is the same df as the first image). The two new DataFrames are joined together to create a DataFrame namedfinal."
},
{
"code": null,
"e": 2156,
"s": 2139,
"text": "Jinja Templating"
},
{
"code": null,
"e": 2302,
"s": 2156,
"text": "FugueSQL can interact with Python variables through Jinja templating. This allows Python logic to alter SQL queries similar to parameters in SQL."
},
{
"code": null,
"e": 2319,
"s": 2302,
"text": "Python Functions"
},
{
"code": null,
"e": 2520,
"s": 2319,
"text": "FugueSQL also supports using Python functions inside SQL code blocks. In the example below, we use seaborn to plot two columns of our DataFrame. We invoke the function using the OUTPUT keyword in SQL."
},
{
"code": null,
"e": 2838,
"s": 2520,
"text": "FugueSQL is meant to operate on data that is already loaded into memory (although there are ways to use FugueSQL to bring in data from storage). There is a project called ipython-sql that provides the %%sql cell magic command. This command is meant to use SQL to load data into the Python environment from a database."
},
{
"code": null,
"e": 3039,
"s": 2838,
"text": "FugueSQL’s guarantee is that the same SQL code will work on Pandas, Spark, and Dask without any code change. The focus of FugueSQL is in-memory computation, as opposed to loading data from a database."
},
{
"code": null,
"e": 3279,
"s": 3039,
"text": "As the volume of data we work with continues to increase, distributed compute engines such as Spark and Dask are becoming more widely adopted by data teams. FugueSQL allows users to use these more performant engines the same FugueSQL code."
},
{
"code": null,
"e": 3501,
"s": 3279,
"text": "In the code snippet below, we just changed the cell magic from %%fsql to %%fsql spark and now the SQL code will run on the Spark execution engine. Similarly, %%fsql dask will run the SQL code on the Dask execution engine."
},
{
"code": null,
"e": 3740,
"s": 3501,
"text": "One of the common operations that can benefit from moving to a distributed compute environment is getting the median of each group. In this example, we’ll show the PREPARTITIONkeyword and how to apply a function on each partition of data."
},
{
"code": null,
"e": 3993,
"s": 3740,
"text": "First, we define a Python function that takes in a DataFrame and outputs the user_id and the median measurement. This function is meant to operate on only one user_id at a time. Even if the function is defined in Pandas, it will work on Spark and Dask."
},
{
"code": null,
"e": 4106,
"s": 3993,
"text": "We can then use the PREPARTITION keyword to partition our data by the user_id and apply the get_median function."
},
{
"code": null,
"e": 4453,
"s": 4106,
"text": "In this example, we get the median measurement of each user. As the data size gets larger, more benefits will be seen from the parallelization. In an example notebook we have, the Pandas engine took around 520 seconds for this operation. Using the Spark engine (parallelized on 4 cores) took around 70 seconds for a dataset with 320 million rows."
},
{
"code": null,
"e": 4644,
"s": 4453,
"text": "The difference in execution time is expected. What FugueSQL allows SQL users to do is extend their workflows to Spark and Dask when the data becomes too big for Pandas to effectively handle."
},
{
"code": null,
"e": 4807,
"s": 4644,
"text": "Another common use-case is Dask handles memory spillover and writing data to the disk. This means users can process more data before hitting out-of-memory issues."
},
{
"code": null,
"e": 4977,
"s": 4807,
"text": "In this article, we explored the basics features of FugueSQL that allow users to work on top of Pandas, Spark, and Dask DataFrames through SQL cells in Jupyter notebook."
},
{
"code": null,
"e": 5276,
"s": 4977,
"text": "Fugue decouples logic and execution, making it easy for users to specify the execution engine during runtime. This empowers heavy SQL users by allowing them to express their logic indepedent of a compute framework. They can easily migrate workflows to Spark or Dask when the situation calls for it."
},
{
"code": null,
"e": 5471,
"s": 5276,
"text": "There are a lot of details and features that can’t be covered in one blog post. For an end-to-end example, visit the Kaggle notebook that we prepared for Thinkful data analyst bootcamp students."
},
{
"code": null,
"e": 5599,
"s": 5471,
"text": "Fugue (and FugueSQL) are available through PyPI. They can be installed using pip (installation of Dask and Spark are separate)."
},
{
"code": null,
"e": 5617,
"s": 5599,
"text": "pip install fugue"
},
{
"code": null,
"e": 5770,
"s": 5617,
"text": "Inside a notebook, the FugueSQL cell magic %%fsql can be used after running the setup function. This also provides syntax highlighting for SQL commands."
},
{
"code": null,
"e": 5810,
"s": 5770,
"text": "from fugue_notebook import setupsetup()"
},
{
"code": null,
"e": 6042,
"s": 5810,
"text": "If you’re interested in using FugueSQL, want to give us feedback, or have any questions, we’d be happy to chat on Slack! We are also giving workshops to data teams interested in applying FugueSQL (or Fugue) in their data workflows."
},
{
"code": null,
"e": 6055,
"s": 6042,
"text": "Project Repo"
},
{
"code": null,
"e": 6069,
"s": 6055,
"text": "Slack channel"
}
] |
How to find the rank of a vector elements in R from largest to smallest?
|
To find the ranks of the elements of a vector, we can use the rank function directly, but this will produce ranks from smallest to largest. For example, if we have a vector x that contains the values 1, 2, 3 in this sequence, then the rank function will return 1 2 3. But if we want ranks from largest to smallest, the result would be 3 2 1, and it can be obtained in R as rank(-x).
x1<-1:10
x1
[1] 1 2 3 4 5 6 7 8 9 10
rank(-x1)
[1] 10 9 8 7 6 5 4 3 2 1
x2<-rpois(100,5)
x2
[1] 5 1 4 8 5 1 5 6 2 1 6 5 1 4 2 4 3 6 7 6 3 5 7 4 11
[26] 5 2 4 6 2 6 9 1 1 5 7 2 5 4 3 8 9 5 5 6 4 9 4 2 6
[51] 4 9 5 1 4 9 4 8 6 5 5 6 1 12 7 6 3 5 9 2 6 3 6 4 5
[76] 9 3 4 6 3 6 5 11 2 4 7 2 5 5 7 5 9 4 2 3 6 3 4 5 4
rank(-x2)
[1] 55.5 55.5 36.5 55.5 36.5 23.0 89.0 36.5 74.5 55.5 74.5 1.0
[13] 74.5 36.5 7.5 89.0 55.5 74.5 36.5 89.0 14.0 36.5 36.5 97.0
[25] 89.0 14.0 55.5 89.0 89.0 14.0 4.0 36.5 74.5 89.0 55.5 97.0
[37] 74.5 23.0 97.0 14.0 36.5 7.5 89.0 23.0 14.0 89.0 23.0 55.5
[49] 55.5 97.0 36.5 2.0 55.5 74.5 74.5 89.0 7.5 4.0 100.0 74.5
[61] 36.5 55.5 55.5 23.0 14.0 36.5 36.5 23.0 36.5 55.5 55.5 36.5
[73] 36.5 74.5 14.0 55.5 14.0 7.5 55.5 55.5 55.5 4.0 23.0 97.0
[85] 74.5 89.0 74.5 74.5 14.0 55.5 55.5 74.5 36.5 36.5 74.5 74.5
[97] 23.0 74.5 23.0 74.5
x3<-sample(0:9,100,replace=TRUE)
x3
[1] 3 9 0 5 8 3 4 6 0 6 8 2 2 8 7 9 4 2 3 7 4 9 7 8 0 6 3 2 2 4 7 9 0 0 2 8 3
[38] 2 4 9 9 2 3 2 9 1 8 1 3 2 9 2 2 5 6 2 5 8 8 9 7 5 1 8 0 1 3 5 1 3 4 3 9 5
[75] 3 9 9 5 1 1 7 8 6 3 4 4 1 9 9 2 1 6 6 7 4 6 2 5 8 3
rank(-x3)
[1] 80.0 14.5 54.5 71.0 54.5 93.0 45.5 5.5 5.5 64.0 64.0 54.5 14.5 71.0 80.0
[16] 5.5 93.0 54.5 54.5 14.5 54.5 24.0 64.0 14.5 80.0 45.5 80.0 54.5 36.0 14.5
[31] 36.0 80.0 80.0 36.0 64.0 93.0 36.0 71.0 54.5 24.0 5.5 80.0 36.0 93.0 45.5
[46] 24.0 54.5 80.0 93.0 80.0 14.5 93.0 5.5 45.5 54.5 54.5 36.0 45.5 93.0 36.0
[61] 24.0 54.5 5.5 24.0 14.5 71.0 64.0 36.0 80.0 45.5 24.0 5.5 24.0 5.5 93.0
[76] 64.0 24.0 93.0 24.0 24.0 93.0 5.5 93.0 36.0 36.0 71.0 93.0 93.0 64.0 80.0
[91] 36.0 36.0 5.5 24.0 71.0 71.0 14.5 36.0 93.0 93.0
x4<-sample(100,100,replace=TRUE)
x4
[1] 24 2 100 26 18 61 25 100 9 22 3 19 74 93 59 80 88 32
[19] 54 35 44 19 60 87 81 16 71 10 75 13 8 54 58 44 56 1
[37] 14 69 55 4 19 90 22 16 73 4 65 21 79 62 10 6 78 29
[55] 25 37 69 77 20 24 52 78 50 81 92 13 21 15 50 14 84 78
[73] 64 94 6 3 58 23 88 85 35 60 8 75 84 45 17 94 42 98
[91] 67 88 29 34 40 47 20 62 75 73
rank(-x4)
[1] 70.0 53.0 87.0 42.5 18.0 8.5 44.5 93.5 16.0 12.0 33.0 82.5
[13] 28.5 78.0 26.0 60.5 79.5 87.0 99.0 33.0 82.5 47.0 17.0 44.5
[25] 12.0 33.0 58.5 54.0 39.0 64.0 12.0 47.0 87.0 92.0 47.0 98.0
[37] 15.0 55.0 37.5 81.0 49.5 40.5 23.5 79.5 95.5 21.5 40.5 60.5
[49] 75.0 49.5 76.5 72.0 1.5 90.0 97.0 30.5 35.5 3.0 91.0 7.0
[61] 76.5 27.0 21.5 85.0 4.5 19.5 56.5 19.5 65.0 89.0 37.5 35.5
[73] 25.0 66.0 93.5 72.0 12.0 72.0 12.0 28.5 63.0 56.5 84.0 1.5
[85] 58.5 6.0 95.5 62.0 8.5 69.0 68.0 67.0 4.5 51.0 100.0 74.0
[97] 42.5 30.5 52.0 23.5
x5<-sample(101:199,100,replace=TRUE)
x5
[1] 184 142 123 176 190 156 148 111 123 175 124 189 196 198 191 154 181 199
[19] 155 193 122 194 179 132 111 151 111 178 124 171 114 168 132 164 105 177
[37] 166 169 136 101 127 150 181 180 109 122 132 126 147 162 141 164 125 163
[55] 177 143 119 177 149 111 182 123 122 163 198 130 135 184 138 107 103 147
[73] 140 135 198 138 188 134 114 186 173 138 161 132 160 176 190 117 161 197
[91] 161 124 116 110 126 119 173 183 155 115
rank(-x5)
[1] 30.0 44.0 42.5 9.0 40.5 12.0 34.0 37.0 27.5 83.0 99.0 63.5
[13] 23.5 16.5 92.0 12.0 25.0 67.0 96.5 48.5 78.0 85.0 18.0 96.5
[25] 15.0 82.0 9.0 37.0 5.0 94.5 39.0 29.0 22.0 52.5 66.0 2.0
[37] 58.0 60.0 14.0 81.0 93.0 23.5 90.0 50.5 37.0 88.5 45.5 55.5
[49] 86.5 27.5 26.0 3.0 9.0 88.5 5.0 5.0 91.0 63.5 84.0 63.5
[61] 94.5 48.5 54.0 69.5 34.0 75.5 71.0 20.0 31.5 42.5 100.0 40.5
[73] 20.0 52.5 79.5 69.5 86.5 75.5 55.5 45.5 98.0 73.5 1.0 59.0
[85] 57.0 47.0 79.5 73.5 20.0 16.5 7.0 50.5 34.0 63.5 68.0 61.0
[97] 31.5 12.0 77.0 72.0
x6<-sample(rpois(10,5),100,replace=TRUE)
x6
[1] 5 5 4 5 5 4 5 8 4 3 2 2 4 5 8 4 6 2 5 4 4 4 4 5 4 3 6 4 6 8 4 8 7 4 4 5 4
[38] 5 3 6 4 4 2 8 4 7 3 2 4 4 7 5 2 4 5 3 4 2 4 2 2 8 3 5 4 4 5 8 7 5 2 4 6 5
[75] 6 4 2 2 3 5 2 2 3 3 8 7 2 3 3 6 8 2 6 5 2 5 5 5 8 5
rank(-x6)
[1] 29.0 96.0 74.5 29.0 74.5 74.5 7.0 96.0 29.0 74.5 74.5 74.5 74.5 74.5 7.0
[16] 7.0 7.0 29.0 96.0 74.5 74.5 7.0 7.0 96.0 74.5 29.0 96.0 7.0 51.0 74.5
[31] 51.0 29.0 7.0 74.5 29.0 74.5 29.0 29.0 74.5 29.0 51.0 29.0 51.0 51.0 29.0
[46] 29.0 74.5 74.5 29.0 74.5 51.0 74.5 74.5 51.0 74.5 29.0 29.0 51.0 74.5 74.5
[61] 51.0 74.5 74.5 29.0 29.0 74.5 29.0 96.0 96.0 29.0 51.0 74.5 74.5 74.5 96.0
[76] 7.0 29.0 7.0 29.0 74.5 29.0 74.5 29.0 29.0 29.0 7.0 74.5 29.0 51.0 7.0
[91] 74.5 7.0 51.0 29.0 96.0 74.5 29.0 51.0 29.0 29.0
x7<-round(runif(100,5,10),0)
x7
[1] 5 8 6 9 10 7 7 10 5 9 8 9 7 8 8 6 8 6 6 7 9 6 10 10 8
[26] 8 7 9 9 10 9 8 7 7 6 7 10 6 8 9 7 7 9 6 6 5 5 5 9 9
[51] 6 9 10 7 7 5 9 10 7 7 9 7 6 6 7 8 6 6 9 6 10 9 6 8 7
[76] 7 6 9 7 10 9 7 6 6 6 5 9 10 7 6 9 8 8 7 8 8 8 9 9 8
rank(-x7)
[1] 42.5 78.5 22.5 6.0 96.0 6.0 58.5 42.5 42.5 78.5 78.5 22.5 78.5 78.5 6.0
[16] 78.5 78.5 42.5 22.5 22.5 78.5 58.5 6.0 42.5 22.5 58.5 58.5 6.0 22.5 96.0
[31] 96.0 42.5 6.0 58.5 78.5 96.0 78.5 22.5 22.5 6.0 58.5 42.5 96.0 22.5 22.5
[46] 58.5 58.5 58.5 58.5 78.5 78.5 22.5 22.5 22.5 6.0 78.5 22.5 6.0 78.5 42.5
[61] 42.5 58.5 78.5 22.5 6.0 78.5 78.5 42.5 42.5 42.5 78.5 78.5 78.5 42.5 96.0
[76] 78.5 78.5 96.0 42.5 58.5 78.5 42.5 42.5 42.5 6.0 78.5 22.5 58.5 42.5 22.5
[91] 22.5 22.5 22.5 78.5 96.0 58.5 78.5 22.5 22.5 96.0
x8<-rexp(50,2.5)
x8
[1] 0.29346231 0.19440966 0.77117130 0.08592004 0.04575112 0.04967382
[7] 0.37039906 0.24414045 0.89440198 0.23974022 1.19025638 0.33477031
[13] 1.11400244 0.37368447 0.29478339 0.03654986 0.11947211 0.08989231
[19] 0.15917572 1.22241385 1.09067800 0.15827342 0.40054308 0.04406150
[25] 0.59635287 0.06558528 0.02868031 0.45926452 0.07033172 0.87111673
[31] 0.57026937 1.14810306 0.37622534 0.23697019 0.13202811 0.59222703
[37] 0.54024297 0.04767550 0.07921466 0.52566672 0.49331287 1.50206460
[43] 0.15669989 0.41877724 0.54670825 0.18044803 0.39152010 0.17849852
[49] 0.31529803 1.40226889
rank(-x8)
[1] 15 43 20 12 9 13 27 40 25 32 28 17 46 18 22 8 26 23 47 1 2 41 4 42 29
[26] 34 38 35 24 14 44 49 16 7 19 45 31 36 21 39 33 48 37 3 50 11 30 10 6 5
x9<-round(rexp(50,2.5),0)
x9
[1] 0 0 0 2 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0
[39] 0 1 0 1 0 0 0 0 0 1 0 0
rank(-x9)
[1] 30.5 6.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 6.5 1.5
[16] 30.5 30.5 30.5 30.5 30.5 30.5 6.5 30.5 6.5 1.5 30.5 30.5 30.5 6.5 30.5
[31] 30.5 30.5 30.5 6.5 30.5 30.5 30.5 30.5 6.5 6.5 30.5 30.5 30.5 30.5 30.5
[46] 30.5 30.5 30.5 30.5 30.5
x10<-round(rnorm(100,45,3),0)
x10
[1] 48 48 43 46 48 44 39 49 44 42 46 46 41 50 50 46 46 40 47 45 54 48 44 45 44
[26] 48 48 49 47 44 43 45 35 45 44 42 42 45 49 39 44 39 50 44 44 41 47 44 43 47
[51] 45 44 37 42 49 43 49 39 50 51 41 47 43 45 46 45 45 45 46 43 45 44 50 47 43
[76] 49 49 41 47 40 46 46 48 44 45 46 48 42 42 48 47 49 44 42 46 46 46 40 43 47
rank(-x10)
[1] 95.5 29.5 80.0 54.0 41.5 41.5 68.5 80.0 54.0 29.5 68.5 41.5
[13] 41.5 6.0 54.0 54.0 29.5 19.0 68.5 68.5 98.5 68.5 98.5 41.5
[25] 6.0 91.0 29.5 68.5 80.0 11.0 6.0 29.5 95.5 91.0 14.0 19.0
[37] 54.0 68.5 29.5 80.0 100.0 86.0 54.0 68.5 29.5 1.0 68.5 68.5
[49] 54.0 29.5 68.5 19.0 6.0 19.0 91.0 68.5 91.0 68.5 95.5 11.0
[61] 86.0 54.0 54.0 2.5 80.0 11.0 68.5 29.5 41.5 68.5 6.0 54.0
[73] 11.0 54.0 91.0 86.0 41.5 41.5 11.0 41.5 80.0 95.5 41.5 54.0
[85] 19.0 29.5 41.5 29.5 86.0 19.0 19.0 19.0 68.5 29.5 2.5 19.0
[97] 54.0 86.0 41.5 80.0
|
[
{
"code": null,
"e": 1429,
"s": 1062,
"text": "To find the rank of a vector of elements we can use rank function directly but this will result in ranks from smallest to largest. For example, if we have a vector x that contains values 1, 2, 3 in this sequence then the rank function will return 1 2 3. But if we want to get ranks from largest to smallest then it would be 3 2 1 and it can be done in R as rank(-x)."
},
{
"code": null,
"e": 1440,
"s": 1429,
"text": " Live Demo"
},
{
"code": null,
"e": 1452,
"s": 1440,
"text": "x1<-1:10\nx1"
},
{
"code": null,
"e": 1477,
"s": 1452,
"text": "[1] 1 2 3 4 5 6 7 8 9 10"
},
{
"code": null,
"e": 1487,
"s": 1477,
"text": "rank(-x1)"
},
{
"code": null,
"e": 1513,
"s": 1487,
"text": "[1] 10 9 8 7 6 5 4 3 2 1\n"
},
{
"code": null,
"e": 1524,
"s": 1513,
"text": " Live Demo"
},
{
"code": null,
"e": 1544,
"s": 1524,
"text": "x2<-rpois(100,5)\nx2"
},
{
"code": null,
"e": 1767,
"s": 1544,
"text": "[1] 5 1 4 8 5 1 5 6 2 1 6 5 1 4 2 4 3 6 7 6 3 5 7 4 11\n[26] 5 2 4 6 2 6 9 1 1 5 7 2 5 4 3 8 9 5 5 6 4 9 4 2 6\n[51] 4 9 5 1 4 9 4 8 6 5 5 6 1 12 7 6 3 5 9 2 6 3 6 4 5\n[76] 9 3 4 6 3 6 5 11 2 4 7 2 5 5 7 5 9 4 2 3 6 3 4 5 4"
},
{
"code": null,
"e": 1777,
"s": 1767,
"text": "rank(-x2)"
},
{
"code": null,
"e": 2314,
"s": 1777,
"text": "[1] 55.5 55.5 36.5 55.5 36.5 23.0 89.0 36.5 74.5 55.5 74.5 1.0\n [13] 74.5 36.5 7.5 89.0 55.5 74.5 36.5 89.0 14.0 36.5 36.5 97.0\n[25] 89.0 14.0 55.5 89.0 89.0 14.0 4.0 36.5 74.5 89.0 55.5 97.0\n[37] 74.5 23.0 97.0 14.0 36.5 7.5 89.0 23.0 14.0 89.0 23.0 55.5\n[49] 55.5 97.0 36.5 2.0 55.5 74.5 74.5 89.0 7.5 4.0 100.0 74.5\n[61] 36.5 55.5 55.5 23.0 14.0 36.5 36.5 23.0 36.5 55.5 55.5 36.5\n[73] 36.5 74.5 14.0 55.5 14.0 7.5 55.5 55.5 55.5 4.0 23.0 97.0\n[85] 74.5 89.0 74.5 74.5 14.0 55.5 55.5 74.5 36.5 36.5 74.5 74.5\n[97] 23.0 74.5 23.0 74.5"
},
{
"code": null,
"e": 2325,
"s": 2314,
"text": " Live Demo"
},
{
"code": null,
"e": 2361,
"s": 2325,
"text": "x3<-sample(0:9,100,replace=TRUE)\nx3"
},
{
"code": null,
"e": 2575,
"s": 2361,
"text": "[1] 3 9 0 5 8 3 4 6 0 6 8 2 2 8 7 9 4 2 3 7 4 9 7 8 0 6 3 2 2 4 7 9 0 0 2 8 3\n[38] 2 4 9 9 2 3 2 9 1 8 1 3 2 9 2 2 5 6 2 5 8 8 9 7 5 1 8 0 1 3 5 1 3 4 3 9 5\n[75] 3 9 9 5 1 1 7 8 6 3 4 4 1 9 9 2 1 6 6 7 4 6 2 5 8 3"
},
{
"code": null,
"e": 2585,
"s": 2575,
"text": "rank(-x3)"
},
{
"code": null,
"e": 3110,
"s": 2585,
"text": "[1] 80.0 14.5 54.5 71.0 54.5 93.0 45.5 5.5 5.5 64.0 64.0 54.5 14.5 71.0 80.0\n[16] 5.5 93.0 54.5 54.5 14.5 54.5 24.0 64.0 14.5 80.0 45.5 80.0 54.5 36.0 14.5\n[31] 36.0 80.0 80.0 36.0 64.0 93.0 36.0 71.0 54.5 24.0 5.5 80.0 36.0 93.0 45.5\n[46] 24.0 54.5 80.0 93.0 80.0 14.5 93.0 5.5 45.5 54.5 54.5 36.0 45.5 93.0 36.0\n[61] 24.0 54.5 5.5 24.0 14.5 71.0 64.0 36.0 80.0 45.5 24.0 5.5 24.0 5.5 93.0\n[76] 64.0 24.0 93.0 24.0 24.0 93.0 5.5 93.0 36.0 36.0 71.0 93.0 93.0 64.0 80.0\n [91] 36.0 36.0 5.5 24.0 71.0 71.0 14.5 36.0 93.0 93.0"
},
{
"code": null,
"e": 3121,
"s": 3110,
"text": " Live Demo"
},
{
"code": null,
"e": 3157,
"s": 3121,
"text": "x4<-sample(100,100,replace=TRUE)\nx4"
},
{
"code": null,
"e": 3477,
"s": 3157,
"text": "[1] 24 2 100 26 18 61 25 100 9 22 3 19 74 93 59 80 88 32\n[19] 54 35 44 19 60 87 81 16 71 10 75 13 8 54 58 44 56 1\n[37] 14 69 55 4 19 90 22 16 73 4 65 21 79 62 10 6 78 29\n[55] 25 37 69 77 20 24 52 78 50 81 92 13 21 15 50 14 84 78\n[73] 64 94 6 3 58 23 88 85 35 60 8 75 84 45 17 94 42 98\n[91] 67 88 29 34 40 47 20 62 75 73"
},
{
"code": null,
"e": 3487,
"s": 3477,
"text": "rank(-x4)"
},
{
"code": null,
"e": 4026,
"s": 3487,
"text": "[1] 70.0 53.0 87.0 42.5 18.0 8.5 44.5 93.5 16.0 12.0 33.0 82.5\n[13] 28.5 78.0 26.0 60.5 79.5 87.0 99.0 33.0 82.5 47.0 17.0 44.5\n [25] 12.0 33.0 58.5 54.0 39.0 64.0 12.0 47.0 87.0 92.0 47.0 98.0\n [37] 15.0 55.0 37.5 81.0 49.5 40.5 23.5 79.5 95.5 21.5 40.5 60.5\n[49] 75.0 49.5 76.5 72.0 1.5 90.0 97.0 30.5 35.5 3.0 91.0 7.0\n[61] 76.5 27.0 21.5 85.0 4.5 19.5 56.5 19.5 65.0 89.0 37.5 35.5\n[73] 25.0 66.0 93.5 72.0 12.0 72.0 12.0 28.5 63.0 56.5 84.0 1.5\n [85] 58.5 6.0 95.5 62.0 8.5 69.0 68.0 67.0 4.5 51.0 100.0 74.0\n[97] 42.5 30.5 52.0 23.5"
},
{
"code": null,
"e": 4037,
"s": 4026,
"text": " Live Demo"
},
{
"code": null,
"e": 4077,
"s": 4037,
"text": "x5<-sample(101:199,100,replace=TRUE)\nx5"
},
{
"code": null,
"e": 4506,
"s": 4077,
"text": "[1] 184 142 123 176 190 156 148 111 123 175 124 189 196 198 191 154 181 199\n[19] 155 193 122 194 179 132 111 151 111 178 124 171 114 168 132 164 105 177\n[37] 166 169 136 101 127 150 181 180 109 122 132 126 147 162 141 164 125 163\n[55] 177 143 119 177 149 111 182 123 122 163 198 130 135 184 138 107 103 147\n[73] 140 135 198 138 188 134 114 186 173 138 161 132 160 176 190 117 161 197\n[91] 161 124 116 110 126 119 173 183 155 115"
},
{
"code": null,
"e": 4516,
"s": 4506,
"text": "rank(-x5)"
},
{
"code": null,
"e": 5052,
"s": 4516,
"text": "[1] 30.0 44.0 42.5 9.0 40.5 12.0 34.0 37.0 27.5 83.0 99.0 63.5\n[13] 23.5 16.5 92.0 12.0 25.0 67.0 96.5 48.5 78.0 85.0 18.0 96.5\n[25] 15.0 82.0 9.0 37.0 5.0 94.5 39.0 29.0 22.0 52.5 66.0 2.0\n[37] 58.0 60.0 14.0 81.0 93.0 23.5 90.0 50.5 37.0 88.5 45.5 55.5\n[49] 86.5 27.5 26.0 3.0 9.0 88.5 5.0 5.0 91.0 63.5 84.0 63.5\n[61] 94.5 48.5 54.0 69.5 34.0 75.5 71.0 20.0 31.5 42.5 100.0 40.5\n[73] 20.0 52.5 79.5 69.5 86.5 75.5 55.5 45.5 98.0 73.5 1.0 59.0\n [85] 57.0 47.0 79.5 73.5 20.0 16.5 7.0 50.5 34.0 63.5 68.0 61.0\n[97] 31.5 12.0 77.0 72.0"
},
{
"code": null,
"e": 5063,
"s": 5052,
"text": " Live Demo"
},
{
"code": null,
"e": 5107,
"s": 5063,
"text": "x6<-sample(rpois(10,5),100,replace=TRUE)\nx6"
},
{
"code": null,
"e": 5321,
"s": 5107,
"text": "[1] 5 5 4 5 5 4 5 8 4 3 2 2 4 5 8 4 6 2 5 4 4 4 4 5 4 3 6 4 6 8 4 8 7 4 4 5 4\n[38] 5 3 6 4 4 2 8 4 7 3 2 4 4 7 5 2 4 5 3 4 2 4 2 2 8 3 5 4 4 5 8 7 5 2 4 6 5\n[75] 6 4 2 2 3 5 2 2 3 3 8 7 2 3 3 6 8 2 6 5 2 5 5 5 8 5"
},
{
"code": null,
"e": 5331,
"s": 5321,
"text": "rank(-x6)"
},
{
"code": null,
"e": 5852,
"s": 5331,
"text": "[1] 29.0 96.0 74.5 29.0 74.5 74.5 7.0 96.0 29.0 74.5 74.5 74.5 74.5 74.5 7.0\n[16] 7.0 7.0 29.0 96.0 74.5 74.5 7.0 7.0 96.0 74.5 29.0 96.0 7.0 51.0 74.5\n[31] 51.0 29.0 7.0 74.5 29.0 74.5 29.0 29.0 74.5 29.0 51.0 29.0 51.0 51.0 29.0\n[46] 29.0 74.5 74.5 29.0 74.5 51.0 74.5 74.5 51.0 74.5 29.0 29.0 51.0 74.5 74.5\n[61] 51.0 74.5 74.5 29.0 29.0 74.5 29.0 96.0 96.0 29.0 51.0 74.5 74.5 74.5 96.0\n[76] 7.0 29.0 7.0 29.0 74.5 29.0 74.5 29.0 29.0 29.0 7.0 74.5 29.0 51.0 7.0\n[91] 74.5 7.0 51.0 29.0 96.0 74.5 29.0 51.0 29.0 29.0"
},
{
"code": null,
"e": 5863,
"s": 5852,
"text": " Live Demo"
},
{
"code": null,
"e": 5895,
"s": 5863,
"text": "x7<-round(runif(100,5,10),0)\nx7"
},
{
"code": null,
"e": 6125,
"s": 5895,
"text": "[1] 5 8 6 9 10 7 7 10 5 9 8 9 7 8 8 6 8 6 6 7 9 6 10 10 8\n[26] 8 7 9 9 10 9 8 7 7 6 7 10 6 8 9 7 7 9 6 6 5 5 5 9 9\n[51] 6 9 10 7 7 5 9 10 7 7 9 7 6 6 7 8 6 6 9 6 10 9 6 8 7\n[76] 7 6 9 7 10 9 7 6 6 6 5 9 10 7 6 9 8 8 7 8 8 8 9 9 8"
},
{
"code": null,
"e": 6135,
"s": 6125,
"text": "rank(-x7)"
},
{
"code": null,
"e": 6661,
"s": 6135,
"text": "[1] 42.5 78.5 22.5 6.0 96.0 6.0 58.5 42.5 42.5 78.5 78.5 22.5 78.5 78.5 6.0\n[16] 78.5 78.5 42.5 22.5 22.5 78.5 58.5 6.0 42.5 22.5 58.5 58.5 6.0 22.5 96.0\n [31] 96.0 42.5 6.0 58.5 78.5 96.0 78.5 22.5 22.5 6.0 58.5 42.5 96.0 22.5 22.5\n[46] 58.5 58.5 58.5 58.5 78.5 78.5 22.5 22.5 22.5 6.0 78.5 22.5 6.0 78.5 42.5\n [61] 42.5 58.5 78.5 22.5 6.0 78.5 78.5 42.5 42.5 42.5 78.5 78.5 78.5 42.5 96.0\n [76] 78.5 78.5 96.0 42.5 58.5 78.5 42.5 42.5 42.5 6.0 78.5 22.5 58.5 42.5 22.5\n[91] 22.5 22.5 22.5 78.5 96.0 58.5 78.5 22.5 22.5 96.0"
},
{
"code": null,
"e": 6672,
"s": 6661,
"text": " Live Demo"
},
{
"code": null,
"e": 6692,
"s": 6672,
"text": "x8<-rexp(50,2.5)\nx8"
},
{
"code": null,
"e": 7285,
"s": 6692,
"text": "[1] 0.29346231 0.19440966 0.77117130 0.08592004 0.04575112 0.04967382\n[7] 0.37039906 0.24414045 0.89440198 0.23974022 1.19025638 0.33477031\n[13] 1.11400244 0.37368447 0.29478339 0.03654986 0.11947211 0.08989231\n[19] 0.15917572 1.22241385 1.09067800 0.15827342 0.40054308 0.04406150\n[25] 0.59635287 0.06558528 0.02868031 0.45926452 0.07033172 0.87111673\n[31] 0.57026937 1.14810306 0.37622534 0.23697019 0.13202811 0.59222703\n[37] 0.54024297 0.04767550 0.07921466 0.52566672 0.49331287 1.50206460\n[43] 0.15669989 0.41877724 0.54670825 0.18044803 0.39152010 0.17849852\n[49] 0.31529803 1.40226889"
},
{
"code": null,
"e": 7295,
"s": 7285,
"text": "rank(-x8)"
},
{
"code": null,
"e": 7445,
"s": 7295,
"text": "[1] 15 43 20 12 9 13 27 40 25 32 28 17 46 18 22 8 26 23 47 1 2 41 4 42 29\n[26] 34 38 35 24 14 44 49 16 7 19 45 31 36 21 39 33 48 37 3 50 11 30 10 6 5"
},
{
"code": null,
"e": 7456,
"s": 7445,
"text": " Live Demo"
},
{
"code": null,
"e": 7485,
"s": 7456,
"text": "x9<-round(rexp(50,2.5),0)\nx9"
},
{
"code": null,
"e": 7594,
"s": 7485,
"text": "[1] 0 0 0 2 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0\n[39] 0 1 0 1 0 0 0 0 0 1 0 0"
},
{
"code": null,
"e": 7604,
"s": 7594,
"text": "rank(-x9)"
},
{
"code": null,
"e": 7863,
"s": 7604,
"text": "[1] 30.5 6.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 30.5 6.5 1.5\n[16] 30.5 30.5 30.5 30.5 30.5 30.5 6.5 30.5 6.5 1.5 30.5 30.5 30.5 6.5 30.5\n[31] 30.5 30.5 30.5 6.5 30.5 30.5 30.5 30.5 6.5 6.5 30.5 30.5 30.5 30.5 30.5\n[46] 30.5 30.5 30.5 30.5 30.5"
},
{
"code": null,
"e": 7874,
"s": 7863,
"text": " Live Demo"
},
{
"code": null,
"e": 7908,
"s": 7874,
"text": "x10<-round(rnorm(100,45,3),0)\nx10"
},
{
"code": null,
"e": 8227,
"s": 7908,
"text": "[1] 48 48 43 46 48 44 39 49 44 42 46 46 41 50 50 46 46 40 47 45 54 48 44 45 44\n[26] 48 48 49 47 44 43 45 35 45 44 42 42 45 49 39 44 39 50 44 44 41 47 44 43 47\n[51] 45 44 37 42 49 43 49 39 50 51 41 47 43 45 46 45 45 45 46 43 45 44 50 47 43\n[76] 49 49 41 47 40 46 46 48 44 45 46 48 42 42 48 47 49 44 42 46 46 46 40 43 47"
},
{
"code": null,
"e": 8238,
"s": 8227,
"text": "rank(-x10)"
},
{
"code": null,
"e": 8777,
"s": 8238,
"text": "[1] 95.5 29.5 80.0 54.0 41.5 41.5 68.5 80.0 54.0 29.5 68.5 41.5\n[13] 41.5 6.0 54.0 54.0 29.5 19.0 68.5 68.5 98.5 68.5 98.5 41.5\n[25] 6.0 91.0 29.5 68.5 80.0 11.0 6.0 29.5 95.5 91.0 14.0 19.0\n[37] 54.0 68.5 29.5 80.0 100.0 86.0 54.0 68.5 29.5 1.0 68.5 68.5\n [49] 54.0 29.5 68.5 19.0 6.0 19.0 91.0 68.5 91.0 68.5 95.5 11.0\n[61] 86.0 54.0 54.0 2.5 80.0 11.0 68.5 29.5 41.5 68.5 6.0 54.0\n [73] 11.0 54.0 91.0 86.0 41.5 41.5 11.0 41.5 80.0 95.5 41.5 54.0\n[85] 19.0 29.5 41.5 29.5 86.0 19.0 19.0 19.0 68.5 29.5 2.5 19.0\n[97] 54.0 86.0 41.5 80.0"
}
] |
Introducing FugueSQL — SQL for Pandas, Spark, and Dask DataFrames | by Khuyen Tran | Towards Data Science
|
As a data scientist, you might be familiar with both Pandas and SQL. However, there might be some queries, transformations that you feel comfortable doing in SQL instead of Python.
Wouldn’t it be nice if you could query a pandas DataFrame like below:
... using SQL?
Or use a Python function within a SQL query?
That is when FugueSQL comes in handy.
FugueSQL is a Python library that allows users to combine Python code and SQL commands. This gives users the flexibility to switch between Python and SQL within a Jupyter Notebook or a Python script.
To install FugueSQL, type:
pip install fugue[sql]
To run on Spark or Dask execution engines, type:
pip install fugue[sql, spark]
pip install fugue[sql, dask]
pip install fugue[all]
In this article, we will explore some utilities of FugueSQL and compare FugueSQL with other tools such as pandasql.
FugueSQL comes with a Jupyter notebook extension that allows users to interactively query DataFrames with syntax highlighting.
To use it, import the setup function from fugue_notebook to register the %%fsql cell magic. This is only available on classic notebooks for now (not available on JupyterLab).
To understand how the %%fsql cell magic works, let’s start with creating a pandas DataFrame:
Now, you can query just as you normally would in SQL by adding %%fsql at the beginning of the cell.
In the code above, only PRINT does not follow standard SQL. This is similar to the pandas head() and Spark show() operations to display a number of rows.
Operations such as GROUP BY are similar to standard SQL syntax.
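Since the notebook cells themselves are not reproduced here, below is a minimal sketch of such a grouped query. The DataFrame and column names are invented for illustration, and because FugueSQL may not be installed, the FugueSQL cell is shown as a comment while the plain-pandas equivalent is what actually runs:

```python
import pandas as pd

# Toy DataFrame -- the column names are invented for illustration.
df = pd.DataFrame({"col1": ["A", "A", "B", "B"], "col2": [1, 2, 3, 4]})

# In a notebook, the FugueSQL cell would look roughly like:
# %%fsql
# SELECT col1, SUM(col2) AS total
#   FROM df
#  GROUP BY col1
#  PRINT

# The same logic expressed directly in pandas:
total = (df.groupby("col1", as_index=False)["col2"]
           .sum()
           .rename(columns={"col2": "total"}))
print(total)
```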
For SQL users, nothing shown above is out of the ordinary except for the PRINT statement. However, Fugue also adds some enhancements to standard SQL, allowing it to handle end-to-end data workflows gracefully.
SQL users often have to use temp tables or common table expressions (CTE) to hold intermediate transformations. Luckily, FugueSQL supports the creation of intermediate tables through a variable assignment.
For example, after transforming df , we can assign it to another variable called df2 and save df2 to a file using SAVE variable OVERWRITE file_name .
Now, if we want to apply more transformation to df2 , simply load it from the file we saved previously.
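A sketch of this save-and-reload pattern: the FugueSQL lines appear as comments (their exact syntax here is an assumption), and the runnable part below is a plain-pandas equivalent using a CSV file in a temporary directory, with invented column names:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"col1": ["A", "B"], "col2": [1, 2]})

# Roughly what the FugueSQL workflow looks like:
#   df2 = SELECT col1, col2 * 10 AS col2 FROM df
#   SAVE df2 OVERWRITE "/tmp/df2.parquet"
#   ...later...
#   df2 = LOAD "/tmp/df2.parquet"

# The same save/load roundtrip in plain pandas:
df2 = df.assign(col2=df["col2"] * 10)      # the intermediate table
path = os.path.join(tempfile.mkdtemp(), "df2.csv")
df2.to_csv(path, index=False)              # SAVE
df2_back = pd.read_csv(path)               # LOAD
print(df2_back)
```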
Pretty cool, isn’t it?
SQL’s grammar is meant for querying, which means that it lacks keywords to manipulate data. FugueSQL adds some keywords for common DataFrame operations. For example:
DROP
FILL NULLS PARAMS
SAMPLE
For a full list of operators, check the FugueSQL operator docs.
FugueSQL also allows you to use Python functions within a SQL query using TRANSFORM .
For example, to use the function str_concat in a SQL query:
... simply add the following components to the function:
Output schema hint (as a comment)
Type annotations (pd.DataFrame )
Cool! Now we are ready to add it to a SQL query:
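As a sketch of what such a function looks like — the column names and the concrete behavior of str_concat are assumptions, and since FugueSQL may not be installed the function is exercised directly in pandas, with the %%fsql usage shown as a comment:

```python
import pandas as pd

# The schema hint comment and the pd.DataFrame type annotations are what
# make a plain function usable with TRANSFORM.  Column names are invented.

# schema: *, combined:str
def str_concat(df: pd.DataFrame) -> pd.DataFrame:
    return df.assign(combined=df["first"] + " " + df["last"])

# In a %%fsql cell it would be used roughly like:
#   SELECT * FROM df
#   TRANSFORM USING str_concat
#   PRINT

# Because it is ordinary Python, the function also works on its own:
people = pd.DataFrame({"first": ["Ada"], "last": ["Lovelace"]})
print(str_concat(people))
```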
One of the beautiful properties of SQL is that it is agnostic to the size of the data. The logic is expressed in a scale-agnostic manner and will remain the same even if running on Pandas, Spark, or Dask.
With FugueSQL, we can apply the same logic on the Spark execution engine just by specifying %%fsql spark. We don’t even need to edit the str_concat function to bring it to Spark as Fugue takes care of porting it.
One of the important parts of distributed computing is partitioning. For example, to get the median value in each logical group, the data needs to be partitioned such that each logical group lives on the same worker.
To describe this, FugueSQL has the PREPARTITION BY keyword. Fugue’s prepartition-transform semantics are equivalent to the pandas groupby-apply. The only difference is that prepartition-transform scales to the distributed setting as it dictates the location of the data.
Note that the get_median function above gets called once for each distinct value in the column col2 . Because the data is partitioned beforehand, we can just pull the first value of col2 to know what group we are working with.
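A sketch of such a per-partition function (column names are invented; since FugueSQL may not be available here, the local pandas groupby-apply equivalent is what actually runs, with the FugueSQL query shown as a comment):

```python
import pandas as pd

# Every row handed to this function shares the same col2 value, because the
# data was prepartitioned -- so the group key can be read from the first row.

# schema: col2:str, median:double
def get_median(df: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame({"col2": [df["col2"].iloc[0]],
                         "median": [df["col1"].median()]})

# FugueSQL version, roughly:
#   SELECT * FROM df
#   TRANSFORM PREPARTITION BY col2 USING get_median

# Locally this is the same as pandas groupby-apply:
df = pd.DataFrame({"col1": [1, 2, 3, 10, 20],
                   "col2": ["a", "a", "a", "b", "b"]})
medians = pd.concat(get_median(g) for _, g in df.groupby("col2"))
print(medians)
```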
To bring FugueSQL out of Jupyter notebooks and into Python scripts, all we need to do is wrap the FugueSQL query inside a fsql class. We can then call the .run() method and choose an execution engine.
If you know pandasql, you might wonder: Why should you use FugueSQL if pandasql already allows you to run SQL with pandas?
pandasql has a single backend, SQLite. It introduces a large overhead to transfer data between pandas and SQLite. On the other hand, FugueSQL supports multiple local backends: pandas, DuckDB and SQLite.
When using the pandas backend, Fugue directly translates SQL to pandas operations, so there is no data transfer at all. DuckDB has superb pandas support, so the overhead of data transfer is also negligible. Both Pandas and DuckDB are preferred FugueSQL backends for local data processing.
Fugue also has support for Spark, Dask, and cuDF (through blazingSQL) as backends.
Congratulations! You have just learned how to use FugueSQL as a SQL interface for operating on Python DataFrames. With FugueSQL, you can now use SQL syntax to express end-to-end data workflows and scale to distributed computing seamlessly!
This article does not exhaustively cover FugueSQL features. For more information about Fugue or FugueSQL, check the resources below.
Github Repo
FugueSQL Documentation
Fugue Slack
Feel free to play and fork the source code of this article here:
github.com
I like to write about basic data science concepts and play with different data science tools. You could connect with me on LinkedIn and Twitter.
Star this repo if you want to check out the codes for all of the articles I have written. Follow me on Medium to stay informed with my latest data science articles like these:
|
[
{
"code": null,
"e": 352,
"s": 171,
"text": "As a data scientist, you might be familiar with both Pandas and SQL. However, there might be some queries, transformations that you feel comfortable doing in SQL instead of Python."
},
{
"code": null,
"e": 420,
"s": 352,
"text": "Wouldn’t it be nice if you could query a pandas DataFrame like below:"
},
{
"code": null,
"e": 435,
"s": 420,
"text": "... using SQL?"
},
{
"code": null,
"e": 480,
"s": 435,
"text": "Or use a Python function within a SQL query?"
},
{
"code": null,
"e": 518,
"s": 480,
"text": "That is when FugueSQL comes in handy."
},
{
"code": null,
"e": 718,
"s": 518,
"text": "FugueSQL is a Python library that allows users to combine Python code and SQL commands. This gives users the flexibility to switch between Python and SQL within a Jupyter Notebook or a Python script."
},
{
"code": null,
"e": 745,
"s": 718,
"text": "To install FugueSQL, type:"
},
{
"code": null,
"e": 768,
"s": 745,
"text": "pip install fugue[sql]"
},
{
"code": null,
"e": 817,
"s": 768,
"text": "To run on Spark or Dask execution engines, type:"
},
{
"code": null,
"e": 898,
"s": 817,
"text": "pip install fugue[sql, spark]\npip install fugue[sql, dask]\npip install fugue[all]"
},
{
"code": null,
"e": 1014,
"s": 898,
"text": "In this article, we will explore some utilities of FugueSQL and compare FugueSQL with other tools such as pandasql."
},
{
"code": null,
"e": 1141,
"s": 1014,
"text": "FugueSQL comes with a Jupyter notebook extension that allows users to interactively query DataFrames with syntax highlighting."
},
{
"code": null,
"e": 1315,
"s": 1141,
"text": "To use it, import the setup function from fugue_notebook to register the %%fsql cell magic. This is only available on classic notebooks for now (not available on JupyterLab)."
},
{
"code": null,
"e": 1401,
"s": 1315,
"text": "To understand how the %%fsql cell magic works, let’s start with creating a pandas DataFrame:"
},
{
"code": null,
"e": 1509,
"s": 1401,
"text": "Now, you can query like how you would normally do in SQL by adding the %%fsql at the beginning of the cell."
},
{
"code": null,
"e": 1663,
"s": 1509,
"text": "In the code above, only PRINT does not follow standard SQL. This is similar to the pandas head() and Spark show() operations to display a number of rows."
},
{
"code": null,
"e": 1727,
"s": 1663,
"text": "Operations such as GROUP BY are similar to standard SQL syntax."
},
{
"code": null,
"e": 1937,
"s": 1727,
"text": "For SQL users, nothing shown above is out of the ordinary except for the PRINT statement. However, Fugue also adds some enhancements to standard SQL, allowing it to handle end-to-end data workflows gracefully."
},
{
"code": null,
"e": 2143,
"s": 1937,
"text": "SQL users often have to use temp tables or common table expressions (CTE) to hold intermediate transformations. Luckily, FugueSQL supports the creation of intermediate tables through a variable assignment."
},
{
"code": null,
"e": 2293,
"s": 2143,
"text": "For example, after transforming df , we can assign it to another variable called df2 and save df2 to a file using SAVE variable OVERWRITE file_name ."
},
{
"code": null,
"e": 2397,
"s": 2293,
"text": "Now, if we want to apply more transformation to df2 , simply load it from the file we saved previously."
},
{
"code": null,
"e": 2420,
"s": 2397,
"text": "Pretty cool, isn’t it?"
},
{
"code": null,
"e": 2586,
"s": 2420,
"text": "SQL’s grammar is meant for querying, which means that it lacks keywords to manipulate data. FugueSQL adds some keywords for common DataFrame operations. For example:"
},
{
"code": null,
"e": 2591,
"s": 2586,
"text": "DROP"
},
{
"code": null,
"e": 2609,
"s": 2591,
"text": "FILL NULLS PARAMS"
},
{
"code": null,
"e": 2616,
"s": 2609,
"text": "SAMPLE"
},
{
"code": null,
"e": 2680,
"s": 2616,
"text": "For a full list of operators, check the FugueSQL operator docs."
},
{
"code": null,
"e": 2766,
"s": 2680,
"text": "FugueSQL also allows you to use Python functions within a SQL query using TRANSFORM ."
},
{
"code": null,
"e": 2826,
"s": 2766,
"text": "For example, to use the function str_concat in a SQL query:"
},
{
"code": null,
"e": 2883,
"s": 2826,
"text": "... simply add the following components to the function:"
},
{
"code": null,
"e": 2917,
"s": 2883,
"text": "Output schema hint (as a comment)"
},
{
"code": null,
"e": 2950,
"s": 2917,
"text": "Type annotations (pd.DataFrame )"
},
{
"code": null,
"e": 2999,
"s": 2950,
"text": "Cool! Now we are ready to add it to a SQL query:"
},
{
"code": null,
"e": 3204,
"s": 2999,
"text": "One of the beautiful properties of SQL is that it is agnostic to the size of the data. The logic is expressed in a scale-agnostic manner and will remain the same even if running on Pandas, Spark, or Dask."
},
{
"code": null,
"e": 3417,
"s": 3204,
"text": "With FugueSQL, we can apply the same logic on the Spark execution engine just by specifying %%fsql spark. We don’t even need to edit the str_concat function to bring it to Spark as Fugue takes care of porting it."
},
{
"code": null,
"e": 3634,
"s": 3417,
"text": "One of the important parts of distributed computing is partitioning. For example, to get the median value in each logical group, the data needs to be partitioned such that each logical group lives on the same worker."
},
{
"code": null,
"e": 3905,
"s": 3634,
"text": "To describe this, FugueSQL has the PREPARTITION BY keyword. Fugue’s prepartition-transform semantics are equivalent to the pandas groupby-apply. The only difference is that prepartition-transform scales to the distributed setting as it dictates the location of the data."
},
{
"code": null,
"e": 4132,
"s": 3905,
"text": "Note that the get_median function above gets called once for each distinct value in the column col2 . Because the data is partitioned beforehand, we can just pull the first value of col2 to know what group we are working with."
},
{
"code": null,
"e": 4333,
"s": 4132,
"text": "To bring FugueSQL out of Jupyter notebooks and into Python scripts, all we need to do is wrap the FugueSQL query inside a fsql class. We can then call the .run() method and choose an execution engine."
},
{
"code": null,
"e": 4456,
"s": 4333,
"text": "If you know pandasql, you might wonder: Why should you use FugueSQL if pandasql already allows you to run SQL with pandas?"
},
{
"code": null,
"e": 4659,
"s": 4456,
"text": "pandasql has a single backend, SQLite. It introduces a large overhead to transfer data between pandas and SQLite. On the other hand, FugueSQL supports multiple local backends: pandas, DuckDB and SQLite."
},
{
"code": null,
"e": 4948,
"s": 4659,
"text": "When using the pandas backend, Fugue directly translates SQL to pandas operations, so there is no data transfer at all. DuckDB has superb pandas support, so the overhead of data transfer is also negligible. Both Pandas and DuckDB are preferred FugueSQL backends for local data processing."
},
{
"code": null,
"e": 5031,
"s": 4948,
"text": "Fugue also has support for Spark, Dask, and cuDF (through blazingSQL) as backends."
},
{
"code": null,
"e": 5271,
"s": 5031,
"text": "Congratulations! You have just learned how to use FugueSQL as a SQL interface for operating on Python DataFrames. With FugueSQL, you can now use SQL syntax to express end-to-end data workflows and scale to distributed computing seamlessly!"
},
{
"code": null,
"e": 5404,
"s": 5271,
"text": "This article does not exhaustively cover FugueSQL features. For more information about Fugue or FugueSQL, check the resources below."
},
{
"code": null,
"e": 5416,
"s": 5404,
"text": "Github Repo"
},
{
"code": null,
"e": 5439,
"s": 5416,
"text": "FugueSQL Documentation"
},
{
"code": null,
"e": 5451,
"s": 5439,
"text": "Fugue Slack"
},
{
"code": null,
"e": 5516,
"s": 5451,
"text": "Feel free to play and fork the source code of this article here:"
},
{
"code": null,
"e": 5527,
"s": 5516,
"text": "github.com"
},
{
"code": null,
"e": 5672,
"s": 5527,
"text": "I like to write about basic data science concepts and play with different data science tools. You could connect with me on LinkedIn and Twitter."
}
] |
How to create a password entry field using Tkinter?
|
Let us suppose we want to add an Entry widget that accepts user passwords. Generally, passwords are displayed using “*”, which masks the input so the user credentials are not visible on screen.
We can create a password field using tkinter Entry widget.
In this example, we have created an application window that will accept the user password and a button to close the window.
#Import the required libraries
from tkinter import *
#Create an instance of tkinter frame
win= Tk()
#Set the geometry of frame
win.geometry("600x250")
def close_win():
win.destroy()
#Create a text label
Label(win,text="Enter the Password", font=('Helvetica',20)).pack(pady=20)
#Create Entry Widget for password
password= Entry(win,show="*",width=20)
password.pack()
#Create a button to close the window
Button(win, text="Quit", font=('Helvetica bold',
10),command=close_win).pack(pady=20)
win.mainloop()
Running the above code will display a window with an entry field that accepts passwords and a button to close the window.
Now, enter the password and click the “Quit” button to close the window.
|
[
{
"code": null,
"e": 1251,
"s": 1062,
"text": "Let us suppose we want to add an Entry widget that accepts user passwords. Generally, passwords are displayed using “*”, which masks the input so the user credentials are not visible on screen."
},
{
"code": null,
"e": 1310,
"s": 1251,
"text": "We can create a password field using tkinter Entry widget."
},
{
"code": null,
"e": 1434,
"s": 1310,
"text": "In this example, we have created an application window that will accept the user password and a button to close the window."
},
{
"code": null,
"e": 1948,
"s": 1434,
"text": "#Import the required libraries\nfrom tkinter import *\n\n#Create an instance of tkinter frame\nwin= Tk()\n\n#Set the geometry of frame\nwin.geometry(\"600x250\")\n\ndef close_win():\n win.destroy()\n\n#Create a text label\nLabel(win,text=\"Enter the Password\", font=('Helvetica',20)).pack(pady=20)\n\n#Create Entry Widget for password\npassword= Entry(win,show=\"*\",width=20)\npassword.pack()\n\n#Create a button to close the window\nButton(win, text=\"Quit\", font=('Helvetica bold',\n10),command=close_win).pack(pady=20)\n\nwin.mainloop()"
},
{
"code": null,
"e": 2070,
"s": 1948,
"text": "Running the above code will display a window with an entry field that accepts passwords and a button to close the window."
},
{
"code": null,
"e": 2143,
"s": 2070,
"text": "Now, enter the password and click the “Quit” button to close the window."
}
] |
If your Python code throws errors, check these things first | by Ari Joury | Towards Data Science
|
Fail fast, fail early — we’ve all heard the motto. Still, it’s frustrating when you’ve written a beautiful piece of code, just to realize that it doesn’t work as you’d expected.
That’s where unit tests come in. Checking each piece of your code helps you localize and fix your bugs.
But not all bugs are created the same. Some bugs are unexpected, not obvious to see at all, and hard to fix even for experienced developers. These are more likely to occur in large and complex projects, and spotting them early can save you a ton of time later on.
Other bugs are trivial, like when you’ve forgotten a closing bracket or messed up some indentations. They’re easy to fix, but hard to spot, especially when you’ve been working on the code for a while or when it’s late at night.
Once you’ve spotted a bug like this, it’s a bit of a facepalm-moment. You could kick yourself for not having seen it earlier — and you wonder why you did such a stupid mistake in the first place. They’re also not the type of bug that you’d want your colleagues to spot before you do.
towardsdatascience.com
I can’t claim that this list covers all silly mistakes that you’ll ever make. However, using it regularly should at least help you eliminate the most common ones.
It happens to everyone — you happily code away, and in the flow of it you forget to close that array, argument list, or whatever you’re dealing with.
Some developers type a closing brace as soon as they open one, and then fill the space in between. Most modern IDEs also close braces automatically — so if forgetting braces is a chronic disease of yours, you might consider leaving your old IDE behind.
Hey, it happens to the best of us. You’re constructing a new class, and since it’s complex, you’re already thinking about the contents of the class while you’re typing the code. And whoops, you’ve forgotten that colon at the end:
class SillyMistake()
    def AvoidThis():
        print("Don't do silly mistakes!")
A good rule of thumb is that if you’re increasing the indent of a line, you’ll need to add a colon to the line that comes before it.
When do you use =, and when do you use ==? As a rule of thumb, if you’re checking or comparing two values, you’ll use ==. On the other hand, you’ll use = if you’re assigning a value to a variable.
Have a look at this:
def NotRecommendedCode(item, new=[]):
    new.append(item)
This isn’t wrong per se, but you’ll run into trouble if you don’t watch out. Defining the array [] in the argument list as mutable means that it’ll be modified for future calls.
But you probably want to start with an empty array every time. This is better:
def RecommendedCode(item, new=None):
    if new is None:
        new = []
    new.append(item)
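A small runnable demonstration of the difference between the two patterns (plain Python, with hypothetical function names):

```python
# The default list in buggy() is created once, at definition time,
# so every call without an explicit `new` keeps appending to it.
def buggy(item, new=[]):
    new.append(item)
    return new

# The None sentinel forces a fresh list on each call.
def fixed(item, new=None):
    if new is None:
        new = []
    new.append(item)
    return new

print(buggy(1), buggy(2))   # the second call still sees the earlier 1
print(fixed(1), fixed(2))   # each call starts from an empty list
```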
This is trivial as hell but happens all the time. Say you’ve defined the variable CamelBucket, and later in the code you call camelbucket.
Won’t work, huh? Chances are, you know very well that Python is case-sensitive, but you just forgot to press that shift key.
Most modern IDEs can help you avoid this mistake by making smart suggestions. So if you’re prone to typos, you might consider upgrading your text editor.
towardsdatascience.com
This has probably happened to every junior developer out there: you’ve built a list, and now you want to change a few things. No big deal, right?
Wrong. Consider this:
mylist = [i for i in range(10)]
for i in range(len(mylist)):
    if i % 2 == 0:
        del mylist[i]
Spoiler alert: this throws an error because you end up iterating over items in a list that don’t exist any more. Instead of deleting from an existing list, consider writing to a new one.
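One safe alternative is to build a new list instead of deleting in place — a minimal sketch of what the loop was trying to do (drop the even-indexed elements):

```python
mylist = [i for i in range(10)]

# Keep only the elements whose index is odd, without mutating mylist
# while iterating over it.
kept = [x for i, x in enumerate(mylist) if i % 2 != 0]
print(kept)   # the even-indexed values are gone
```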
This is so trivial, but also frustratingly common: in one file, let’s say bug.py, you write import anotherbug. In another file, anotherbug.py, you write import bug.
This isn’t going to work — how should the computer know which file to include in which? You probably knew this but did it by accident. No worries, but fix it asap!
Another thing regarding modules: Python comes with a wealth of amazing library modules. But if you need to create your own, be sure to give them an original name.
For example, defining a module called numpy.py is going to lead to confusion. If you’re in doubt which names you can use, there’s a complete list of Python’s standard libraries.
You learn from your mistakes. As a beginner-level developer, you might still be making mistakes like these every day. But as time moves on, they become less and less.
You can also take it to the next level and adopt some healthy habits so you never make trivial mistakes. These are just a few tips of many, but they can help you avoid many bugs.
Python code works perfectly if you define a variable and don’t assign a value until later. It therefore seems cumbersome to initialize them every time upon definition.
But don’t forget that you’re human, and as such you’re prone to losing track of which variable has got assigned and which one hasn’t. Also, initializing variables forces you to think about what type of variable you’re dealing with, which might prevent bugs down the line.
That’s why many seasoned developers initialize every variable with default values such as 0, None, [], "", and so on.
Of course, if you’re just trying to tie up a little standalone script and not working on a mammoth-sized project, you can allow yourself to do things quick and sloppy. But remember to be diligent when things get complex.
Braces again. Somehow they’re just not compatible with the sloppiness of human brains.
In Python, you’ll always need to call a function like this: callingthisfunction(), and not like this: notcallingthisfunction, whether there are any arguments or not. Sounds trivial, but isn’t always!
For example, if you have a file called file and you want to close it in Python, you’ll write file.close(). The code will run without throwing an error, though, if you write file.close — except that it won’t close the file. Try to locate a mistake like that in a project with thousands of lines of code...
It becomes pretty automatic after a few days of practicing, so start today: functions are always called with braces!
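This is easy to demonstrate with a temporary file — referencing close without parentheses is a perfectly valid expression that simply does nothing:

```python
import tempfile

f = tempfile.TemporaryFile()

f.close        # just evaluates the bound method -- no error, file stays open
still_open = not f.closed

f.close()      # actually closes the file
now_closed = f.closed

print(still_open, now_closed)
```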
Again, this is rather trivial but often forgotten by beginners. If you’re calling a module, you’ll never use an extension, regardless whether it’s from the Python standard library or elsewhere.
So watch out for lines like:
import numpy.py # don't do that!!!
Delete that .py extension and avoid it like stink. Extensions and imports don’t work together.
Again, you probably know yourself that you need to index the same way throughout a file. But it’s still a common pitfall when you’re collaborating with others or when it’s late at night.
Before you start coding, decide whether you use tabs or spaces, and how many spaces. The most common convention is four spaces, but you can do whatever suits you best — as long as you keep it up throughout the document.
If, on the other hand, you use spaces in one line and tabs in another, sometimes Python doesn’t treat that like you’d expected. I know it seems like a dumb job, but try to be as diligent as you can about that.
towardsdatascience.com
Fail fast, fail early — but don’t fail stupidly. Everybody makes mistakes, but still, it’s better to avoid the stupid ones, or at least fix them quickly.
It’s not only about making progress. It’s also about avoiding facepalm-moments where you kick yourself for not having spotted a bug earlier.
And it’s about not losing your reputation — who wants to run to a colleague or a manager with a seemingly complicated problem, only to find out that it was really easy to fix?
Chances are that you’ve done some of the mistakes I’ve mentioned in the past. Chill, buddy, I have, too — we’re human.
The point is, if your code behaves in unexpected ways, you’ll want to check for the trivial things first. Then, if your code still doesn’t work properly, you can always ping your colleague.
|
[
{
"code": null,
"e": 349,
"s": 171,
"text": "Fail fast, fail early — we’ve all heard the motto. Still, it’s frustrating when you’ve written a beautiful piece of code, just to realize that it doesn’t work as you’d expected."
},
{
"code": null,
"e": 453,
"s": 349,
"text": "That’s where unit tests come in. Checking each piece of your code helps you localize and fix your bugs."
},
{
"code": null,
"e": 717,
"s": 453,
"text": "But not all bugs are created the same. Some bugs are unexpected, not obvious to see at all, and hard to fix even for experienced developers. These are more likely to occur in large and complex projects, and spotting them early can save you a ton of time later on."
},
{
"code": null,
"e": 945,
"s": 717,
"text": "Other bugs are trivial, like when you’ve forgotten a closing bracket or messed up some indentations. They’re easy to fix, but hard to spot, especially when you’ve been working on the code for a while or when it’s late at night."
},
{
"code": null,
"e": 1229,
"s": 945,
"text": "Once you’ve spotted a bug like this, it’s a bit of a facepalm-moment. You could kick yourself for not having seen it earlier — and you wonder why you did such a stupid mistake in the first place. They’re also not the type of bug that you’d want your colleagues to spot before you do."
},
{
"code": null,
"e": 1252,
"s": 1229,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1415,
"s": 1252,
"text": "I can’t claim that this list covers all silly mistakes that you’ll ever make. However, using it regularly should at least help you eliminate the most common ones."
},
{
"code": null,
"e": 1565,
"s": 1415,
"text": "It happens to everyone — you happily code away, and in the flow of it you forget to close that array, argument list, or whatever you’re dealing with."
},
{
"code": null,
"e": 1818,
"s": 1565,
"text": "Some developers type a closing brace as soon as they open one, and then fill the space in between. Most modern IDEs also close braces automatically — so if forgetting braces is a chronic disease of yours, you might consider leaving your old IDE behind."
},
{
"code": null,
"e": 2046,
"s": 1818,
"text": "Hey, it happens to the best of us. You’re constructing a new class, and since it’s complex, you’re already thinking about the contents of the class while your typing the code. And whoops, you’ve forgotten that colon at the end:"
},
{
"code": null,
"e": 2129,
"s": 2046,
"text": "class SillyMistake(): def AvoidThis(): print(\"Don't do silly mistakes!\")"
},
{
"code": null,
"e": 2262,
"s": 2129,
"text": "A good rule of thumb is that if you’re increasing the indent of a line, you’ll need to add a colon to the line that comes before it."
},
{
"code": null,
"e": 2459,
"s": 2262,
"text": "When do you use =, and when do you use ==? As a rule of thumb, if you’re checking or comparing two values, you’ll use ==. On the other hand, you’ll use = if you’re assigning a value to a variable."
},
{
"code": null,
"e": 2480,
"s": 2459,
"text": "Have a look at this:"
},
{
"code": null,
"e": 2538,
"s": 2480,
"text": "def NotRecommendedCode(item, new=[]): new.append(item)"
},
{
"code": null,
"e": 2716,
"s": 2538,
"text": "This isn’t wrong per se, but you’ll run into trouble if you don’t watch out. Defining the array [] in the argument list as mutable means that it’ll be modified for future calls."
},
{
"code": null,
"e": 2795,
"s": 2716,
"text": "But you probably want to start with an empty array every time. This is better:"
},
{
"code": null,
"e": 2887,
"s": 2795,
"text": "def RecommendedCode(item, new=None): if new is None: new = [] new.append(item)"
},
{
"code": null,
"e": 3026,
"s": 2887,
"text": "This is trivial as hell but happens all the time. Say you’ve defined the variable CamelBucket, and later in the code you call camelbucket."
},
{
"code": null,
"e": 3151,
"s": 3026,
"text": "Won’t work, huh? Chances are, you know very well that Python is case-sensitive, but you just forgot to press that shift key."
},
{
"code": null,
"e": 3305,
"s": 3151,
"text": "Most modern IDEs can help you avoid this mistake by making smart suggestions. So if you’re prone to typos, you might consider upgrading your text editor."
},
{
"code": null,
"e": 3328,
"s": 3305,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 3474,
"s": 3328,
"text": "This has probably happened to every junior developer out there: you’ve built a list, and now you want to change a few things. No big deal, right?"
},
{
"code": null,
"e": 3496,
"s": 3474,
"text": "Wrong. Consider this:"
},
{
"code": null,
"e": 3584,
"s": 3496,
"text": "mylist = [i for i in range(10)]for i in range(len(mylist)): if i%2==0: del mylist[i]"
},
{
"code": null,
"e": 3772,
"s": 3584,
"text": "Spoiler alert: this throws an error because you end up iterating over items in a list that don’t exist any more. Instead of deleting from an exisiting list, consider writing to a new one."
},
{
"code": null,
"e": 3937,
"s": 3772,
"text": "This is so trivial, but also frustratingly common: in one file, let’s say bug.py, you write import anotherbug. In another file, anotherbug.py, you write import bug."
},
{
"code": null,
"e": 4101,
"s": 3937,
"text": "This isn’t going to work — how should the computer know which file to include in which? You probably knew this but did it by accident. No worries, but fix it asap!"
},
{
"code": null,
"e": 4264,
"s": 4101,
"text": "Another thing regarding modules: Python comes with a wealth of amazing library modules. But if you need to create your own, be sure to give them an original name."
},
{
"code": null,
"e": 4442,
"s": 4264,
"text": "For example, defining a module called numpy.py is going to lead to confusion. If you’re in doubt which names you can use, there’s a complete list of Python’s standard libraries."
},
{
"code": null,
"e": 4609,
"s": 4442,
"text": "You learn from your mistakes. As a beginner-level developer, you might still be making mistakes like these every day. But as time moves on, they become less and less."
},
{
"code": null,
"e": 4788,
"s": 4609,
"text": "You can also take it to the next level and adopt some healthy habits so you never make trivial mistakes. These are just a few tips of many, but they can help you avoid many bugs."
},
{
"code": null,
"e": 4956,
"s": 4788,
"text": "Python code works perfectly if you define a variable and don’t assign a value until later. It therefore seems cumbersome to initialize them every time upon definition."
},
{
"code": null,
"e": 5228,
"s": 4956,
"text": "But don’t forget that you’re human, and as such you’re prone to losing track of which variable has got assigned and which one hasn’t. Also, initializing variables forces you to think about what type of variable you’re dealing with, which might prevent bugs down the line."
},
{
"code": null,
"e": 5346,
"s": 5228,
"text": "That’s why many seasoned developers initialize every variable with default values such as 0, None, [], \"\", and so on."
},
{
"code": null,
"e": 5567,
"s": 5346,
"text": "Of course, if you’re just trying to tie up a little standalone script and not working on a mammoth-sized project, you can allow yourself to do things quick and sloppy. But remember to be diligent when things get complex."
},
{
"code": null,
"e": 5654,
"s": 5567,
"text": "Braces again. Somehow they’re just not compatible with the sloppiness of human brains."
},
{
"code": null,
"e": 5854,
"s": 5654,
"text": "In Python, you’ll always need to call a function like this: callingthisfunction(), and not like this: notcallingthisfunction, whether there are any arguments or not. Sounds trivial, but isn’t always!"
},
{
"code": null,
"e": 6159,
"s": 5854,
"text": "For example, if you have a file called file and you want to close it in Python, you’ll write file.close(). The code will run without throwing an error, though, if you write file.close — except that it won’t close the file. Try to locate a mistake like that in a project with thousands of lines of code..."
},
{
"code": null,
"e": 6276,
"s": 6159,
"text": "It becomes pretty automatic after a few days of practicing, so start today: functions are always called with braces!"
},
{
"code": null,
"e": 6470,
"s": 6276,
"text": "Again, this is rather trivial but often forgotten by beginners. If you’re calling a module, you’ll never use an extension, regardless whether it’s from the Python standard library or elsewhere."
},
{
"code": null,
"e": 6499,
"s": 6470,
"text": "So watch out for lines like:"
},
{
"code": null,
"e": 6542,
"s": 6499,
"text": "import numpy.py # don't do that!!!"
},
{
"code": null,
"e": 6637,
"s": 6542,
"text": "Delete that .py extension and avoid it like stink. Extensions and imports don’t work together."
},
{
"code": null,
"e": 6824,
"s": 6637,
"text": "Again, you probably know yourself that you need to index the same way throughout a file. But it’s still a common pitfall when you’re collaborating with others or when it’s late at night."
},
{
"code": null,
"e": 7044,
"s": 6824,
"text": "Before you start coding, decide whether you use tabs or spaces, and how many spaces. The most common convention is four spaces, but you can do whatever suits you best — as long as you keep it up throughout the document."
},
{
"code": null,
"e": 7254,
"s": 7044,
"text": "If, on the other hand, you use spaces in one line and tabs in another, sometimes Python doesn’t treat that like you’d expected. I know it seems like a dumb job, but try to be as diligent as you can about that."
},
{
"code": null,
"e": 7277,
"s": 7254,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 7431,
"s": 7277,
"text": "Fail fast, fail early — but don’t fail stupidly. Everybody makes mistakes, but still, it’s better to avoid the stupid ones, or at least fix them quickly."
},
{
"code": null,
"e": 7572,
"s": 7431,
"text": "It’s not only about making progress. It’s also about avoiding facepalm-moments where you kick yourself for not having spotted a bug earlier."
},
{
"code": null,
"e": 7748,
"s": 7572,
"text": "And it’s about not losing your reputation — who wants to run to a colleague or a manager with a seemingly complicated problem, only to find out that it was really easy to fix?"
},
{
"code": null,
"e": 7867,
"s": 7748,
"text": "Chances are that you’ve done some of the mistakes I’ve mentioned in the past. Chill, buddy, I have, too — we’re human."
}
] |
public static void main(String args) | Java main method Onlinetutorialspoint
|
The main method is the entry point of any core Java program. I say core Java program specifically because other Java programs, such as Servlets, applets, and Java-based frameworks, have their own life cycles and their own entry points.
For example, in Servlet programming there is no main method, but that does not mean there is no entry point for a servlet. The starting point of a servlet is init().
The Java main method is called by the main thread, which is created by the Java Virtual Machine (JVM). The program containing the main method runs as long as the main thread is alive; after the main thread completes, the program terminates.
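As a small sketch of that point (the class name MainThreadDemo is made up for this illustration), the thread the JVM starts main on is conventionally named "main":

```java
public class MainThreadDemo {

    // Returns the name of the thread this code is currently running on.
    static String threadName() {
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) {
        // When launched by the JVM, this typically prints: Running on thread: main
        System.out.println("Running on thread: " + threadName());
    }
}
```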
java ClassName
When we run the above command (the java command followed by the class name), the JVM looks for a main method with the default prototype.
Default Syntax for main method:
public static void main(String args[])
The main method should be public. The public keyword is an access modifier in Java; making the main method public means it can be accessed from anywhere, from any package. For instance, suppose the JVM is installed under the “C:\” directory while the Java program containing the main method lives in the “D:\” directory: because the main method is public, the JVM in “C:\” is allowed to access the main method in “D:\”.
static is a non-access modifier in Java. When we declare a variable, method, block, or nested class with the static modifier, it belongs to the class itself rather than to any particular instance or object, so static members are accessed directly through the class name. For the JVM to invoke it directly, the main method must be static: the JVM can then call main using only the class, without creating an instance of the class that contains it.
Whenever we declare a method with int or any return type other than void, the caller can capture that return value and do something with it. But the main method is called by the JVM, and the JVM does not expect any value back from it: there is nothing further for the JVM to do with a result once main completes. Hence the main method is declared void.
Inside the JVM, the entry-point method name is configured as main. The name does not inherently have to be main; if the JVM’s configuration for the entry point were changed, a different method name could be used.
The Java main method takes an array of Strings as a parameter. Through this parameter we can pass command-line arguments to main.
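As an illustration (a minimal sketch; the class name ArgsDemo is made up for this example), the arguments given on the command line arrive in that array:

```java
public class ArgsDemo {

    // Formats one argument with its index (kept separate so it is easy to test).
    static String format(int i, String arg) {
        return "arg[" + i + "] = " + arg;
    }

    public static void main(String[] args) {
        // Running "java ArgsDemo one two" prints:
        // arg[0] = one
        // arg[1] = two
        for (int i = 0; i < args.length; i++) {
            System.out.println(format(i, args[i]));
        }
    }
}
```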
public static void main(String args[])
public static void main(String[] args)
public static void main(String []args)
public static void main(String... args)
static public void main(String args[])
public static void main(int[] args) (this compiles as an overload, but the JVM will not call it as the entry point)
We can make the main method final.
We can make the main method synchronized.
We can make the main method strictfp (strict floating point).
We can overload the main method.
We can also redefine the main method in a subclass, but that is method hiding, not overriding.
Hence the following syntax is also applicable for the main method:
public static final synchronized strictfp void main(String args[]) {}
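A quick sketch of that claim (the class name ModifiedMain is invented here; the reflection helper just confirms the modifiers were applied):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class ModifiedMain {

    // All of these modifiers are legal on main; the JVM still accepts it
    // as the entry point.
    public static final synchronized strictfp void main(String[] args) {
        System.out.println("main with extra modifiers still runs");
    }

    // Returns whether this class's main is declared final, so the
    // claim can be checked via reflection.
    static boolean mainIsFinal() {
        try {
            Method m = ModifiedMain.class.getMethod("main", String[].class);
            return Modifier.isFinal(m.getModifiers());
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```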
We can overload the main method, and we can also call the main method explicitly if needed.
Example:
public class Parent {
public static void main(String[] args) {
        System.out.println("String args::");
}
/**
* Overloading main method
*/
public static void main(int[] args) {
System.out.println("int args::" + args[0] + "" + args[1] + "" + args[2]);
}
}
class Child {
public static void main(String[] args) {
int[] intArgs = {1, 2, 3};
Parent.main(intArgs);
}
}
By running the Child class we get the below output.
Output:
int args::123
If the main method is redefined in a subclass, it is termed method hiding (static methods are hidden, not overridden). We can hide the main method as shown below.
Example:
public class Parent {
public static void main(String[] args) {
System.out.println("In Parent main");
}
}
class Child extends Parent {
public static void main(String[] args) {
System.out.println("In Child main");
}
}
By running the Child class we get the below output.
Output:
In Child main
Happy Learning 🙂
|
[
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 658,
"s": 398,
"text": "The main method is the entry point of any core Java program. Here, I mention the core Java program specifically because, in all the other java programs like Servlets, applets and any Java-based frameworks, they have their own life cycles and own entry points."
},
{
"code": null,
"e": 833,
"s": 658,
"text": "For example in Servlet programming, we don’t have any main method, but it does not mean that there is no entry point for a servlet. The starting point of a servlet is init()."
},
{
"code": null,
"e": 1080,
"s": 833,
"text": "Java main method is called by the main Thread, which is created by the Java Virtual Machine(JVM). And the program containing the main method runs until the main thread is alive. After completion of the main thread, the program will be terminated."
},
{
"code": null,
"e": 1097,
"s": 1080,
"text": "java class name "
},
{
"code": null,
"e": 1227,
"s": 1097,
"text": "When we hit the above (class name followed by the “java”) command, the JVM checks for the main method with the default prototype."
},
{
"code": null,
"e": 1259,
"s": 1227,
"text": "Default Syntax for main method:"
},
{
"code": null,
"e": 1298,
"s": 1259,
"text": "public static void main(String args[])"
},
{
"code": null,
"e": 1759,
"s": 1298,
"text": "The main method should be public. The public keyword is the access modifier in java. By making the main method as public, the method can get access from anywhere, any package. For instance, consider in your system JVM is installed under the “C:\\” directory but the java program containing the main method is in the “D:\\” directory, by making the main method as public the JVM which is present in “C:\\” will be allowed to get access to the main method in “D:\\”."
},
{
"code": null,
"e": 2230,
"s": 1759,
"text": "Static is also one of the access modifiers in Java. When we declare a variable, method, block, class with static modifier those are all under class level access, and not for a specific instance or object. Hence the static functionalities are directly accessed by the class name. For direct accessing with JVM, the main method should be static. Hence JVM will call the main method directly by using the class only, without creating any instance of the main method class."
},
{
"code": null,
"e": 2704,
"s": 2230,
"text": "Whenever we declare a method with “int” or any return type except “void”, that means we can catch hold of that particular return value and make use of that value from the called method and do something on that value. But the main method is called by the JVM. JVM will not expect any value from the main method as a return value because there is no more functionality done by the JVM after getting the return value from the main, hence the main method is declared as “void”."
},
{
"code": null,
"e": 2952,
"s": 2704,
"text": "In JVM the main method name is configured as a main. It is not necessary to keep the main method name always as main. If we change the configuration of JVM with respect to the main method, then we can change the name of the main method as we want."
},
{
"code": null,
"e": 3088,
"s": 2952,
"text": "Java main method takes the array of Strings as a parameter. By using this parameter we can pass the command line arguments to the main."
},
{
"code": null,
"e": 3322,
"s": 3088,
"text": "\npublic static void main(String args[])\npublic static void main(String[] args)\npublic static void main(String []args)\npublic static void main(String... args)\nstatic public void main(String args[])\npublic static void main(int[] args)\n"
},
{
"code": null,
"e": 3361,
"s": 3322,
"text": "public static void main(String args[])"
},
{
"code": null,
"e": 3400,
"s": 3361,
"text": "public static void main(String[] args)"
},
{
"code": null,
"e": 3439,
"s": 3400,
"text": "public static void main(String []args)"
},
{
"code": null,
"e": 3479,
"s": 3439,
"text": "public static void main(String... args)"
},
{
"code": null,
"e": 3518,
"s": 3479,
"text": "static public void main(String args[])"
},
{
"code": null,
"e": 3554,
"s": 3518,
"text": "public static void main(int[] args)"
},
{
"code": null,
"e": 3820,
"s": 3554,
"text": "\nWe can make the main method as final.\nWe can make the main method synchronize.\nWe can make the main method as strictfp (strict floating point).\nWe can overload the main method.\nWe can also override the main method but it is termed as data hiding (not overriding).\n"
},
{
"code": null,
"e": 3858,
"s": 3820,
"text": "We can make the main method as final."
},
{
"code": null,
"e": 3899,
"s": 3858,
"text": "We can make the main method synchronize."
},
{
"code": null,
"e": 3964,
"s": 3899,
"text": "We can make the main method as strictfp (strict floating point)."
},
{
"code": null,
"e": 3997,
"s": 3964,
"text": "We can overload the main method."
},
{
"code": null,
"e": 4084,
"s": 3997,
"text": "We can also override the main method but it is termed as data hiding (not overriding)."
},
{
"code": null,
"e": 4142,
"s": 4084,
"text": "Hence the below syntax is applicable for the main method:"
},
{
"code": null,
"e": 4210,
"s": 4142,
"text": "final static synchronize strictfp public void main(String args[]){}"
},
{
"code": null,
"e": 4289,
"s": 4210,
"text": "We can overload the main method, and also we can call the main method if need."
},
{
"code": null,
"e": 4298,
"s": 4289,
"text": "Example:"
},
{
"code": null,
"e": 4730,
"s": 4298,
"text": "public class Parent {\n\n public static void main(String[] args) {\n System.out.println(\"Sring args::\");\n }\n\n /**\n * Overloading main method\n */\n public static void main(int[] args) {\n System.out.println(\"int args::\" + args[0] + \"\" + args[1] + \"\" + args[2]);\n }\n}\n\nclass Child {\n\n public static void main(String[] args) {\n int[] intArgs = {1, 2, 3};\n Parent.main(intArgs);\n }\n}"
},
{
"code": null,
"e": 4782,
"s": 4730,
"text": "By running the Child class we get the below output."
},
{
"code": null,
"e": 4790,
"s": 4782,
"text": "Output:"
},
{
"code": null,
"e": 4804,
"s": 4790,
"text": "int args::123"
},
{
"code": null,
"e": 4912,
"s": 4804,
"text": "If the main method is overridden it is termed as Method Hiding. We can hide the main method as shown below."
},
{
"code": null,
"e": 4921,
"s": 4912,
"text": "Example:"
},
{
"code": null,
"e": 5173,
"s": 4921,
"text": "public class Parent {\n\n public static void main(String[] args) {\n System.out.println(\"In Parent main\");\n }\n\n}\n\nclass Child extends Parent {\n\n public static void main(String[] args) {\n System.out.println(\"In Child main\");\n }\n}"
},
{
"code": null,
"e": 5225,
"s": 5173,
"text": "By running the Child class we get the below output."
},
{
"code": null,
"e": 5233,
"s": 5225,
"text": "Output:"
},
{
"code": null,
"e": 5247,
"s": 5233,
"text": "In Child main"
},
{
"code": null,
"e": 5264,
"s": 5247,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 5806,
"s": 5264,
"text": "\nJava Static Variable Method Block Class Example\nOverriding vs Overloading in Java\nJava String to int conversion Example\nJava Reflection Get Method Information\nTop 10 Exceptions in Java\nHow to Split String in Java example\nDefault Static methods in Interface Java 8\nFactory Method Pattern in Java\nString sorting in Java\nHow to check whether a String is a Balanced String or not ?\nHow to Convert Java Int to String\nJava String Binary Search Example\nJava instanceof Operator\nHow to Convert Java String to Int\nString in Switch in Java 7 Example\n"
},
{
"code": null,
"e": 5854,
"s": 5806,
"text": "Java Static Variable Method Block Class Example"
},
{
"code": null,
"e": 5888,
"s": 5854,
"text": "Overriding vs Overloading in Java"
},
{
"code": null,
"e": 5926,
"s": 5888,
"text": "Java String to int conversion Example"
},
{
"code": null,
"e": 5965,
"s": 5926,
"text": "Java Reflection Get Method Information"
},
{
"code": null,
"e": 5991,
"s": 5965,
"text": "Top 10 Exceptions in Java"
},
{
"code": null,
"e": 6027,
"s": 5991,
"text": "How to Split String in Java example"
},
{
"code": null,
"e": 6070,
"s": 6027,
"text": "Default Static methods in Interface Java 8"
},
{
"code": null,
"e": 6101,
"s": 6070,
"text": "Factory Method Pattern in Java"
},
{
"code": null,
"e": 6124,
"s": 6101,
"text": "String sorting in Java"
},
{
"code": null,
"e": 6184,
"s": 6124,
"text": "How to check whether a String is a Balanced String or not ?"
},
{
"code": null,
"e": 6218,
"s": 6184,
"text": "How to Convert Java Int to String"
},
{
"code": null,
"e": 6252,
"s": 6218,
"text": "Java String Binary Search Example"
},
{
"code": null,
"e": 6277,
"s": 6252,
"text": "Java instanceof Operator"
},
{
"code": null,
"e": 6311,
"s": 6277,
"text": "How to Convert Java String to Int"
},
{
"code": null,
"e": 6346,
"s": 6311,
"text": "String in Switch in Java 7 Example"
},
{
"code": null,
"e": 6508,
"s": 6346,
"text": "\n\n\n\n\n\nrajesh chaganti\nMay 13, 2015 at 5:57 pm - Reply \n\nCan you please explain about why main method declared as static as little eloborate..... please.....\n\n\n\n\n"
},
{
"code": null,
"e": 6668,
"s": 6508,
"text": "\n\n\n\n\nrajesh chaganti\nMay 13, 2015 at 5:57 pm - Reply \n\nCan you please explain about why main method declared as static as little eloborate..... please.....\n\n\n\n"
},
{
"code": null,
"e": 6769,
"s": 6668,
"text": "Can you please explain about why main method declared as static as little eloborate..... please....."
},
{
"code": null,
"e": 6775,
"s": 6773,
"text": "Δ"
},
{
"code": null,
"e": 6799,
"s": 6775,
"text": " Install Java on Mac OS"
},
{
"code": null,
"e": 6827,
"s": 6799,
"text": " Install AWS CLI on Windows"
},
{
"code": null,
"e": 6856,
"s": 6827,
"text": " Install Minikube on Windows"
},
{
"code": null,
"e": 6891,
"s": 6856,
"text": " Install Docker Toolbox on Windows"
},
{
"code": null,
"e": 6918,
"s": 6891,
"text": " Install SOAPUI on Windows"
},
{
"code": null,
"e": 6945,
"s": 6918,
"text": " Install Gradle on Windows"
},
{
"code": null,
"e": 6974,
"s": 6945,
"text": " Install RabbitMQ on Windows"
},
{
"code": null,
"e": 7000,
"s": 6974,
"text": " Install PuTTY on windows"
},
{
"code": null,
"e": 7026,
"s": 7000,
"text": " Install Mysql on Windows"
},
{
"code": null,
"e": 7062,
"s": 7026,
"text": " Install Hibernate Tools in Eclipse"
},
{
"code": null,
"e": 7096,
"s": 7062,
"text": " Install Elasticsearch on Windows"
},
{
"code": null,
"e": 7122,
"s": 7096,
"text": " Install Maven on Windows"
},
{
"code": null,
"e": 7147,
"s": 7122,
"text": " Install Maven on Ubuntu"
},
{
"code": null,
"e": 7181,
"s": 7147,
"text": " Install Maven on Windows Command"
},
{
"code": null,
"e": 7216,
"s": 7181,
"text": " Add OJDBC jar to Maven Repository"
},
{
"code": null,
"e": 7240,
"s": 7216,
"text": " Install Ant on Windows"
},
{
"code": null,
"e": 7269,
"s": 7240,
"text": " Install RabbitMQ on Windows"
},
{
"code": null,
"e": 7301,
"s": 7269,
"text": " Install Apache Kafka on Ubuntu"
},
{
"code": null,
"e": 7334,
"s": 7301,
"text": " Install Apache Kafka on Windows"
},
{
"code": null,
"e": 7359,
"s": 7334,
"text": " Java8 – Install Windows"
},
{
"code": null,
"e": 7376,
"s": 7359,
"text": " Java8 – foreach"
},
{
"code": null,
"e": 7404,
"s": 7376,
"text": " Java8 – forEach with index"
},
{
"code": null,
"e": 7435,
"s": 7404,
"text": " Java8 – Stream Filter Objects"
},
{
"code": null,
"e": 7467,
"s": 7435,
"text": " Java8 – Comparator Userdefined"
},
{
"code": null,
"e": 7487,
"s": 7467,
"text": " Java8 – GroupingBy"
},
{
"code": null,
"e": 7507,
"s": 7487,
"text": " Java8 – SummingInt"
},
{
"code": null,
"e": 7531,
"s": 7507,
"text": " Java8 – walk ReadFiles"
},
{
"code": null,
"e": 7561,
"s": 7531,
"text": " Java8 – JAVA_HOME on Windows"
},
{
"code": null,
"e": 7593,
"s": 7561,
"text": " Howto – Install Java on Mac OS"
},
{
"code": null,
"e": 7629,
"s": 7593,
"text": " Howto – Convert Iterable to Stream"
},
{
"code": null,
"e": 7673,
"s": 7629,
"text": " Howto – Get common elements from two Lists"
},
{
"code": null,
"e": 7705,
"s": 7673,
"text": " Howto – Convert List to String"
},
{
"code": null,
"e": 7746,
"s": 7705,
"text": " Howto – Concatenate Arrays using Stream"
},
{
"code": null,
"e": 7783,
"s": 7746,
"text": " Howto – Remove duplicates from List"
},
{
"code": null,
"e": 7823,
"s": 7783,
"text": " Howto – Filter null values from Stream"
},
{
"code": null,
"e": 7852,
"s": 7823,
"text": " Howto – Convert List to Map"
},
{
"code": null,
"e": 7884,
"s": 7852,
"text": " Howto – Convert Stream to List"
},
{
"code": null,
"e": 7904,
"s": 7884,
"text": " Howto – Sort a Map"
},
{
"code": null,
"e": 7926,
"s": 7904,
"text": " Howto – Filter a Map"
},
{
"code": null,
"e": 7956,
"s": 7926,
"text": " Howto – Get Current UTC Time"
},
{
"code": null,
"e": 8007,
"s": 7956,
"text": " Howto – Verify an Array contains a specific value"
},
{
"code": null,
"e": 8043,
"s": 8007,
"text": " Howto – Convert ArrayList to Array"
},
{
"code": null,
"e": 8075,
"s": 8043,
"text": " Howto – Read File Line By Line"
},
{
"code": null,
"e": 8110,
"s": 8075,
"text": " Howto – Convert Date to LocalDate"
},
{
"code": null,
"e": 8133,
"s": 8110,
"text": " Howto – Merge Streams"
},
{
"code": null,
"e": 8180,
"s": 8133,
"text": " Howto – Resolve NullPointerException in toMap"
},
{
"code": null,
"e": 8205,
"s": 8180,
"text": " Howto -Get Stream count"
},
{
"code": null,
"e": 8249,
"s": 8205,
"text": " Howto – Get Min and Max values in a Stream"
}
] |
How to use the <base> tag to define the base URL for an HTML page?
|
The HTML <base> tag specifies the base URL/target for all relative URLs in a document. The tag goes inside the HTML <head>...</head> element. You set the base URL once in the header section of your page, and all subsequent relative links use that URL as their starting point.
The following are the attributes of the HTML <base> tag −
You can try to run the following code to learn about the HTML <base> tag
<!DOCTYPE html>
<html>
<head>
<title>HTML base Tag</title>
<base href = "http://www.tutorialspoint.com" />
</head>
<body>
HTML: <br><img src = "/images/html.gif" />
</body>
</html>
|
[
{
"code": null,
"e": 1337,
"s": 1062,
"text": "The HTML <base> tag is to display the base URL/target for all relative URLs. The tag comes inside the HTML <head>...</head> tag. You can set the base URL once at the top of your page in header section, and all subsequent relative links will use that URL as a starting point."
},
{
"code": null,
"e": 1395,
"s": 1337,
"text": "The following are the attributes of the HTML <base> tag −"
},
{
"code": null,
"e": 1468,
"s": 1395,
"text": "You can try to run the following code to learn about the HTML <base> tag"
},
{
"code": null,
"e": 1478,
"s": 1468,
"text": "Live Demo"
},
{
"code": null,
"e": 1689,
"s": 1478,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>HTML base Tag</title>\n <base href = \"http://www.tutorialspoint.com\" />\n </head>\n <body>\n HTML: <br><img src = \"/images/html.gif\" />\n </body>\n</html>"
}
] |
How to get the data type of a tensor in PyTorch?
|
A PyTorch tensor is homogenous, i.e., all the elements of a tensor are of the same data type. We can access the data type of a tensor using the ".dtype" attribute of the tensor. It returns the data type of the tensor.
Import the required library. In all the following Python examples, the required Python library is torch. Make sure you have already
installed it.
Create a tensor and print it.
Compute T.dtype. Here T is the tensor of which we want to get the data type.
Print the data type of the tensor.
The following Python program shows how to get the data type of a tensor.
# Import the library
import torch
# Create a tensor of random numbers of size 3x4
T = torch.randn(3,4)
print("Original Tensor T:\n", T)
# Get the data type of above tensor
data_type = T.dtype
# Print the data type of the tensor
print("Data type of tensor T:\n", data_type)
Original Tensor T:
tensor([[ 2.1768, -0.1328, 0.8155, -0.7967],
[ 0.1194, 1.0465, 0.0779, 0.9103],
[-0.1809, 1.8085, 0.8393, -0.2463]])
Data type of tensor T:
torch.float32
# Python program to get data type of a tensor
# Import the library
import torch
# Create a 1-D tensor from a list
T = torch.Tensor([1,2,3,4])
print("Original Tensor T:\n", T)
# Get the data type of above tensor
data_type = T.dtype
# Print the data type of the tensor
print("Data type of tensor T:\n", data_type)
Original Tensor T:
tensor([1., 2., 3., 4.])
Data type of tensor T:
torch.float32
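A dtype can also be requested explicitly when the tensor is created, and ".dtype" then reports the requested type rather than the default torch.float32. A small sketch (assuming PyTorch is installed):

```python
# Import the library
import torch

# Create an integer tensor by passing dtype explicitly
T = torch.tensor([1, 2, 3, 4], dtype=torch.int64)

# .dtype reflects the requested type instead of the default float32
data_type = T.dtype
print("Data type of tensor T:", data_type)
```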
|
[
{
"code": null,
"e": 1280,
"s": 1062,
"text": "A PyTorch tensor is homogenous, i.e., all the elements of a tensor are of the same data type. We can access the data type of a tensor using the \".dtype\" attribute of the tensor. It returns the data type of the tensor."
},
{
"code": null,
"e": 1426,
"s": 1280,
"text": "Import the required library. In all the following Python examples, the required Python library is torch. Make sure you have already\ninstalled it."
},
{
"code": null,
"e": 1572,
"s": 1426,
"text": "Import the required library. In all the following Python examples, the required Python library is torch. Make sure you have already\ninstalled it."
},
{
"code": null,
"e": 1602,
"s": 1572,
"text": "Create a tensor and print it."
},
{
"code": null,
"e": 1632,
"s": 1602,
"text": "Create a tensor and print it."
},
{
"code": null,
"e": 1709,
"s": 1632,
"text": "Compute T.dtype. Here T is the tensor of which we want to get the data type."
},
{
"code": null,
"e": 1786,
"s": 1709,
"text": "Compute T.dtype. Here T is the tensor of which we want to get the data type."
},
{
"code": null,
"e": 1821,
"s": 1786,
"text": "Print the data type of the tensor."
},
{
"code": null,
"e": 1856,
"s": 1821,
"text": "Print the data type of the tensor."
},
{
"code": null,
"e": 1929,
"s": 1856,
"text": "The following Python program shows how to get the data type of a tensor."
},
{
"code": null,
"e": 2205,
"s": 1929,
"text": "# Import the library\nimport torch\n\n# Create a tensor of random numbers of size 3x4\nT = torch.randn(3,4)\nprint(\"Original Tensor T:\\n\", T)\n\n# Get the data type of above tensor\ndata_type = T.dtype\n\n# Print the data type of the tensor\nprint(\"Data type of tensor T:\\n\", data_type)"
},
{
"code": null,
"e": 2396,
"s": 2205,
"text": "Original Tensor T:\ntensor([[ 2.1768, -0.1328, 0.8155, -0.7967],\n [ 0.1194, 1.0465, 0.0779, 0.9103],\n [-0.1809, 1.8085, 0.8393, -0.2463]])\nData type of tensor T:\ntorch.float32"
},
{
"code": null,
"e": 2725,
"s": 2396,
"text": "# Python program to get data type of a tensor\n# Import the library\nimport torch\n\n# Create a tensor of random numbers of size 3x4\nT = torch.Tensor([1,2,3,4])\nprint(\"Original Tensor T:\\n\", T)\n\n# Get the data type of above tensor\ndata_type = T.dtype\n\n# Print the data type of the tensor\nprint(\"Data type of tensor T:\\n\", data_type)"
},
{
"code": null,
"e": 2812,
"s": 2725,
"text": "Original Tensor T:\n tensor([1., 2., 3., 4.])\nData type of tensor T:\n torch.float32"
}
] |
How to get current foreground activity context in Android?
|
This example demonstrates how to get the current foreground activity context in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to src/MyApp.java
package app.tutorialspoint.com.sample;
import android.app.Activity;
import android.app.Application;
public class MyApp extends Application {
   private Activity mCurrentActivity = null;
   @Override
   public void onCreate() {
      super.onCreate();
   }
   public Activity getCurrentActivity() {
      return mCurrentActivity;
   }
   public void setCurrentActivity(Activity mCurrentActivity) {
      this.mCurrentActivity = mCurrentActivity;
   }
}
Step 3 − Add the following code to src/MyBaseActivity.java
package app.tutorialspoint.com.sample;
import android.app.Activity;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
public class MyBaseActivity extends AppCompatActivity {
   protected MyApp mMyApp;
   public void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      mMyApp = (MyApp) this.getApplicationContext();
   }
   protected void onResume() {
      super.onResume();
      mMyApp.setCurrentActivity(this);
   }
   protected void onPause() {
      clearReferences();
      super.onPause();
   }
   protected void onDestroy() {
      clearReferences();
      super.onDestroy();
   }
   private void clearReferences() {
      Activity currActivity = mMyApp.getCurrentActivity();
      if (this.equals(currActivity))
         mMyApp.setCurrentActivity(null);
   }
}
Step 4 − Add the following code to src/MainActivity.java
package app.tutorialspoint.com.sample;
import android.app.Activity;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
public class MainActivity extends AppCompatActivity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      Activity currentActivity = ((MyApp) getApplicationContext()).getCurrentActivity();
   }
}
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="app.tutorialspoint.com.sample">
   <uses-permission android:name="android.permission.CALL_PHONE" />
   <application
      android:name=".MyApp"
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
      <activity android:name=".MyBaseActivity" />
   </application>
</manifest>
|
[
{
"code": null,
"e": 1151,
"s": 1062,
"text": "This example demonstrate about How to get current foreground activity context in Android"
},
{
"code": null,
"e": 1280,
"s": 1151,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1330,
"s": 1280,
"text": "Step 2 − Add the following code to src/MyApp.java"
},
{
"code": null,
"e": 1794,
"s": 1330,
"text": "package app.tutorialspoint.com.sample ;\nimport android.app.Activity ;\nimport android.app.Application ;\npublic class MyApp extends Application {\n private Activity mCurrentActivity = null;\n @Override\n public void onCreate () {\n super .onCreate() ;\n }\n public Activity getCurrentActivity () {\n return mCurrentActivity ;\n }\n public void setCurrentActivity (Activity mCurrentActivity) {\n this . mCurrentActivity = mCurrentActivity ;\n }\n}"
},
{
"code": null,
"e": 1853,
"s": 1794,
"text": "Step 3 − Add the following code to src/MyBaseActivity.java"
},
{
"code": null,
"e": 2716,
"s": 1853,
"text": "package app.tutorialspoint.com.sample ;\nimport android.app.Activity ;\nimport android.os.Bundle ;\nimport android.support.v7.app.AppCompatActivity ;\npublic class MyBaseActivity extends AppCompatActivity {\n protected MyApp mMyApp ;\n public void onCreate (Bundle savedInstanceState) {\n super .onCreate(savedInstanceState) ;\n mMyApp = (MyApp) this .getApplicationContext() ;\n }\n protected void onResume () {\n super .onResume() ;\n mMyApp .setCurrentActivity( this ) ;\n }\n protected void onPause () {\n clearReferences() ;\n super .onPause() ;\n }\n protected void onDestroy () {\n clearReferences() ;\n super .onDestroy() ;\n }\n private void clearReferences () {\n Activity currActivity = mMyApp .getCurrentActivity() ;\n if ( this .equals(currActivity))\n mMyApp .setCurrentActivity( null ) ;\n }\n}"
},
{
"code": null,
"e": 2773,
"s": 2716,
"text": "Step 4 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 3240,
"s": 2773,
"text": "package app.tutorialspoint.com.sample ;\nimport android.app.Activity ;\nimport android.support.v7.app.AppCompatActivity ;\nimport android.os.Bundle ;\npublic class MainActivity extends AppCompatActivity {\n @Override\n protected void onCreate (Bundle savedInstanceState) {\n super .onCreate(savedInstanceState) ;\n setContentView(R.layout. activity_main ) ;\n Activity currentActivity = ((MyApp)\n getApplicationContext()).getCurrentActivity() ;\n }\n}"
},
{
"code": null,
"e": 3295,
"s": 3240,
"text": "Step 5 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 4164,
"s": 3295,
"text": "<? xml version= \"1.0\" encoding= \"utf-8\" ?>\n<manifest xmlns: android = \"http://schemas.android.com/apk/res/android\"\n package= \"app.tutorialspoint.com.sample\" >\n <uses-permission android :name= \"android.permission.CALL_PHONE\" />\n <application\n android :name= \".MyApp\"\n android :allowBackup= \"true\"\n android :icon= \"@mipmap/ic_launcher\"\n android :label= \"@string/app_name\"\n android :roundIcon= \"@mipmap/ic_launcher_round\"\n android :supportsRtl= \"true\"\n android :theme= \"@style/AppTheme\" >\n <activity android :name= \".MainActivity\" >\n <intent-filter>\n <action android :name= \"android.intent.action.MAIN\" />\n <category android :name= \"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n <activity android :name= \".MyBaseActivity\" />\n </application>\n</manifest>"
}
] |
Principal Component Analysis — Explained | by Soner Yıldırım | Towards Data Science
|
Data has become more valuable than ever with the tremendous advancement in data science. Real life datasets usually have many features (columns). Some of the features may be uninformative or correlated with other features. However, we may not know this beforehand so we tend to collect as much data as possible. In some cases, it can be possible to accomplish the task without using all the features. Due to computational and performance reasons, it is desired to do a task with less number of features if possible. Uninformative features do not provide any prediction power and also cause a computational burden. Let’s assume we are trying to predict the shooting accuracy of basketball players. The dataset includes distance to basket, angle of the direction, the position of the defender, the accuracy of previous shots and the color of the ball. It is glaringly obvious that the color of the ball has no relation with shooting accuracy, so we can just remove it. The cases in real life are not that obvious and we need to do some pre-processing to determine uninformative features. Correlation among features or between a feature and target variable can easily be calculated using software packages.
There are also some cases which have a high number of features in nature. For example, an image classification task with 8x8 pixel images has 64 features. We can find a way to represent these images with fewer features without losing a considerable amount of information. Depending on the field you work in, you may even encounter datasets with more than a thousand features. In such cases, reducing the number of features is a challenging yet very beneficial task.
As the number of features increases, the performance of a classifier starts to decrease after some point. More features result in more combinations that the model needs to learn in order to accurately predict the target. Therefore, with the same number of observations (rows), models tend to perform better on datasets with fewer features. Moreover, a high number of features increases the risk of overfitting.
There are two main methods to reduce the number of features. The first one is feature selection, which aims to find the most informative features or eliminate uninformative features. Feature selection can be done manually or using software tools. The second way is to derive new features from the existing ones while keeping as much information as possible. This process is called feature extraction or dimensionality reduction.
What do I mean by “keeping as much information as possible”? How do we measure the amount of information? The answer is variance which is a measure of how much a variable is spread out. If the variance of a variable (feature) is very low, it does not tell us much when building a model. The figure below shows the distribution of two variables, x and y. As you can see, x ranges from 1 to 6 while y values are in between 1 and 2. In this case, x has high variance. If these are the only two features to predict a target variable, the role of x in the prediction is much higher than y.
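The point about variance can be checked numerically. Here is a quick sketch with Python's standard library; the sample values are made up to mirror the figure described above (x spread over 1–6, y confined to 1–2):

```python
from statistics import pvariance

# Illustrative values: x spreads from 1 to 6, y stays between 1 and 2
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]

# Population variance measures how spread out each variable is
var_x = pvariance(x)
var_y = pvariance(y)
print(var_x, var_y)  # x varies far more than y
```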
Variation within the current datasets must be retained as much as possible while doing dimensionality reduction. There are many ways to do dimensionality reduction. In this post, I will cover one of the most widely used dimensionality reduction algorithms: Principal Component Analysis (PCA).
PCA is an unsupervised learning algorithm which finds the relations among features within a dataset. It is also widely used as a preprocessing step for supervised learning algorithms.
Note: PCA is a linear dimensionality reduction algorithm. There are also non-linear methods available.
We first need to shift the data points so that the center of data is at the origin. Although the positions of individual data points change, relative positions do not change. For example, the point with highest feature 1 value still has highest feature 1 value. Then, PCA fits a line to the data which minimizes the distances from data points to the line.
This red line is the new axis or first principal component (PC1). Most of the variance of a dataset can be explained by PC1. The second principal component is able to explain vertical variance with respect to PC1.
The short red line is the second principal component (PC2). The order of principal components is determined according to the fraction of the original dataset's variance they explain. It is clear that PC1 explains much more variance than PC2.
Then principal components and data points are rotated so that PC1 becomes new x axis and PC2 becomes new y axis. Relative positions of data points do not change. Principal components are orthogonal to each other and thus linearly independent.
The principal components are linear combinations of the features of original dataset.
The advantage of PCA is that a significant amount of the variance of the original dataset is retained using a much smaller number of features. Principal components are ordered according to the amount of variance they represent.
Let’s go over an example using scikit-learn. Scikit-learn is a machine learning library that provides simple and efficient tools for predictive data analysis.
To be consistent, I will use the datapoints that I have been showing since the beginning. It is a very simple example yet enough to grasp the concept.
We create a DataFrame using these datapoints and assign a class for each one.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'feature_a': [2, 1.5, 2, 2.5, 3, 2.5, 3.7, 2.8, 1.8, 3.3],
    'feature_b': [1, 1.2, 2, 1.5, 3, 2.4, 3.5, 2.8, 1.5, 2.5],
    'target': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'a', 'b']
})
So it is a binary classification task with two independent variables.
Before applying PCA, we need to standardize the data so that the mean of the datapoints is 0 and the variance is 1. Scikit-learn provides the StandardScaler class for this:
from sklearn.preprocessing import StandardScaler

df_features = df[['feature_a', 'feature_b']]
df_features = StandardScaler().fit_transform(df_features)
Then we create a PCA() object and fit the datapoints to it.
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
PCs = pca.fit_transform(df_features)
Then we create a new dataframe using principal components:
# Data visualization libraries
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

# Create DataFrame (a list keeps the column order, unlike a set)
df_new = pd.DataFrame(data=PCs, columns=['PC1', 'PC2'])
df_new['target'] = df['target']  # targets do not change
We can draw a scatter plot to see the new data points:
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot()
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
targets = ['a', 'b']
colors = ['r', 'b']
for target, color in zip(targets, colors):
    rows = df_new['target'] == target
    ax.scatter(df_new.loc[rows, 'PC1'], df_new.loc[rows, 'PC2'], c=color)
ax.legend(targets)
Let’s also draw the scatter plot of original data points so that you can clearly see how data points are transformed:
As you can see on the principal components graph, two classes can be separated using only PC1 instead of using both feature_a and feature_b. Therefore we can say that most of the variance is explained by PC1. To be exact, we can calculate how much each principal component explains the variance. Scikit-learn provides explained_variance_ratio_ method to calculate these amounts:
pca.explained_variance_ratio_
array([0.93606831, 0.06393169])
PC1 explains 93.6% of the variance and PC2 explains 6.4%.
Note: Principal components are a linear combination of original features.
This example is a very simple case but it explains the concept. When doing PCA on datasets with many more features, we just follow the same steps.
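For intuition, those same steps — center the data, find orthogonal directions of maximal variance, project — can be sketched with plain NumPy via an eigendecomposition of the covariance matrix. The function name and data below are my own illustration, not part of scikit-learn:

```python
import numpy as np

def pca_sketch(X, n_components):
    # Center the data so the origin is at the mean
    Xc = X - X.mean(axis=0)
    # Covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    # Eigenvectors of a symmetric matrix are orthogonal
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort components by explained variance, largest first
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratios = eigvals / eigvals.sum()
    # Project the centered data onto the principal components
    return Xc @ eigvecs[:, :n_components], ratios[:n_components]

# Correlated 2-D toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.3], [0.3, 0.5]])
PCs, ratios = pca_sketch(X, 2)
print(ratios)  # descending, sums to 1
```

Scikit-learn's PCA adds refinements (SVD instead of an explicit covariance matrix, sign conventions), but the ordering of components by explained variance is the same idea.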
Thank you for reading. Please let me know if you have any feedback.
Machine Learning
Naive Bayes Classifier — Explained
Logistic Regression — Explained
Support Vector Machine — Explained
Decision Trees and Random Forests — Explained
Gradient Boosted Decision Trees — Explained
Predicting Used Car Prices with Machine Learning
Data analysis
The Most Underrated Tool in Data Science: NumPy
Combining DataFrames Using Pandas
Handling Missing Values with Pandas
3 Useful Functionalities of Pandas
|
[
{
"code": null,
"e": 1376,
"s": 172,
"text": "Data has become more valuable than ever with the tremendous advancement in data science. Real life datasets usually have many features (columns). Some of the features may be uninformative or correlated with other features. However, we may not know this beforehand so we tend to collect as much data as possible. In some cases, it can be possible to accomplish the task without using all the features. Due to computational and performance reasons, it is desired to do a task with less number of features if possible. Uninformative features do not provide any prediction power and also cause a computational burden. Let’s assume we are trying to predict the shooting accuracy of basketball players. The dataset includes distance to basket, angle of the direction, the position of the defender, the accuracy of previous shots and the color of the ball. It is glaringly obvious that the color of the ball has no relation with shooting accuracy, so we can just remove it. The cases in real life are not that obvious and we need to do some pre-processing to determine uninformative features. Correlation among features or between a feature and target variable can easily be calculated using software packages."
},
{
"code": null,
"e": 1851,
"s": 1376,
"text": "There are also some cases which have a high number of features in nature. For example, an image classification task with 8x8 pixel images has 64 features. We can find a way to represent these images with less number of features without losing a considerable amount of information. Depending on the field you work in, you may even encounter datasets with more than a thousand features. In such cases, reducing the number of features is a challenging yet very beneficial task."
},
{
"code": null,
"e": 2266,
"s": 1851,
"text": "As the number of features increases, the performance of a classifier starts to decrease after some point. More features result in more combinations that the model needs to learn in order to accurately predict the target. Therefore, with same amount of observations (rows), models tend to perform better on datasets with less number of features. Moreover, a high number of features increase the risk of overfitting."
},
{
"code": null,
"e": 2693,
"s": 2266,
"text": "There are two main methods to reduce the number of features. The first one is feature selection which aims to find the most informative features or eliminate uninformative features. Feature selection can be done manually or using software tools. The second way is to derive new features from the existing ones with keeping as much information as possible. This process is called feature extraction or dimensionality reduction."
},
{
"code": null,
"e": 3278,
"s": 2693,
"text": "What do I mean by “keeping as much information as possible”? How do we measure the amount of information? The answer is variance which is a measure of how much a variable is spread out. If the variance of a variable (feature) is very low, it does not tell us much when building a model. The figure below shows the distribution of two variables, x and y. As you can see, x ranges from 1 to 6 while y values are in between 1 and 2. In this case, x has high variance. If these are the only two features to predict a target variable, the role of x in the prediction is much higher than y."
},
{
"code": null,
"e": 3570,
"s": 3278,
"text": "Variation within the current datasets must be retained as much as possible while doing dimensionality reduction. There are many ways to do dimensionality reduction. In this post, I will cover one of the most widely used dimensionality reduction algorithm: Principal Component Analysis (PCA)."
},
{
"code": null,
"e": 3754,
"s": 3570,
"text": "PCA is an unsupervised learning algorithm which finds the relations among features within a dataset. It is also widely used as a preprocessing step for supervised learning algorithms."
},
{
"code": null,
"e": 3857,
"s": 3754,
"text": "Note: PCA is a linear dimensionality reduction algorithm. There are also non-linear methods available."
},
{
"code": null,
"e": 4213,
"s": 3857,
"text": "We first need to shift the data points so that the center of data is at the origin. Although the positions of individual data points change, relative positions do not change. For example, the point with highest feature 1 value still has highest feature 1 value. Then, PCA fits a line to the data which minimizes the distances from data points to the line."
},
{
"code": null,
"e": 4427,
"s": 4213,
"text": "This red line is the new axis or first principal component (PC1). Most of the variance of a dataset can be explained by PC1. The second principle component is able to explain vertical variance with respect to PC1."
},
{
"code": null,
"e": 4665,
"s": 4427,
"text": "The sort red line is the second principal component (PC2). The order of principal components is determined according to the fraction of variance of original dataset they explain. It is clear that PC1 explains much more variance than PC2."
},
{
"code": null,
"e": 4908,
"s": 4665,
"text": "Then principal components and data points are rotated so that PC1 becomes new x axis and PC2 becomes new y axis. Relative positions of data points do not change. Principal components are orthogonal to each other and thus linearly independent."
},
{
"code": null,
"e": 4994,
"s": 4908,
"text": "The principal components are linear combinations of the features of original dataset."
},
{
"code": null,
"e": 5242,
"s": 4994,
"text": "The advantage of PCA is that a significant amount of variance of the original dataset is retained using much smaller number of features than the original dataset. Principal components are ordered according to the amount of variance they represent."
},
{
"code": null,
"e": 5401,
"s": 5242,
"text": "Let’s go over an example using scikit-learn. Scikit-learn is a machine learning library that provides simple and efficient tools for predictive data analysis."
},
{
"code": null,
"e": 5552,
"s": 5401,
"text": "To be consistent, I will use the datapoints that I have been showing since the beginning. It is a very simple example yet enough to grasp the concept."
},
{
"code": null,
"e": 5630,
"s": 5552,
"text": "We create a DataFrame using these datapoints and assign a class for each one."
},
{
"code": null,
"e": 5835,
"s": 5630,
"text": "import numpy as npimport pandas as pddf = pd.DataFrame({'feature_a':[2,1.5,2,2.5,3,2.5,3.7,2.8,1.8,3.3],'feature_b':[1,1.2,2,1.5,3,2.4,3.5,2.8,1.5,2.5],'target':['a','a','a','a','b','b','b','b','a','b']})"
},
{
"code": null,
"e": 5905,
"s": 5835,
"text": "So it is a binary classification task with two independent variables."
},
{
"code": null,
"e": 6105,
"s": 5905,
"text": "Before applying PCA, we need to standardize the data so that the mean of datapoints is 0 and the variance is 1. Scikit-learn provides StandardScaler() from sklearn.preprocessing import StandardScaler"
},
{
"code": null,
"e": 6254,
"s": 6105,
"text": "from sklearn.preprocessing import StandardScalerdf_features = df[['feature_a','feature_b']]df_features = StandardScaler().fit_transform(df_features)"
},
{
"code": null,
"e": 6314,
"s": 6254,
"text": "Then we use create a PCA() object and fit datapoints to it."
},
{
"code": null,
"e": 6413,
"s": 6314,
"text": "from sklearn.decomposition import PCApca = PCA(n_components=2)PCs = pca.fit_transform(df_features)"
},
{
"code": null,
"e": 6472,
"s": 6413,
"text": "Then we create a new dataframe using principal components:"
},
{
"code": null,
"e": 6697,
"s": 6472,
"text": "#Data visualization librariesimport seaborn as snsimport matplotlib.pyplot as plt%matplotlib inline#Create DataFramedf_new = pd.DataFrame(data=PCs, columns={'PC1','PC2'})df_new['target'] = df['target'] #targets do not change"
},
{
"code": null,
"e": 6752,
"s": 6697,
"text": "We can draw a scatter plot to see the new data points:"
},
{
"code": null,
"e": 7054,
"s": 6752,
"text": "fig = plt.figure(figsize = (8,4))ax = fig.add_subplot()ax.set_xlabel('PC1')ax.set_ylabel('PC2')targets = ['a', 'b']colors = ['r', 'b']for target, color in zip(targets,colors): rows = df_new['target'] == target ax.scatter(df_new.loc[rows, 'PC1'], df_new.loc[rows, 'PC2'], ax.legend(targets)"
},
{
"code": null,
"e": 7172,
"s": 7054,
"text": "Let’s also draw the scatter plot of original data points so that you can clearly see how data points are transformed:"
},
{
"code": null,
"e": 7551,
"s": 7172,
"text": "As you can see on the principal components graph, two classes can be separated using only PC1 instead of using both feature_a and feature_b. Therefore we can say that most of the variance is explained by PC1. To be exact, we can calculate how much each principal component explains the variance. Scikit-learn provides explained_variance_ratio_ method to calculate these amounts:"
},
{
"code": null,
"e": 7612,
"s": 7551,
"text": "pca.explained_variance_ratio_array([0.93606831, 0.06393169])"
},
{
"code": null,
"e": 7670,
"s": 7612,
"text": "PC1 explains 93.6% of the variance and PC2 explains 6.4%."
},
{
"code": null,
"e": 7744,
"s": 7670,
"text": "Note: Principal components are a linear combination of original features."
},
{
"code": null,
"e": 7891,
"s": 7744,
"text": "This example is a very simple case but it explains the concept. When doing PCA on datasets with many more features, we just follow the same steps."
},
{
"code": null,
"e": 7959,
"s": 7891,
"text": "Thank you for reading. Please let me know if you have any feedback."
},
{
"code": null,
"e": 7976,
"s": 7959,
"text": "Machine Learning"
},
{
"code": null,
"e": 8011,
"s": 7976,
"text": "Naive Bayes Classifier — Explained"
},
{
"code": null,
"e": 8043,
"s": 8011,
"text": "Logistic Regression — Explained"
},
{
"code": null,
"e": 8078,
"s": 8043,
"text": "Support Vector Machine — Explained"
},
{
"code": null,
"e": 8124,
"s": 8078,
"text": "Decision Trees and Random Forests — Explained"
},
{
"code": null,
"e": 8168,
"s": 8124,
"text": "Gradient Boosted Decision Trees — Explained"
},
{
"code": null,
"e": 8217,
"s": 8168,
"text": "Predicting Used Car Prices with Machine Learning"
},
{
"code": null,
"e": 8231,
"s": 8217,
"text": "Data analysis"
},
{
"code": null,
"e": 8279,
"s": 8231,
"text": "The Most Underrated Tool in Data Science: NumPy"
},
{
"code": null,
"e": 8313,
"s": 8279,
"text": "Combining DataFrames Using Pandas"
},
{
"code": null,
"e": 8349,
"s": 8313,
"text": "Handling Missing Values with Pandas"
}
] |
C program to detect tokens in a C program
|
Here, we will create a C program to detect tokens in a C program. This is called the lexical analysis phase of the compiler. The lexical analyzer is the part of the compiler that detects the tokens of the program and sends them to the syntax analyzer.
A token is the smallest entity of the code; it is either a keyword, identifier, constant, string literal, or symbol.
Examples of different types of tokens in C.
Keywords: for, if, include, etc
Identifier: variables, functions, etc
separators: ‘,’, ‘;’, etc
operators: ‘-’, ‘=’, ‘++’, etc
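The classification rules above can be prototyped in a few lines of Python before committing to the C implementation. The sets below are deliberately tiny and the names are my own; a real lexer would cover the full C keyword and operator lists:

```python
# Tiny sketch of token classification (not a full lexer)
KEYWORDS = {"for", "if", "include", "int", "while", "return"}
OPERATORS = {"-", "=", "++", "+", "*", "/"}
SEPARATORS = {",", ";", "(", ")", "{", "}"}

def classify(tok):
    if tok in KEYWORDS:
        return "keyword"
    if tok in OPERATORS:
        return "operator"
    if tok in SEPARATORS:
        return "separator"
    if tok.lstrip("-").isdigit():
        return "constant"
    if tok.isidentifier():
        return "identifier"
    return "unknown"

print([classify(t) for t in ["if", "x", "=", "y", "+", "1", ";"]])
```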
Program to detect tokens in a C program−
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
bool isValidDelimiter(char ch) {
if (ch == ' ' || ch == '+' || ch == '-' || ch == '*' ||
ch == '/' || ch == ',' || ch == ';' || ch == '>' ||
ch == '<' || ch == '=' || ch == '(' || ch == ')' ||
ch == '[' || ch == ']' || ch == '{' || ch == '}')
return (true);
return (false);
}
bool isValidOperator(char ch){
if (ch == '+' || ch == '-' || ch == '*' ||
ch == '/' || ch == '>' || ch == '<' ||
ch == '=')
return (true);
return (false);
}
// Returns 'true' if the string is a VALID IDENTIFIER.
bool isvalidIdentifier(char* str){
   // an identifier cannot start with a digit or a delimiter
   if ((str[0] >= '0' && str[0] <= '9') || isValidDelimiter(str[0]) == true)
      return (false);
   return (true);
}
bool isValidKeyword(char* str) {
if (!strcmp(str, "if") || !strcmp(str, "else") || !strcmp(str, "while") || !strcmp(str, "do") || !strcmp(str, "break") || !strcmp(str, "continue") || !strcmp(str, "int")
|| !strcmp(str, "double") || !strcmp(str, "float") || !strcmp(str, "return") || !strcmp(str, "char") || !strcmp(str, "case") || !strcmp(str, "char")
|| !strcmp(str, "sizeof") || !strcmp(str, "long") || !strcmp(str, "short") || !strcmp(str, "typedef") || !strcmp(str, "switch") || !strcmp(str, "unsigned")
|| !strcmp(str, "void") || !strcmp(str, "static") || !strcmp(str, "struct") || !strcmp(str, "goto"))
return (true);
return (false);
}
bool isValidInteger(char* str) {
int i, len = strlen(str);
if (len == 0)
return (false);
for (i = 0; i < len; i++) {
if (str[i] != '0' && str[i] != '1' && str[i] != '2'&& str[i] != '3' && str[i] != '4' && str[i] != '5'
&& str[i] != '6' && str[i] != '7' && str[i] != '8' && str[i] != '9' || (str[i] == '-' && i > 0))
return (false);
}
return (true);
}
bool isRealNumber(char* str) {
int i, len = strlen(str);
bool hasDecimal = false;
if (len == 0)
return (false);
for (i = 0; i < len; i++) {
if (str[i] != '0' && str[i] != '1' && str[i] != '2' && str[i] != '3' && str[i] != '4' && str[i] != '5' && str[i] != '6' && str[i] != '7' && str[i] != '8'
&& str[i] != '9' && str[i] != '.' || (str[i] == '-' && i > 0))
return (false);
if (str[i] == '.')
hasDecimal = true;
}
return (hasDecimal);
}
char* subString(char* str, int left, int right) {
int i;
char* subStr = (char*)malloc( sizeof(char) * (right - left + 2));
for (i = left; i <= right; i++)
subStr[i - left] = str[i];
subStr[right - left + 1] = '\0';
return (subStr);
}
void detectTokens(char* str) {
int left = 0, right = 0;
int length = strlen(str);
while (right <= length && left <= right) {
if (isValidDelimiter(str[right]) == false)
right++;
if (isValidDelimiter(str[right]) == true && left == right) {
if (isValidOperator(str[right]) == true)
printf("Valid operator : '%c'\n", str[right]);
right++;
left = right;
} else if (isValidDelimiter(str[right]) == true && left != right || (right == length && left != right)) {
char* subStr = subString(str, left, right - 1);
if (isValidKeyword(subStr) == true)
printf("Valid keyword : '%s'\n", subStr);
else if (isValidInteger(subStr) == true)
printf("Valid Integer : '%s'\n", subStr);
else if (isRealNumber(subStr) == true)
printf("Real Number : '%s'\n", subStr);
else if (isvalidIdentifier(subStr) == true
&& isValidDelimiter(str[right - 1]) == false)
printf("Valid Identifier : '%s'\n", subStr);
else if (isvalidIdentifier(subStr) == false
&& isValidDelimiter(str[right - 1]) == false)
printf("Invalid Identifier : '%s'\n", subStr);
left = right;
}
}
return;
}
int main(){
char str[100] = "float x = a + 1b; ";
printf("The Program is : '%s' \n", str);
printf("All Tokens are : \n");
detectTokens(str);
return (0);
}
The Program is : 'float x = a + 1b; '
All Tokens are :
Valid keyword : 'float'
Valid Identifier : 'x'
Valid operator : '='
Valid Identifier : 'a'
Valid operator : '+'
Invalid Identifier : '1b'
|
[
{
"code": null,
"e": 1311,
"s": 1062,
"text": "Here, we will create a c program to detect tokens in a C program. This is called the lexical analysis phase of the compiler. The lexical analyzer is the part of the compiler that detects the token of the program and sends it to the syntax analyzer."
},
{
"code": null,
"e": 1423,
"s": 1311,
"text": "Token is the smallest entity of the code, it is either a keyword, identifier, constant, string literal, symbol."
},
{
"code": null,
"e": 1467,
"s": 1423,
"text": "Examples of different types of tokens in C."
},
{
"code": null,
"e": 1594,
"s": 1467,
"text": "Keywords: for, if, include, etc\nIdentifier: variables, functions, etc\nseparators: ‘,’, ‘;’, etc\noperators: ‘-’, ‘=’, ‘++’, etc"
},
{
"code": null,
"e": 1635,
"s": 1594,
"text": "Program to detect tokens in a C program−"
},
{
"code": null,
"e": 1646,
"s": 1635,
"text": " Live Demo"
},
{
"code": null,
"e": 5785,
"s": 1646,
"text": "#include <stdbool.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\nbool isValidDelimiter(char ch) {\n if (ch == ' ' || ch == '+' || ch == '-' || ch == '*' ||\n ch == '/' || ch == ',' || ch == ';' || ch == '>' ||\n ch == '<' || ch == '=' || ch == '(' || ch == ')' ||\n ch == '[' || ch == ']' || ch == '{' || ch == '}')\n return (true);\n return (false);\n}\nbool isValidOperator(char ch){\n if (ch == '+' || ch == '-' || ch == '*' ||\n ch == '/' || ch == '>' || ch == '<' ||\n ch == '=')\n return (true);\n return (false);\n}\n// Returns 'true' if the string is a VALID IDENTIFIER.\nbool isvalidIdentifier(char* str){\n if (str[0] == '0' || str[0] == '1' || str[0] == '2' ||\n str[0] == '3' || str[0] == '4' || str[0] == '5' ||\n str[0] == '6' || str[0] == '7' || str[0] == '8' ||\n str[0] == '9' || isValidDelimiter(str[0]) == true)\n return (false);\n return (true);\n}\nbool isValidKeyword(char* str) {\n if (!strcmp(str, \"if\") || !strcmp(str, \"else\") || !strcmp(str, \"while\") || !strcmp(str, \"do\") || !strcmp(str, \"break\") || !strcmp(str, \"continue\") || !strcmp(str, \"int\")\n || !strcmp(str, \"double\") || !strcmp(str, \"float\") || !strcmp(str, \"return\") || !strcmp(str, \"char\") || !strcmp(str, \"case\") || !strcmp(str, \"char\")\n || !strcmp(str, \"sizeof\") || !strcmp(str, \"long\") || !strcmp(str, \"short\") || !strcmp(str, \"typedef\") || !strcmp(str, \"switch\") || !strcmp(str, \"unsigned\")\n || !strcmp(str, \"void\") || !strcmp(str, \"static\") || !strcmp(str, \"struct\") || !strcmp(str, \"goto\"))\n return (true);\n return (false);\n}\nbool isValidInteger(char* str) {\n int i, len = strlen(str);\n if (len == 0)\n return (false);\n for (i = 0; i < len; i++) {\n if (str[i] != '0' && str[i] != '1' && str[i] != '2'&& str[i] != '3' && str[i] != '4' && str[i] != '5'\n && str[i] != '6' && str[i] != '7' && str[i] != '8' && str[i] != '9' || (str[i] == '-' && i > 0))\n return (false);\n }\n return (true);\n}\nbool isRealNumber(char* str) 
{\n int i, len = strlen(str);\n bool hasDecimal = false;\n if (len == 0)\n return (false);\n for (i = 0; i < len; i++) {\n if (str[i] != '0' && str[i] != '1' && str[i] != '2' && str[i] != '3' && str[i] != '4' && str[i] != '5' && str[i] != '6' && str[i] != '7' && str[i] != '8'\n && str[i] != '9' && str[i] != '.' || (str[i] == '-' && i > 0))\n return (false);\n if (str[i] == '.')\n hasDecimal = true;\n }\n return (hasDecimal);\n}\nchar* subString(char* str, int left, int right) {\n int i;\n char* subStr = (char*)malloc( sizeof(char) * (right - left + 2));\n for (i = left; i <= right; i++)\n subStr[i - left] = str[i];\n subStr[right - left + 1] = '\\0';\n return (subStr);\n}\nvoid detectTokens(char* str) {\n int left = 0, right = 0;\n int length = strlen(str);\n while (right <= length && left <= right) {\n if (isValidDelimiter(str[right]) == false)\n right++;\n if (isValidDelimiter(str[right]) == true && left == right) {\n if (isValidOperator(str[right]) == true)\n printf(\"Valid operator : '%c'\\n\", str[right]);\n right++;\n left = right;\n } else if (isValidDelimiter(str[right]) == true && left != right || (right == length && left != right)) {\n char* subStr = subString(str, left, right - 1);\n if (isValidKeyword(subStr) == true)\n printf(\"Valid keyword : '%s'\\n\", subStr);\n else if (isValidInteger(subStr) == true)\n printf(\"Valid Integer : '%s'\\n\", subStr);\n else if (isRealNumber(subStr) == true)\n printf(\"Real Number : '%s'\\n\", subStr);\n else if (isvalidIdentifier(subStr) == true\n && isValidDelimiter(str[right - 1]) == false)\n printf(\"Valid Identifier : '%s'\\n\", subStr);\n else if (isvalidIdentifier(subStr) == false\n && isValidDelimiter(str[right - 1]) == false)\n printf(\"Invalid Identifier : '%s'\\n\", subStr);\n left = right;\n }\n }\n return;\n}\nint main(){\n char str[100] = \"float x = a + 1b; \";\n printf(\"The Program is : '%s' \\n\", str);\n printf(\"All Tokens are : \\n\");\n detectTokens(str);\n return (0);\n}"
},
{
"code": null,
"e": 5978,
"s": 5785,
"text": "The Program is : 'float x = a + 1b; '\nAll Tokens are :\nValid keyword : 'float'\nValid Identifier : 'x'\nValid operator : '='\nValid Identifier : 'a'\nValid operator : '+'\nInvalid Identifier : '1b'"
}
] |
Treatment Assignment Strategy in A/B Test | by Chong Han Khai | Towards Data Science
|
In my previous post, I gave a step-by-step process of planning an A/B Test but did not cover the implementation part. This week, I focus on one of the components of an A/B Test implementation: treatment assignment.
One of the requirements of a valid randomized controlled experiment is random treatment assignment. Despite its importance, this is unfortunately also the part that most people seem to not care too much about. Perhaps because the concept of random seems so easy, I just randomly assign my experiment subjects to the control and treatment groups and that’s it, right? In a business environment, the reality is actually far from this because there are some requirements that make randomness hard to achieve.
In this blog post, I will attempt to highlight some of these requirements, common strategies people use to fulfill them, problems that arise with these strategies and lastly a simple and elegant solution that fulfills all the requirements.
Treatment given to an experiment subject has to be consistent every time.
I will give a few examples here:
If Netflix wants to measure the effect of its new recommendation engine, then each user (experiment subject) should receive recommendations from the same recommendation engine as previous times.
If I want to test if a new mobile app design has a faster loading time, then each device (experiment subject) needs to load the same mobile app design every time.
If we randomly assign treatment every time when an experiment subject is exposed, then we will not be able to guarantee this. (e.g. user A might log in and see different treatments every time.)
For ease of explanation and the fact that most experiments are done using users as subjects, I am going to treat users like experiment subjects henceforth.
It is rather easy to fulfill the first requirement: we just log which user is given which treatment when they are first exposed to the experiment. On subsequent logins or exposures, we look up this user’s past treatment and assign the same treatment.
This may be feasible for a small company, but as you scale, you will find that this method is not very scalable at all. We can imagine that it is almost impossible to achieve this when we have tens of millions of users and hundreds of experiments.
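Sketched minimally, the log-and-look-up approach could look like the snippet below, where an in-memory dict stands in for the real assignment store (the function and variable names are my own, purely for illustration — in practice the log would be a database keyed by user and experiment).

```python
import random

def assign_with_lookup(user_id, assignment_log):
    """Return a consistent treatment for user_id, logging first-time assignments.

    assignment_log is a dict acting as the stored lookup table; in a real
    system this would be a database keyed by (user_id, experiment_id).
    """
    if user_id not in assignment_log:
        # First exposure: assign at random and record the choice.
        assignment_log[user_id] = random.choice(["control", "treatment"])
    # Every later exposure returns the recorded treatment.
    return assignment_log[user_id]
```

Calling the function twice with the same user ID always returns the same group, which is exactly why the table has to persist — and why this does not scale to millions of users and hundreds of experiments.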
In order to solve both of the issues above, one of the common strategies that companies use is to utilize the last character or digit of the user ID. In this case, treatment will be consistent and fully scalable because the treatment only depends on a fixed user ID.
This, however, poses another problem. The experiment, by itself, is indeed truly random. But as we do more and more experiments, we will find that the treatment assignments across different experiments are now correlated with each other.
Let’s use an example here.
+-------------------+-----------------+-------------------+-------+
|                   | Exp 1 : Control | Exp 1 : Treatment | Total |
+-------------------+-----------------+-------------------+-------+
| Exp 2 : Control   |            2107 |              2929 |  5036 |
| Exp 2 : Treatment |            2916 |              2048 |  4964 |
| Total             |            5023 |              4977 | 10000 |
+-------------------+-----------------+-------------------+-------+
Imagine if we have 10,000 users and we did two experiments. In the first experiment (Exp 1), we tested if a reduction in subscription fee increases the conversion rate of paid members and found a significant result. In the second experiment (Exp 2), we tested if an additional feature increases the number of purchases our users make and found that the effect is not statistically significant.
Here is why this might be the case:
Paid users make more purchases because of discounts.
More users in the treatment group in Exp 1 are assigned to the control group in Exp 2, which implies that the Exp 2 control group has more paid users.
Users in the Exp 2 control group will naturally make more purchases than the treatment group even if both groups are given the placebo.
Hence, even if the additional feature in Exp 2 did increase the number of purchases, we will not be able to measure the effect accurately because the treatment assignment in Exp 2 is correlated with the treatment assignment in Exp 1.
To be truly random, there should not be correlation between treatment assignment in different experiments.
To examine this, I performed the following simulation.
1. Generate 10,000 user IDs where each ID is a hex string of length 10 (Example: d3ef2942d7).
import pandas as pd
import random

exp_df = pd.DataFrame(['%010x' % random.randrange(16**10) for x in range(10000)], columns=['user_id'])
2. Create 20 experiments. For each of the experiments, randomly select 8 out of the 16 possible values (0 - 9, a - f) as control. If the last character of a user ID is in the control set, assign the user to the control group; otherwise, assign them to the treatment group.
import numpy as np

# number of experiments
j = 20
# get all 16 possible values
random_list = exp_df['user_id'].apply(lambda x: x[-1]).unique()
control_list = [set(np.random.choice(random_list, 8, replace=False)) for x in range(j)]
treatment_list = [set(random_list) - x for x in control_list]

for k in range(j):
    exp_df[f'exp_{k}'] = exp_df['user_id'].apply(lambda x: x[-1]).isin(control_list[k]).map({True: 'control', False: 'treatment'})
3. For each pair of experiments, generate a contingency table like the one shown in the example above and perform a chi-square test.
from itertools import combinations
from scipy import stats

# initialize list to store chi-square results for all experiment pairs
chi2_res = []
for cols in combinations(exp_df.columns[1:], 2):
    target_cols = list(cols)
    # generate contingency table
    aggregate_df = exp_df[target_cols]\
        .groupby(target_cols)\
        .size()\
        .to_frame()\
        .reset_index()\
        .pivot(index=target_cols[1], columns=target_cols[0])
    # store chi-square test result
    chi2_res.append(stats.chi2_contingency(aggregate_df)[1])

# number of pairs that fail the chi-square test at alpha = 0.01
print((np.array(chi2_res) < 0.01).sum())
Intuitively, if the treatment assignments are truly independent, we should expect to see the following (assuming 50% of the samples are assigned to control and 50% to treatment).
+-------------------+-----------------+-------------------+-------+
|                   | Exp 1 : Control | Exp 1 : Treatment | Total |
+-------------------+-----------------+-------------------+-------+
| Exp 2 : Control   |            2500 |              2500 |  5000 |
| Exp 2 : Treatment |            2500 |              2500 |  5000 |
| Total             |            5000 |              5000 | 10000 |
+-------------------+-----------------+-------------------+-------+
A chi-square test tells us how far the actual count is, from the independent case. The further away it is, the higher the chi-square test statistic and hence there is stronger evidence that there is some association between treatment assignments of the two experiments.
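As a quick concrete check, running the test on the contingency table from the earlier example flags the dependence immediately (a small sketch, assuming scipy is available):

```python
from scipy import stats

# Observed counts from the Exp 1 / Exp 2 example table above
table = [[2107, 2929],   # Exp 2 : Control
         [2916, 2048]]   # Exp 2 : Treatment
chi2, p, dof, expected = stats.chi2_contingency(table)
# The observed counts deviate from the ~2500 expected under independence
# by roughly 400 per cell, so p is far below 0.01: the two experiments'
# treatment assignments are associated.
print(p)
```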
I ran the above simulation 100 times, counted how many pairs of experiments failed the chi-square test within each simulation, and got a mean of 62% (118 out of 190 pairs)! In other words, if we use this approach, on average we will observe 62% of the experiment pairs failing the chi-square test. I also tried with 5 or 10 experiments (instead of the original 20) and got similar results (~60%).
The solution to fulfilling all the requirements above is actually very simple. All we have to do is to use a hash function. A hash function is any function that can be used to map data of arbitrary size to fixed-size values. This property ensures that the first two requirements (consistency and scalability) are fulfilled.
In order to fulfill the final requirement, instead of hashing only the user ID, we just have to hash the concatenation of user ID and experiment ID.
Below is an example of how to do this using python.
import mmh3
import numpy as np

def treatment_assignment(user_id, exp_id, control_bucket):
    # calculate the number of buckets
    num_buckets = len(control_bucket) * 2
    # this generates a 32-bit integer
    hash_int = mmh3.hash(user_id + exp_id)
    # get the mod of the 32-bit integer
    mod = hash_int % num_buckets
    if mod in control_bucket:
        return 'control'
    else:
        return 'treatment'

# create 50 random integers as the control group
control_bucket = np.random.choice(np.arange(0, 100, 1), 50, replace=False)
treatment_assignment('d3ef2942d7', 'exp_1', control_bucket)
The above function will return either treatment or control depending on your control bucket and experiment ID. We can also expand this to more-than-two-variant experiments.
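One way to sketch that multi-variant extension is to map the hash into one bucket per variant. The snippet below uses the standard library's hashlib in place of mmh3 purely for portability; the function name and variant labels are illustrative, not from the article.

```python
import hashlib

def assign_variant(user_id, exp_id, variants):
    """Deterministically map a user to one of len(variants) equal buckets.

    Hashing user_id + exp_id keeps assignments consistent per user while
    staying independent across experiments. Any stable hash function works;
    hashlib.md5 is used here only because it ships with Python.
    """
    digest = hashlib.md5((user_id + exp_id).encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

The same user and experiment always yield the same variant, while changing the experiment ID reshuffles users independently of every other experiment.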
I ran the above simulation again using this approach and below is the result.
We can see that 73 out of the 100 simulations result in 2 or fewer pairs of experiments (out of 190 pairs) failing the chi-square test, proving that it is a robust way of assigning treatments without violating any of the requirements.
In this blog post, I gave a detailed overview of the requirements of treatment assignment in experiments. I listed some common practices I have seen and explained why they violate some of the requirements. Violating any of the requirements means the experiment results are no longer valid, which is why we should start paying more attention to treatment assignment.
Lastly, I gave a simple solution that fulfills all the requirements and generalizes to more-than-two-variant experiments.
[1] A good hash is hard to find (OfferUp): https://blog.offerup.com/a-good-hash-is-hard-to-find-60e8a201e8ce
|
[
{
"code": null,
"e": 262,
"s": 47,
"text": "In my previous post, I gave a step by step process of planning an A/B Test but did not cover the implementation part. This week, I focus on one of the components of an A/B Test implementation, treatment assignment."
},
{
"code": null,
"e": 768,
"s": 262,
"text": "One of the requirements of a valid randomized controlled experiment is random treatment assignment. Despite its importance, this is unfortunately also the part that most people seem to not care too much about. Perhaps because the concept of random seems so easy, I just randomly assign my experiment subjects to the control and treatment groups and that’s it, right? In a business environment, the reality is actually far from this because there are some requirements that make randomness hard to achieve."
},
{
"code": null,
"e": 1008,
"s": 768,
"text": "In this blog post, I will attempt to highlight some of these requirements, common strategies people use to fulfill them, problems that arise with these strategies and lastly a simple and elegant solution that fulfills all the requirements."
},
{
"code": null,
"e": 1086,
"s": 1008,
"text": "Treatment given to an experiment subject has to be the consistent every time."
},
{
"code": null,
"e": 1119,
"s": 1086,
"text": "I will give a few examples here:"
},
{
"code": null,
"e": 1476,
"s": 1119,
"text": "If Netflix wants to measure the effect of its new recommendation engine, then each user (experiment subject) should receive recommendations from the same recommendation engine as previous times.If I want to test if a new mobile app design has a faster loading time, then each device (experiment subject) needs to load the same mobile app design every time."
},
{
"code": null,
"e": 1671,
"s": 1476,
"text": "If Netflix wants to measure the effect of its new recommendation engine, then each user (experiment subject) should receive recommendations from the same recommendation engine as previous times."
},
{
"code": null,
"e": 1834,
"s": 1671,
"text": "If I want to test if a new mobile app design has a faster loading time, then each device (experiment subject) needs to load the same mobile app design every time."
},
{
"code": null,
"e": 2028,
"s": 1834,
"text": "If we randomly assign treatment every time when an experiment subject is exposed, then we will not be able to guarantee this. (e.g. user A might log in and see different treatments every time.)"
},
{
"code": null,
"e": 2184,
"s": 2028,
"text": "For ease of explanation and the fact that most experiments are done using users as subjects, I am going to treat users like experiment subjects henceforth."
},
{
"code": null,
"e": 2458,
"s": 2184,
"text": "It is rather easy to fulfill the first requirement, we just log down which user is given which treatment when they are first exposed to the experiment. In subsequent login or subsequent exposure, we just look up for this user’s past treatment and assign the same treatment."
},
{
"code": null,
"e": 2706,
"s": 2458,
"text": "This may be feasible for a small company, but as you scale, you will find that this method is not very scalable at all. We can imagine that it is almost impossible to achieve this when we have tens of millions of users and hundreds of experiments."
},
{
"code": null,
"e": 2973,
"s": 2706,
"text": "In order to solve both of the issues above, one of the common strategies that companies use is to utilize the last character or digit of the user ID. In this case, treatment will be consistent and fully scalable because the treatment only depends on a fixed user ID."
},
{
"code": null,
"e": 3211,
"s": 2973,
"text": "This, however, poses another problem. The experiment, by itself, is indeed truly random. But as we do more and more experiments, we will find that the treatment assignments across different experiments are now correlated with each other."
},
{
"code": null,
"e": 3238,
"s": 3211,
"text": "Let’s use an example here."
},
{
"code": null,
"e": 3708,
"s": 3238,
"text": "+-------------------+-----------------+-------------------+-------+| | Exp 1 : Control | Exp 1 : Treatment | Total |+-------------------+-----------------+-------------------+-------+| Exp 2 : Control | 2107 | 2929 | 5036 || Exp 2 : Treatment | 2916 | 2048 | 4964 || Total | 5023 | 4977 | 10000 |+-------------------+-----------------+-------------------+-------+"
},
{
"code": null,
"e": 4102,
"s": 3708,
"text": "Imagine if we have 10,000 users and we did two experiments. In the first experiment (Exp 1), we tested if a reduction in subscription fee increases the conversion rate of paid members and found a significant result. In the second experiment (Exp 2), we tested if an additional feature increases the number of purchases our users make and found that the effect is not statistically significant."
},
{
"code": null,
"e": 4138,
"s": 4102,
"text": "Here is why this might be the case:"
},
{
"code": null,
"e": 4683,
"s": 4138,
"text": "Paid users make more purchases because of discounts.More users in the treatment group in Exp 1 are assigned control group in Exp 2, which implies that Exp 2 control group has more paid users.Users in Exp 2 control group naturally will make more purchases than the treatment group if both groups are given the placebo.Hence, even if the additional feature in Exp 2 did increase the number of purchases, we will not be able to measure the effect accurately because the treatment assignment in Exp 2 is correlated to treatment assignment in Exp 1."
},
{
"code": null,
"e": 4736,
"s": 4683,
"text": "Paid users make more purchases because of discounts."
},
{
"code": null,
"e": 4876,
"s": 4736,
"text": "More users in the treatment group in Exp 1 are assigned control group in Exp 2, which implies that Exp 2 control group has more paid users."
},
{
"code": null,
"e": 5003,
"s": 4876,
"text": "Users in Exp 2 control group naturally will make more purchases than the treatment group if both groups are given the placebo."
},
{
"code": null,
"e": 5231,
"s": 5003,
"text": "Hence, even if the additional feature in Exp 2 did increase the number of purchases, we will not be able to measure the effect accurately because the treatment assignment in Exp 2 is correlated to treatment assignment in Exp 1."
},
{
"code": null,
"e": 5338,
"s": 5231,
"text": "To be truly random, there should not be correlation between treatment assignment in different experiments."
},
{
"code": null,
"e": 5393,
"s": 5338,
"text": "To examine this, I performed the following simulation."
},
{
"code": null,
"e": 5485,
"s": 5393,
"text": "Generate 10,000 user IDs where each id is a hex string of length 10 (Example : d3ef2942d7)."
},
{
"code": null,
"e": 5577,
"s": 5485,
"text": "Generate 10,000 user IDs where each id is a hex string of length 10 (Example : d3ef2942d7)."
},
{
"code": null,
"e": 5712,
"s": 5577,
"text": "import pandas as pdimport randomexp_df = pd.DataFrame(['%010x' % random.randrange(16**10) for x in range(10000)], columns=['user_id'])"
},
{
"code": null,
"e": 5970,
"s": 5712,
"text": "2. Create 20 experiments. For each of the experiments, randomly select 8 out of the 16 possible values (0 - 9, a - f) as control. If the last character of a user ID is in the control, assign them the control group, otherwise, assign them to treatment group."
},
{
"code": null,
"e": 6408,
"s": 5970,
"text": "import numpy as np# number of experimentsj = 20# get all 16 possible valuesrandom_list = exp_df['user_id'].apply(lambda x: x[-1]).unique()control_list = [set(np.random.choice(random_list, 8, replace=False)) for x in range(j)]treatment_list = [set(random_list) - x for x in control_list] for k in range(j): exp_df[f'exp_{k}'] = exp_df['user_id'].apply(lambda x: x[-1]).isin(control_list[k]).map({True:'control', False: 'treatment'})"
},
{
"code": null,
"e": 6541,
"s": 6408,
"text": "3. For each pair of experiments, generate a contingency table like the one shown in the example above and perform a chi-square test."
},
{
"code": null,
"e": 7211,
"s": 6541,
"text": "from scipy import stats# initialize list to store chi-square results all experiment pairschi2_res = []for cols in combinations(exp_df.columns[1:], 2): target_cols = list(cols)# generate contingency table aggregate_df = exp_df[target_cols]\\ .groupby(target_cols)\\ .size()\\ .to_frame()\\ .reset_index()\\ .pivot(index=target_cols[1], columns=target_cols[0])# store chi-square test result chi2_res.append(stats.chi2_contingency(aggregate_df)[1])# number of pairs that fail the chi-square test at alpha = 0.01print((np.array(chi2_res) < 0.01).sum())"
},
{
"code": null,
"e": 7390,
"s": 7211,
"text": "Intuitively, if the treatment assignments are truly independent, we should expect to see the following (assuming 50% of the samples are assigned to control and 50% to treatment)."
},
{
"code": null,
"e": 7860,
"s": 7390,
"text": "+-------------------+-----------------+-------------------+-------+| | Exp 1 : Control | Exp 1 : Treatment | Total |+-------------------+-----------------+-------------------+-------+| Exp 2 : Control | 2500 | 2500 | 5000 || Exp 2 : Treatment | 2500 | 2500 | 5000 || Total | 5000 | 5000 | 10000 |+-------------------+-----------------+-------------------+-------+"
},
{
"code": null,
"e": 8130,
"s": 7860,
"text": "A chi-square test tells us how far the actual count is, from the independent case. The further away it is, the higher the chi-square test statistic and hence there is stronger evidence that there is some association between treatment assignments of the two experiments."
},
{
"code": null,
"e": 8526,
"s": 8130,
"text": "I ran the above simulation 100 times and see how many pairs of experiments failed the chi-square test within each simulation and got a mean of 62% (118 out of 190 pairs)! In other words, if we use this approach, on average, we will observe 62% of the experiment pairs failing the chi-square test. I also tried with 5 or 10 experiments (instead of the original 20) and got similar results (~60%)."
},
{
"code": null,
"e": 8849,
"s": 8526,
"text": "The solution to fulfilling all the requirements above is actually very simple. All we have to do is to use a hash function. A hash function is any function that can be used to map data of arbitrary size to fixed-size values. This property ensures that the first two requirements (consistency and scalability) is fulfilled."
},
{
"code": null,
"e": 8998,
"s": 8849,
"text": "In order to fulfill the final requirement, instead of hashing only the user ID, we just have to hash the concatenation of user ID and experiment ID."
},
{
"code": null,
"e": 9050,
"s": 8998,
"text": "Below is an example of how to do this using python."
},
{
"code": null,
"e": 9632,
"s": 9050,
"text": "import mmh3def treatment_assignment(user_id, exp_id, control_bucket): # calculates the number of buckets num_buckets = len(control_bucket) * 2 # this generates a 32 bit integer hash_int = mmh3.hash(user_id + exp_id) # get the mod of the 32 bit integer mod = hash_int % num_buckets if mod in control_bucket: return 'control' else: return 'treatment' # create 50 random integer as control groupcontrol_bucket = np.random.choice(np.arange(0,100,1), 50, replace=False)treatment_assignment('d3ef2942d7', 'exp_1', control_bucket)"
},
{
"code": null,
"e": 9805,
"s": 9632,
"text": "The above function will return either treatment or control depending on your control bucket and experiment ID. We can also expand this to more-than-two-variant experiments."
},
{
"code": null,
"e": 9883,
"s": 9805,
"text": "I ran the above simulation again using this approach and below is the result."
},
{
"code": null,
"e": 10118,
"s": 9883,
"text": "We can see that 73 out of the 100 simulations result in 2 or fewer pairs of experiments (out of 190 pairs) failing the chi-square test, proving that it is a robust way of assigning treatments without violating any of the requirements."
},
{
"code": null,
"e": 10491,
"s": 10118,
"text": "In this blog post, I gave a detailed overview of the requirements of treatment assignment in experiments. I listed down some common practices I have seen and stated why they violate some of the requirements. Violating any of the requirements will mean the experiment results are no longer valid, which is why we should start putting more attention to treatment assignment."
},
{
"code": null,
"e": 10620,
"s": 10491,
"text": "Lastly, I gave a simple solution that fulfills all the requirements that are generalizable to more-than-two-variant experiments."
}
] |
How to Build Your Geocoding Web App with Python | by Abdishakur | Towards Data Science
|
We often need to convert addresses to geographic locations (latitude and longitude); this is called geocoding. There are several free geocoding APIs (with usage limits, of course) that you can use. In this tutorial, I will show you how to create a free geocoding application where you can drag and drop CSV files with addresses and get (download) the geocoded addresses as CSV.
We build the geocoding App with Python using GeoPandas and Streamlit. Optionally, you can use an IDE like Visual Studio Code to run the App. Let us get started.
This GIF below shows a glimpse of what we are going to build. It will allow users to upload files and interact by choosing the right columns.
The web app uses Streamlit. Streamlit is an easy-to-use web-app-building library written purely in Python. I create a Python file (app.py) in which we are going to write our code.
Let us first import the libraries we need:
import timeimport base64import streamlit as stimport pandas as pdimport geopandas as gpdimport geopyfrom geopy.geocoders import Nominatimfrom geopy.extra.rate_limiter import RateLimiterimport plotly_express as px
We first create the headings and run the app to test that it is working.
st.image("geocoding.jpg")st.title("Geocoding Application in Python")st.markdown("Upload a CSV file with address columns (street name & number, postcode, city)")
Streamlit uses a well-defined API which you can start using immediately. Look at the above code, and I bet you can guess what it does. In the first line, we display an image using st.image(). In the second line, we show the title using st.title(). And finally, we show text using st.markdown(). Now, let us run the app.
Running Streamlit is as simple as writing on a terminal:
streamlit run app.py
Running the App will spin up a browser, and you can see the App is running if there are no errors. The image, the title and the text are there (See below image). We will continue working on this interface.
To upload files, we can use st.file_uploader(). We create a function that allows us to interact with the local data using st.file_uploader().
We create the main function, and inside it, we upload a CSV file. Once the CSV is uploaded, we can use Pandas to read the data and display the first few rows. We will edit this main menu as we progress with building the app. You can peek at the final code for this function in the last section, the App.
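The embedded code for this function did not survive this text export. A minimal sketch of the upload-and-preview step (the helper name is illustrative; in the app the file object would come from st.file_uploader):

```python
import io

import pandas as pd


def load_csv(uploaded_file):
    """Read an uploaded CSV (any file-like object, e.g. the one
    returned by st.file_uploader) into a pandas DataFrame."""
    return pd.read_csv(uploaded_file)


# Inside the Streamlit app this would be wired up roughly as:
#   uploaded = st.file_uploader("Upload a CSV", type=["csv"])
#   if uploaded is not None:
#       df = load_csv(uploaded)
#       st.dataframe(df.head())

# Stand-in for an uploaded file, so the helper can be exercised directly:
demo = io.BytesIO(b"street,postcode,city\nKarlaplan 13,115 20,STOCKHOLM\n")
df = load_csv(demo)
print(df.shape)  # (1, 3)
```

Keeping the parsing in a small helper like this makes the step testable without a running Streamlit server.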
We need a properly formatted address column, so we design the app to accept either a well-formatted column to geocode directly or to create the address column from other columns in the data. Here is an example of a properly formatted address: it has street name and number, postcode, city and country.
Karlaplan 13,115 20,STOCKHOLM, Sweden
The two functions below allow the user to select which option they want; the choice is later processed in the main menu function. The first one formats and creates an address column from DataFrame columns.
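The function bodies were embedded as gists and are missing from this export. A minimal sketch of the column-formatting step (the function and column names are illustrative, not the author's code):

```python
import pandas as pd


def create_address_column(df, street_col, postcode_col, city_col, country):
    """Concatenate separate columns into one 'geo_address' column in the
    'street and number,postcode,city, country' format shown above."""
    df = df.copy()
    df["geo_address"] = (
        df[street_col].astype(str) + ","
        + df[postcode_col].astype(str) + ","
        + df[city_col].astype(str) + ", " + country
    )
    return df


data = pd.DataFrame({"street": ["Karlaplan 13"],
                     "postcode": ["115 20"],
                     "city": ["STOCKHOLM"]})
out = create_address_column(data, "street", "postcode", "city", "Sweden")
print(out["geo_address"][0])  # Karlaplan 13,115 20,STOCKHOLM, Sweden
```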
The second function simply lets the user choose an already properly formatted column to use as the address column.
Now we can start geocoding. The function below uses the Nominatim geocoder and returns a geocoded data frame with Latitude and Longitude columns.
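The geocoding function itself is in a gist missing from this export. A hedged sketch of what such a helper could look like, with the geocoder injectable so the logic can be checked without network access (in the app, Nominatim would be wrapped in geopy's RateLimiter to respect its one-request-per-second usage policy):

```python
import pandas as pd


def geocode_df(df, address_col="geo_address", geocode=None):
    """Apply a geocoding callable to every address and add Latitude and
    Longitude columns. `geocode` must return an object with .latitude
    and .longitude attributes, or None when the address is not found."""
    if geocode is None:
        # Real geocoder, used in the app (requires network access):
        from geopy.extra.rate_limiter import RateLimiter
        from geopy.geocoders import Nominatim
        locator = Nominatim(user_agent="geocoding-app")
        geocode = RateLimiter(locator.geocode, min_delay_seconds=1)
    df = df.copy()
    locations = df[address_col].apply(geocode)
    df["Latitude"] = locations.apply(lambda loc: loc.latitude if loc else None)
    df["Longitude"] = locations.apply(lambda loc: loc.longitude if loc else None)
    return df


# Quick check with a stand-in geocoder (no network needed):
class FakeLocation:
    latitude, longitude = 59.34, 18.09


result = geocode_df(
    pd.DataFrame({"geo_address": ["Karlaplan 13,115 20,STOCKHOLM, Sweden"]}),
    geocode=lambda addr: FakeLocation(),
)
print(result[["Latitude", "Longitude"]].iloc[0].tolist())  # [59.34, 18.09]
```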
Once we geocode the data, we can display it on a map. The function below uses Plotly Express. To pass a figure to Streamlit, you can use st.plotly_chart(). Keep in mind that you can also use other libraries to plot your data.
Once the data is geocoded, the App shows the data frame again with Latitudes and Longitudes. It would be nice also to be able to download the geocoded data.
To download the file, we can write the function below; it allows us to right-click and save the file with a given name.
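The download function is also in a missing gist. A common way to do this in Streamlit is to embed the CSV in a base64 data URI and render it as an HTML link; a sketch under that assumption (the filename and link text are illustrative):

```python
import base64

import pandas as pd


def csv_download_link(df, filename="geocoded.csv"):
    """Build an HTML link embedding the DataFrame as a base64-encoded CSV.
    In the app it would be rendered with
    st.markdown(link, unsafe_allow_html=True)."""
    b64 = base64.b64encode(df.to_csv(index=False).encode()).decode()
    return (f'<a href="data:file/csv;base64,{b64}" '
            f'download="{filename}">Download geocoded CSV</a>')


link = csv_download_link(pd.DataFrame({"Latitude": [59.34], "Longitude": [18.09]}))
print(link.startswith('<a href="data:file/csv;base64,'))  # True
```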
Putting together all the code, the geocoding application code looks like this.
We can add some more functionality and build on top of this to allow other use cases if we want. Here is a glimpse of how to download the geocoded file in the App.
The code is also hosted in this Github repository.
|
[
{
"code": null,
"e": 545,
"s": 172,
"text": "We often need to convert addresses to geographic locations (latitude and longitude), and this is called geocoding. There are several free geocoding API ( with a limit of course) that you can use. In this tutorial, I will show you how to create the free geocoding application that you can drag and drop CSV files with address and get (download) a geocoded addresses as CSV."
},
{
"code": null,
"e": 733,
"s": 545,
"text": "We build the geocoding App with Python using Geopandas and Streamlit. Optionally you need an IDE like Visual studio code to run the App. Let us get started. We import first the libraries."
},
{
"code": null,
"e": 875,
"s": 733,
"text": "This GIF below shows a glimpse of what we are going to build. It will allow users to upload files and interact by choosing the right columns."
},
{
"code": null,
"e": 1045,
"s": 875,
"text": "The web app uses Streamlit. Streamlit is an easy to use web app building library purely in Python. I create a python file ( app.py) which we are going to write our code."
},
{
"code": null,
"e": 1090,
"s": 1045,
"text": "Let us first importing the libraries we need"
},
{
"code": null,
"e": 1303,
"s": 1090,
"text": "import timeimport base64import streamlit as stimport pandas as pdimport geopandas as gpdimport geopyfrom geopy.geocoders import Nominatimfrom geopy.extra.rate_limiter import RateLimiterimport plotly_express as px"
},
{
"code": null,
"e": 1375,
"s": 1303,
"text": "We create first the headlines and run the App to test if it is working."
},
{
"code": null,
"e": 1537,
"s": 1375,
"text": "st.image(“geocoding.jpg”)st.title(“Geocoding Application in Python”)st.markdown(“Uppload a CSV File with address columns (Street name & number, Postcode, City)”)"
},
{
"code": null,
"e": 1892,
"s": 1537,
"text": "Streamlit uses a well-defined API which you can simply start using immediately. Look at the above code, and I bet you can guess what it does. In the first line of the code, we display an image using st.image() . In the second line, we also show a test as tittle using st.tittle() . And finally, we show text using st.markdown() . Now, let us run the App."
},
{
"code": null,
"e": 1949,
"s": 1892,
"text": "Running Streamlit is as simple as writing on a terminal:"
},
{
"code": null,
"e": 1970,
"s": 1949,
"text": "streamlit run app.py"
},
{
"code": null,
"e": 2176,
"s": 1970,
"text": "Running the App will spin up a browser, and you can see the App is running if there are no errors. The image, the title and the text are there (See below image). We will continue working on this interface."
},
{
"code": null,
"e": 2321,
"s": 2176,
"text": "To upload files, we can use st.file_upoader() . We create a function that allows us to interact with the local data using the st.file_upoader()."
},
{
"code": null,
"e": 2630,
"s": 2321,
"text": "We create the main function, and inside it, we upload a CSV file. Once the CSV is uploaded, we can use Pandas to read the data and display the first few rows of the data. We will edit this main menu as we progress building the App. You can peak the final code for this function in the last section — the App."
},
{
"code": null,
"e": 2944,
"s": 2630,
"text": "We need a probably formatted address column, and in this App, therefore we design so that it can accept a well-formatted column and geocode or create the address column from columns in the data. Here is an example of a properly formatted address. It has street name and number, postcode, the city and the country."
},
{
"code": null,
"e": 2982,
"s": 2944,
"text": "Karlaplan 13,115 20,STOCKHOLM, Sweden"
},
{
"code": null,
"e": 3189,
"s": 2982,
"text": "The below two functions allow the user to select which option they want and later process the choice under the main menu function. The first one formats and creates an address column from DataFrame columns."
},
{
"code": null,
"e": 3287,
"s": 3189,
"text": "The second function below simply chooses a probably formatted column to use as an address column."
},
{
"code": null,
"e": 3439,
"s": 3287,
"text": "We can start now geocoding, and below function uses Nominatim geocoder. The function returns a geocoded data frame with Latitude and Longitude columns."
},
{
"code": null,
"e": 3671,
"s": 3439,
"text": "Once we geocode the data, we can display it in a map. This below function uses the Plotly Express. To pass a figure to Streamlit, you can use st.plotly_chart() . Keep in mind also that you can use other libraries to plot your data."
},
{
"code": null,
"e": 3828,
"s": 3671,
"text": "Once the data is geocoded, the App shows the data frame again with Latitudes and Longitudes. It would be nice also to be able to download the geocoded data."
},
{
"code": null,
"e": 3952,
"s": 3828,
"text": "To download the file, we can write the function below, and it allows us to right-click and save the file with a given name."
},
{
"code": null,
"e": 4031,
"s": 3952,
"text": "Putting together all the code, the geocoding application code looks like this."
},
{
"code": null,
"e": 4195,
"s": 4031,
"text": "We can add some more functionality and build on top of this to allow other use cases if we want. Here is a glimpse of how to download the geocoded file in the App."
}
] |
Tryit Editor v3.7
|
CSS Style Images
Tryit: Center an image
|
[
{
"code": null,
"e": 26,
"s": 9,
"text": "CSS Style Images"
}
] |
Why is Binary Search preferred over Ternary Search? - GeeksforGeeks
|
16 Nov, 2021
The following is a simple recursive Binary Search function in C++.
C
Java
Python3
C#
Javascript
// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1int binarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;}
// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1static int binarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;} // This code is contributed by gauravrajput1
# A recursive binary search function. It returns location of x in# given array arr[l..r] is present, otherwise -1def binarySearch(arr, l, r, x): if (r >= l): mid = l + (r - l)//2; # If the element is present at the middle itself if (arr[mid] == x): return mid; # If element is smaller than mid, then it can only be present # in left subarray if (arr[mid] > x): return binarySearch(arr, l, mid-1, x); # Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); # We reach here when element is not present in array return -1; # This code is contributed by umadevi9616
// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1static int binarySearch(int []arr, int l, int r, int x){ if (r >= l) { int mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;} // This code is contributed by gauravrajput1
<script>// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1function binarySearch(arr , l , r , x){ if (r >= l) { var mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;} // This code is contributed by gauravrajput1</script>
The following is a simple recursive Ternary Search function :
C
Java
Python3
C#
PHP
Javascript
// A recursive ternary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1int ternarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid1 = l + (r - l)/3; int mid2 = mid1 + (r - l)/3; // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1-1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2+1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1+1, mid2-1, x); } // We reach here when element is not present in array return -1;}
import java.io.*; class GFG{ // A recursive ternary search function.// It returns location of x in given array// arr[l..r] is present, otherwise -1static int ternarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid1 = l + (r - l) / 3; int mid2 = mid1 + (r - l) / 3; // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1 - 1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2 + 1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1 + 1, mid2 - 1, x); } // We reach here when element is // not present in array return -1;}}
# A recursive ternary search function. It returns location of x in# given array arr[l..r] is present, otherwise -1def ternarySearch(arr, l, r, x): if (r >= l): mid1 = l + (r - l)//3 mid2 = mid1 + (r - l)//3 # If x is present at the mid1 if arr[mid1] == x: return mid1 # If x is present at the mid2 if arr[mid2] == x: return mid2 # If x is present in left one-third if arr[mid1] > x: return ternarySearch(arr, l, mid1-1, x) # If x is present in right one-third if arr[mid2] < x: return ternarySearch(arr, mid2+1, r, x) # If x is present in middle one-third return ternarySearch(arr, mid1+1, mid2-1, x) # We reach here when element is not present in array return -1 # This code is contributed by ankush_953
// A recursive ternary search function.// It returns location of x in given array// arr[l..r] is present, otherwise -1static int ternarySearch(int []arr, int l, int r, int x){ if (r >= l) { int mid1 = l + (r - l) / 3; int mid2 = mid1 + (r - l) / 3; // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1 - 1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2 + 1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1 + 1, mid2 - 1, x); } // We reach here when element is // not present in array return -1;} // This code is contributed by gauravrajput1
<?php// A recursive ternary search function.// It returns location of x in// given array arr[l..r] is// present, otherwise -1function ternarySearch($arr, $l, $r, $x){ if ($r >= $l) { $mid1 = $l + ($r - $l) / 3; $mid2 = $mid1 + ($r - $l) / 3; // If x is present at the mid1 if ($arr[$mid1] == $x) return $mid1; // If x is present // at the mid2 if ($arr[$mid2] == $x) return $mid2; // If x is present in // left one-third if ($arr[$mid1] > $x) return ternarySearch($arr, $l, $mid1 - 1, $x); // If x is present in right one-third if ($arr[$mid2] < $x) return ternarySearch($arr, $mid2 + 1, $r, $x); // If x is present in // middle one-third return ternarySearch($arr, $mid1 + 1, $mid2 - 1, $x);} // We reach here when element// is not present in arrayreturn -1;} // This code is contributed by anuj_67?>
<script> // A recursive ternary search function. // It returns location of x in given array // arr[l..r] is present, otherwise -1 function ternarySearch(arr , l , r , x) { if (r >= l) { var mid1 = l + parseInt((r - l) / 3); var mid2 = mid1 + parseInt((r - l) / 3); // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1 - 1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2 + 1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1 + 1, mid2 - 1, x); } // We reach here when element is // not present in array return -1;} // This code is contributed by gauravrajput1</script>
Which of the above two makes fewer comparisons in the worst case? At first glance, ternary search seems to make fewer comparisons, as it makes Log3n recursive calls while binary search makes Log2n recursive calls. Let us take a closer look. The following is the recurrence for the worst-case comparison count of Binary Search.
T(n) = T(n/2) + 2, T(1) = 1
The following is the recurrence for the worst-case comparison count of Ternary Search.
T(n) = T(n/3) + 4, T(1) = 1
In binary search, there are 2Log2n + 1 comparisons in worst case. In ternary search, there are 4Log3n + 1 comparisons in worst case.
Time Complexity for Binary search = 2clog2n + O(1)
Time Complexity for Ternary search = 4clog3n + O(1)
Therefore, the comparison of Ternary and Binary Searches boils down to comparing the expressions 2Log3n and Log2n. The value of 2Log3n can be written as (2 / Log23) * Log2n. Since (2 / Log23) is greater than one, Ternary Search makes more comparisons than Binary Search in the worst case. Exercise: why does Merge Sort divide the input array into two halves, and not into three or more parts? This article is contributed by Anmol. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
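The inequality can also be checked numerically. A small Python sketch evaluating the worst-case counts derived above (function names are illustrative):

```python
import math


def binary_cmps(n):
    # Worst case from T(n) = T(n/2) + 2:  2*log2(n) + 1 comparisons
    return 2 * math.log2(n) + 1


def ternary_cmps(n):
    # Worst case from T(n) = T(n/3) + 4:  4*log3(n) + 1 comparisons
    return 4 * math.log(n, 3) + 1


for n in (10**3, 10**6, 10**9):
    print(n, round(binary_cmps(n), 1), round(ternary_cmps(n), 1))

# The ratio of the leading terms is the constant 2 / log2(3):
print(round(2 / math.log2(3), 2))  # 1.26 -- greater than 1, so ternary loses
```

For every n, the ternary count exceeds the binary count by that constant factor, which is exactly the conclusion of the derivation.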
vt_m
ankush_953
Sach_Code
GauravRajput1
umadevi9616
Searching
Searching
|
[
{
"code": null,
"e": 24061,
"s": 24033,
"text": "\n16 Nov, 2021"
},
{
"code": null,
"e": 24145,
"s": 24061,
"text": "The following is a simple recursive Binary Search function in C++ taken from here. "
},
{
"code": null,
"e": 24147,
"s": 24145,
"text": "C"
},
{
"code": null,
"e": 24152,
"s": 24147,
"text": "Java"
},
{
"code": null,
"e": 24160,
"s": 24152,
"text": "Python3"
},
{
"code": null,
"e": 24163,
"s": 24160,
"text": "C#"
},
{
"code": null,
"e": 24174,
"s": 24163,
"text": "Javascript"
},
{
"code": "// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1int binarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;}",
"e": 24839,
"s": 24174,
"text": null
},
{
"code": "// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1static int binarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;} // This code is contributed by gauravrajput1",
"e": 25556,
"s": 24839,
"text": null
},
{
"code": "# A recursive binary search function. It returns location of x in# given array arr[l..r] is present, otherwise -1def binarySearch(arr, l, r, x): if (r >= l): mid = l + (r - l)/2; # If the element is present at the middle itself if (arr[mid] == x): return mid; # If element is smaller than mid, then it can only be present # in left subarray if (arr[mid] > x): return binarySearch(arr, l, mid-1, x); # Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); # We reach here when element is not present in array return -1; # This code is contributed by umadevi9616",
"e": 26219,
"s": 25556,
"text": null
},
{
"code": "// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1static int binarySearch(int []arr, int l, int r, int x){ if (r >= l) { int mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;} // This code is contributed by gauravrajput1",
"e": 26936,
"s": 26219,
"text": null
},
{
"code": "<script>// A recursive binary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1function binarySearch(arr , l , r , x){ if (r >= l) { var mid = l + (r - l)/2; // If the element is present at the middle itself if (arr[mid] == x) return mid; // If element is smaller than mid, then it can only be present // in left subarray if (arr[mid] > x) return binarySearch(arr, l, mid-1, x); // Else the element can only be present in right subarray return binarySearch(arr, mid+1, r, x); } // We reach here when element is not present in array return -1;} // This code is contributed by gauravrajput1</script>",
"e": 27653,
"s": 26936,
"text": null
},
{
"code": null,
"e": 27716,
"s": 27653,
"text": "The following is a simple recursive Ternary Search function : "
},
{
"code": null,
"e": 27718,
"s": 27716,
"text": "C"
},
{
"code": null,
"e": 27723,
"s": 27718,
"text": "Java"
},
{
"code": null,
"e": 27731,
"s": 27723,
"text": "Python3"
},
{
"code": null,
"e": 27734,
"s": 27731,
"text": "C#"
},
{
"code": null,
"e": 27738,
"s": 27734,
"text": "PHP"
},
{
"code": null,
"e": 27749,
"s": 27738,
"text": "Javascript"
},
{
"code": "// A recursive ternary search function. It returns location of x in// given array arr[l..r] is present, otherwise -1int ternarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid1 = l + (r - l)/3; int mid2 = mid1 + (r - l)/3; // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1-1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2+1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1+1, mid2-1, x); } // We reach here when element is not present in array return -1;}",
"e": 28562,
"s": 27749,
"text": null
},
{
"code": "import java.io.*; class GFG{public static void main (String[] args){ // A recursive ternary search function.// It returns location of x in given array// arr[l..r] is present, otherwise -1static int ternarySearch(int arr[], int l, int r, int x){ if (r >= l) { int mid1 = l + (r - l) / 3; int mid2 = mid1 + (r - l) / 3; // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1 - 1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2 + 1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1 + 1, mid2 - 1, x); } // We reach here when element is // not present in array return -1;}}",
"e": 29568,
"s": 28562,
"text": null
},
{
"code": "# A recursive ternary search function. It returns location of x in# given array arr[l..r] is present, otherwise -1def ternarySearch(arr, l, r, x): if (r >= l): mid1 = l + (r - l)//3 mid2 = mid1 + (r - l)//3 # If x is present at the mid1 if arr[mid1] == x: return mid1 # If x is present at the mid2 if arr[mid2] == x: return mid2 # If x is present in left one-third if arr[mid1] > x: return ternarySearch(arr, l, mid1-1, x) # If x is present in right one-third if arr[mid2] < x: return ternarySearch(arr, mid2+1, r, x) # If x is present in middle one-third return ternarySearch(arr, mid1+1, mid2-1, x) # We reach here when element is not present in array return -1 # This code is contributed by ankush_953 ",
"e": 30430,
"s": 29568,
"text": null
},
{
"code": " // A recursive ternary search function.// It returns location of x in given array// arr[l..r] is present, otherwise -1static int ternarySearch(int []arr, int l, int r, int x){ if (r >= l) { int mid1 = l + (r - l) / 3; int mid2 = mid1 + (r - l) / 3; // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1 - 1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2 + 1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1 + 1, mid2 - 1, x); } // We reach here when element is // not present in array return -1;} // This code is contributed by gauravrajput1",
"e": 31412,
"s": 30430,
"text": null
},
{
"code": "<?php// A recursive ternary search function.// It returns location of x in// given array arr[l..r] is// present, otherwise -1function ternarySearch($arr, $l, $r, $x){ if ($r >= $l) { $mid1 = $l + ($r - $l) / 3; $mid2 = $mid1 + ($r - l) / 3; // If x is present at the mid1 if ($arr[mid1] == $x) return $mid1; // If x is present // at the mid2 if ($arr[$mid2] == $x) return $mid2; // If x is present in // left one-third if ($arr[$mid1] > $x) return ternarySearch($arr, $l, $mid1 - 1, $x); // If x is present in right one-third if ($arr[$mid2] < $x) return ternarySearch($arr, $mid2 + 1, $r, $x); // If x is present in // middle one-third return ternarySearch($arr, $mid1 + 1, $mid2 - 1, $x);} // We reach here when element// is not present in arrayreturn -1;} // This code is contributed by anuj_67?>",
"e": 32494,
"s": 31412,
"text": null
},
{
"code": "<script> // A recursive ternary search function. // It returns location of x in given array // arr[l..r] is present, otherwise -1 function ternarySearch(arr , l , r , x) { if (r >= l) { var mid1 = l + parseInt((r - l) / 3); var mid2 = mid1 + parseInt((r - l) / 3); // If x is present at the mid1 if (arr[mid1] == x) return mid1; // If x is present at the mid2 if (arr[mid2] == x) return mid2; // If x is present in left one-third if (arr[mid1] > x) return ternarySearch(arr, l, mid1 - 1, x); // If x is present in right one-third if (arr[mid2] < x) return ternarySearch(arr, mid2 + 1, r, x); // If x is present in middle one-third return ternarySearch(arr, mid1 + 1, mid2 - 1, x); } // We reach here when element is // not present in array return -1; // This code is contributed by gauravrajput1</script>",
"e": 33545,
"s": 32494,
"text": null
},
{
"code": null,
"e": 33889,
"s": 33545,
"text": "Which of the above two does less comparisons in worst case? From the first look, it seems the ternary search does less number of comparisons as it makes Log3n recursive calls, but binary search makes Log2n recursive calls. Let us take a closer look. The following is recursive formula for counting comparisons in worst case of Binary Search. "
},
{
"code": null,
"e": 33921,
"s": 33889,
"text": " T(n) = T(n/2) + 2, T(1) = 1"
},
{
"code": null,
"e": 34016,
"s": 33921,
"text": "The following is recursive formula for counting comparisons in worst case of Ternary Search. "
},
{
"code": null,
"e": 34047,
"s": 34016,
"text": " T(n) = T(n/3) + 4, T(1) = 1"
},
{
"code": null,
"e": 34182,
"s": 34047,
"text": "In binary search, there are 2Log2n + 1 comparisons in worst case. In ternary search, there are 4Log3n + 1 comparisons in worst case. "
},
{
"code": null,
"e": 34285,
"s": 34182,
"text": "Time Complexity for Binary search = 2clog2n + O(1)\nTime Complexity for Ternary search = 4clog3n + O(1)"
},
{
"code": null,
"e": 34835,
"s": 34285,
"text": "Therefore, the comparison of Ternary and Binary Searches boils down the comparison of expressions 2Log3n and Log2n . The value of 2Log3n can be written as (2 / Log23) * Log2n . Since the value of (2 / Log23) is more than one, Ternary Search does more comparisons than Binary Search in worst case.Exercise: Why Merge Sort divides input array in two halves, why not in three or more parts?This article is contributed by Anmol. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above "
},
{
"code": null,
"e": 34840,
"s": 34835,
"text": "vt_m"
},
{
"code": null,
"e": 34851,
"s": 34840,
"text": "ankush_953"
},
{
"code": null,
"e": 34861,
"s": 34851,
"text": "Sach_Code"
},
{
"code": null,
"e": 34875,
"s": 34861,
"text": "GauravRajput1"
},
{
"code": null,
"e": 34887,
"s": 34875,
"text": "umadevi9616"
},
{
"code": null,
"e": 34897,
"s": 34887,
"text": "Searching"
},
{
"code": null,
"e": 34907,
"s": 34897,
"text": "Searching"
},
{
"code": null,
"e": 35005,
"s": 34907,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35014,
"s": 35005,
"text": "Comments"
},
{
"code": null,
"e": 35027,
"s": 35014,
"text": "Old Comments"
},
{
"code": null,
"e": 35061,
"s": 35027,
"text": "Most frequent element in an array"
},
{
"code": null,
"e": 35104,
"s": 35061,
"text": "Find the index of an array element in Java"
},
{
"code": null,
"e": 35165,
"s": 35104,
"text": "Count number of occurrences (or frequency) in a sorted array"
},
{
"code": null,
"e": 35188,
"s": 35165,
"text": "Two Pointers Technique"
},
{
"code": null,
"e": 35224,
"s": 35188,
"text": "Best First Search (Informed Search)"
},
{
"code": null,
"e": 35236,
"s": 35224,
"text": "Jump Search"
},
{
"code": null,
"e": 35256,
"s": 35236,
"text": "Find a peak element"
},
{
"code": null,
"e": 35278,
"s": 35256,
"text": "Quickselect Algorithm"
},
{
"code": null,
"e": 35369,
"s": 35278,
"text": "Split the given array into K sub-arrays such that maximum sum of all sub arrays is minimum"
}
] |
queue::empty() and queue::size() in C++ STL
|
In this article, we will discuss the working, syntax, and examples of the queue::empty() and queue::size() functions in C++ STL.
Queue is a simple sequence container adaptor defined in the C++ STL which performs insertion and deletion of data in FIFO (First In First Out) fashion. Elements are inserted at the back and removed from the front of the queue. The C++ STL already provides a predefined queue template that inserts and removes data in exactly this fashion.
queue::empty() is an inbuilt function in C++ STL, declared in the <queue> header file. It is used to check whether the associated queue container is empty or not. The function returns true if the queue is empty (its size is 0), and false if the queue contains any elements.
myqueue.empty();
This function accepts no parameter
This function returns true if the size of the associated queue container is 0, else will return false.
Input: queue<int> myqueue = {10, 20, 30, 40};
myqueue.empty();
Output:
False
Input: queue<int> myqueue;
myqueue.empty();
Output:
True
#include <iostream>
#include <queue>
using namespace std;
int main(){
queue<int> Queue;
Queue.push(10);
Queue.push(20);
Queue.push(30);
Queue.push(40);
//check is queue is empty or not
if (Queue.empty()){
cout<<"Queue is empty";
}
else{
cout <<"Queue is not empty";
}
return 0;
}
If we run the above code it will generate the following output −
Queue is not empty
queue::size() is an inbuilt function in C++ STL which is declared in the <queue> header file. queue::size() is used to query the size of the associated queue container. This function returns an unsigned integer value, i.e. the number of elements present in the queue container. It returns 0 if the queue is empty, i.e. has no elements in it.
myqueue.size();
This function accepts no parameters.
This function returns an unsigned integer: the size of the queue container associated with the function.
Input: queue<int> myqueue = {10, 20, 30, 40};
myqueue.size();
Output:
4
Input: queue<int> myqueue;
myqueue.size();
Output:
0
#include <iostream>
#include <queue>
using namespace std;
int main(){
queue<int> Queue;
Queue.push(10);
Queue.push(20);
Queue.push(30);
Queue.push(40);
cout<<"size of Queue is : "<<Queue.size();
return 0;
}
If we run the above code it will generate the following output −
size of Queue is : 4
|
[
{
"code": null,
"e": 1191,
"s": 1062,
"text": "In this article we will be discussing the working, syntax and examples of queue::empty() and queue::size() functions in C++ STL."
},
{
"code": null,
"e": 1605,
"s": 1191,
"text": "Queue is a simple sequence or data structure defined in the C++ STL which does insertion and deletion of the data in FIFO(First In First Out) fashion. The data in a queue is stored in continuous manner. The elements are inserted at the end and removed from the starting of the queue. In C++ STL there is already a predefined template of queue, which inserts and removes the data in the similar fashion of a queue."
},
{
"code": null,
"e": 1953,
"s": 1605,
"text": "queue::empty() is an inbuilt function in C++ STL which is declared in header file. queue::empty() is used to check whether the associated queue container is empty or not. This function returns either true or false, if the queue is empty (size is 0) then the function returns true, else if the queue is having some value then it will return false."
},
{
"code": null,
"e": 1970,
"s": 1953,
"text": "myqueue.empty();"
},
{
"code": null,
"e": 2005,
"s": 1970,
"text": "This function accepts no parameter"
},
{
"code": null,
"e": 2108,
"s": 2005,
"text": "This function returns true if the size of the associated queue container is 0, else will return false."
},
{
"code": null,
"e": 2266,
"s": 2108,
"text": "Input: queue<int> myqueue = {10, 20, 30, 40};\n myqueue.empty();\nOutput:\n False\nInput: queue<int> myqueue;\n myqueue.empty();\nOutput:\n True"
},
{
"code": null,
"e": 2277,
"s": 2266,
"text": " Live Demo"
},
{
"code": null,
"e": 2602,
"s": 2277,
"text": "#include <iostream>\n#include <queue>\nusing namespace std;\nint main(){\n queue<int> Queue;\n Queue.push(10);\n Queue.push(20);\n Queue.push(30);\n Queue.push(40);\n //check is queue is empty or not\n if (Queue.empty()){\n cout<<\"Queue is empty\";\n }\n else{\n cout <<\"Queue is not empty\";\n }\n return 0;\n}"
},
{
"code": null,
"e": 2667,
"s": 2602,
"text": "If we run the above code it will generate the following output −"
},
{
"code": null,
"e": 2686,
"s": 2667,
"text": "Queue is not empty"
},
{
"code": null,
"e": 3072,
"s": 2686,
"text": "queue::size() is an inbuilt function in C++ STL which is declared in <queue> header file. queue::size() is used to check whether the size of the associated queue container. This function returns an unsigned int value, i.e the size of the queue container, or the number of elements present in a queue container. This function returns 0 if the queue is empty or having no elements in it."
},
{
"code": null,
"e": 3088,
"s": 3072,
"text": "myqueue.size();"
},
{
"code": null,
"e": 3123,
"s": 3088,
"text": "This function accepts no parameter"
},
{
"code": null,
"e": 3221,
"s": 3123,
"text": "This function returns unsigned int, the size of the queue container associated with the function."
},
{
"code": null,
"e": 3369,
"s": 3221,
"text": "Input: queue<int> myqueue = {10, 20 30, 40};\n myqueue.size();\nOutput:\n 4\nInput: queue<int> myqueue;\n myqueue.size();\nOutput:\n 0"
},
{
"code": null,
"e": 3380,
"s": 3369,
"text": " Live Demo"
},
{
"code": null,
"e": 3611,
"s": 3380,
"text": "#include <iostream>\n#include <queue>\nusing namespace std;\nint main(){\n queue<int> Queue;\n Queue.push(10);\n Queue.push(20);\n Queue.push(30);\n Queue.push(40);\n cout<<\"size of Queue is : \"<<Queue.size();\n return 0;\n}"
},
{
"code": null,
"e": 3676,
"s": 3611,
"text": "If we run the above code it will generate the following output −"
},
{
"code": null,
"e": 3697,
"s": 3676,
"text": "size of Queue is : 4"
}
] |
Count of N digit Numbers having no pair of equal consecutive Digits - GeeksforGeeks
|
14 Jun, 2021
Given an integer N, the task is to find the total count of N digit numbers such that no two consecutive digits are equal.
Examples:
Input: N = 2 Output: 81 Explanation: Count possible 2-digit numbers, i.e. the numbers in the range [10, 99] = 90 All 2-digit numbers having equal consecutive digits are {11, 22, 33, 44, 55, 66, 77, 88, 99}. Therefore, the required count = 90 – 9 = 81
Input: N = 1Output: 10
Naive Approach: The simplest approach to solve the problem is to iterate over all possible N-digit numbers and check for every number if any two consecutive digits are equal or not.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program to implement// the above approach#include<bits/stdc++.h>using namespace std; // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsvoid count(int N){ // Base Case if (N == 1) { cout << 10 << endl; return; } // Lowest N-digit number int l = pow(10, N - 1); // Highest N-digit number int r = pow(10, N) - 1; // Stores the count of all // required numbers int ans = 0; // Iterate over all N-digit numbers for(int i = l; i <= r; i++) { string s = to_string(i); int flag = 0; // Iterate over all digits for(int j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s[j] == s[j - 1]) { flag = 1; break; } } if (flag == 0) ans++; } cout << ans << endl;} // Driver Codeint main(){ int N = 2; count(N); return 0;} // This code is contributed by rutvik_56
// Java Program to implement// the above approachimport java.util.*;class GFG { // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { System.out.println(10); return; } // Lowest N-digit number int l = (int)Math.pow(10, N - 1); // Highest N-digit number int r = (int)Math.pow(10, N) - 1; // Stores the count of all // required numbers int ans = 0; // Iterate over all N-digit numbers for (int i = l; i <= r; i++) { String s = Integer.toString(i); int flag = 0; // Iterate over all digits for (int j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s.charAt(j) == s.charAt(j - 1)) { flag = 1; break; } } if (flag == 0) ans++; } System.out.println(ans); } // Driver Code public static void main(String[] args) { int N = 2; count(N); }}
# Python3 Program to implement# the above approach # Function to count the number# of N-digit numbers with no# equal pair of consecutive digitsdef count(N): # Base Case if (N == 1): print(10); return; # Lowest N-digit number l = int(pow(10, N - 1)); # Highest N-digit number r = int(pow(10, N) - 1); # Stores the count of all # required numbers ans = 0; # Iterate over all N-digit numbers for i in range(l, r + 1): s = str(i); flag = 0; # Iterate over all digits for j in range(1, N): # Check for equal pair of # adjacent digits if (s[j] == s[j - 1]): flag = 1; break; if (flag == 0): ans+=1; print(ans); # Driver Codeif __name__ == '__main__': N = 2; count(N); # This code is contributed by sapnasingh4991
// C# program to implement// the above approachusing System; class GFG{ // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitspublic static void count(int N){ // Base Case if (N == 1) { Console.WriteLine(10); return; } // Lowest N-digit number int l = (int)Math.Pow(10, N - 1); // Highest N-digit number int r = (int)Math.Pow(10, N) - 1; // Stores the count of all // required numbers int ans = 0; // Iterate over all N-digit numbers for(int i = l; i <= r; i++) { String s = i.ToString(); int flag = 0; // Iterate over all digits for(int j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s[j] == s[j - 1]) { flag = 1; break; } } if (flag == 0) ans++; } Console.WriteLine(ans);} // Driver Codepublic static void Main(String[] args){ int N = 2; count(N);}} // This code is contributed by Princi Singh
<script> // Javascript program to implement// the above approach // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsfunction count(N){ // Base Case if (N == 1) { document.write(10 + "<br>"); return; } // Lowest N-digit number var l = Math.pow(10, N - 1); // Highest N-digit number var r = Math.pow(10, N) - 1; // Stores the count of all // required numbers var ans = 0; // Iterate over all N-digit numbers for(var i = l; i <= r; i++) { var s = (i.toString()); var flag = 0; // Iterate over all digits for(var j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s[j] == s[j - 1]) { flag = 1; break; } } if (flag == 0) ans++; } document.write( ans + "<br>");} // Driver Codevar N = 2; count(N); // This code is contributed by itsok </script>
81
Time Complexity: O(N * 10^N), where N is the given integer. Auxiliary Space: O(1)
Dynamic Programming Approach: The above approach can be optimized using Dynamic Programming approach. Follow the steps below to solve the problem:
Initialize DP[][], where DP[i][j] stores the count of numbers having i digits, and ending with j.
Iterate from 2 to N and follow the steps below:
Calculate the total count of valid i-1 digit numbers by adding all the values of DP[i-1][j] where j ranges from 0 to 9, and store it in temp.
Update DP[i][j] = temp – DP[i-1][j], where j ranges from 0 to 9.
The result is the sum of DP[N][j], where j ranges from 0 to 9.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ Program to implement// the above approach#include<bits/stdc++.h>using namespace std; // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsvoid count(int N){ // Base Case if (N == 1) { cout << (10) << endl; return; } int dp[N][10]; memset(dp, 0, sizeof(dp)); for (int i = 1; i < 10; i++) dp[0][i] = 1; for (int i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers int temp = 0; for (int j = 0; j < 10; j++) temp += dp[i - 1][j]; // Update dp[][] table for (int j = 0; j < 10; j++) dp[i][j] = temp - dp[i - 1][j]; } // Calculate the count of // required N-digit numbers int ans = 0; for (int i = 0; i < 10; i++) ans += dp[N - 1][i]; cout << ans << endl;} // Driver Codeint main(){ int N = 2; count(N); return 0;} // This code is contributed by sapnasingh4991
// Java Program to implement// of the above approachimport java.util.*;class GFG { // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { System.out.println(10); return; } int dp[][] = new int[N][10]; for (int i = 1; i < 10; i++) dp[0][i] = 1; for (int i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers int temp = 0; for (int j = 0; j < 10; j++) temp += dp[i - 1][j]; // Update dp[][] table for (int j = 0; j < 10; j++) dp[i][j] = temp - dp[i - 1][j]; } // Calculate the count of // required N-digit numbers int ans = 0; for (int i = 0; i < 10; i++) ans += dp[N - 1][i]; System.out.println(ans); } // Driver Code public static void main(String[] args) { int N = 2; count(N); }}
# Python3 Program to implement# of the above approach # Function to count the number# of N-digit numbers with no# equal pair of consecutive digitsdef count(N): # Base Case if (N == 1): print(10); return; dp = [[0 for i in range(10)] for j in range(N)] for i in range(1,10): dp[0][i] = 1; for i in range(1, N): # Calculate the total count # of valid (i-1)-digit numbers temp = 0; for j in range(10): temp += dp[i - 1][j]; # Update dp table for j in range(10): dp[i][j] = temp - dp[i - 1][j]; # Calculate the count of # required N-digit numbers ans = 0; for i in range(10): ans += dp[N - 1][i]; print(ans); # Driver Codeif __name__ == '__main__': N = 2; count(N); # This code is contributed by Amit Katiyar
// C# Program to implement// of the above approachusing System;class GFG{ // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { Console.WriteLine(10); return; } int [,]dp = new int[N, 10]; for (int i = 1; i < 10; i++) dp[0, i] = 1; for (int i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers int temp = 0; for (int j = 0; j < 10; j++) temp += dp[i - 1, j]; // Update [,]dp table for (int j = 0; j < 10; j++) dp[i, j] = temp - dp[i - 1, j]; } // Calculate the count of // required N-digit numbers int ans = 0; for (int i = 0; i < 10; i++) ans += dp[N - 1, i]; Console.WriteLine(ans); } // Driver Code public static void Main(String[] args) { int N = 2; count(N); }} // This code is contributed by sapnasingh4991
<script> // Javascript program to implement// the above approach // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsfunction count(N){ // Base Case if (N == 1) { document.write((10) + "<br>"); return; } var dp = Array.from(Array(N), ()=> Array(10).fill(0)); for(var i = 1; i < 10; i++) dp[0][i] = 1; for(var i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers var temp = 0; for(var j = 0; j < 10; j++) temp += dp[i - 1][j]; // Update dp[][] table for(var j = 0; j < 10; j++) dp[i][j] = temp - dp[i - 1][j]; } // Calculate the count of // required N-digit numbers var ans = 0; for(var i = 0; i < 10; i++) ans += dp[N - 1][i]; document.write(ans);} // Driver Codevar N = 2; count(N); // This code is contributed by noob2000 </script>
81
Time Complexity: O(N), where N is the given integer. Auxiliary Space: O(N)
Efficient Approach: The above approach can be further optimized by observing that for any N-digit number (N > 1), the required answer is 9^N: the leading digit has 9 non-zero choices and each subsequent digit has 9 choices (any digit except the previous one). The power 9^N can be calculated using Binary Exponentiation.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program to implement// the above approach#include <bits/stdc++.h>using namespace std; // Iterative Function to calculate// (x^y) % mod in O(log y)int power(int x, int y, int mod){ // Initialize result int res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res;} // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsvoid count(int N){ // Base Case if (N == 1) { cout << 10 << endl; return; } cout << (power(9, N, 1000000007)) << endl;} // Driver Codeint main(){ int N = 3; count(N); return 0;} // This code is contributed by sapnasingh4991
// Java Program to implement// of the above approachimport java.util.*; class GFG { // Iterative Function to calculate // (x^y) % mod in O(log y) static int power(int x, int y, int mod) { // Initialize result int res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res; } // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { System.out.println(10); return; } System.out.println(power(9, N, 1000000007)); } // Driver Code public static void main(String[] args) { int N = 3; count(N); }}
# Python3 Program to implement# of the above approach # Iterative Function to calculate# (x^y) % mod in O(log y)def power(x, y, mod): # Initialize result res = 1; # Update x if x >= mod x = x % mod; # If x is divisible by mod if (x == 0): return 0; while (y > 0): # If y is odd, multiply x # with result if ((y & 1) == 1): res = (res * x) % mod; # y must be even now # y = y / 2 y = y >> 1; x = (x * x) % mod; return res; # Function to count the number# of N-digit numbers with no# equal pair of consecutive digitsdef count(N): # Base Case if (N == 1): print(10); return; print(power(9, N, 1000000007)); # Driver Codeif __name__ == '__main__': N = 3; count(N); # This code is contributed by Rohit_ranjan
// C# program to implement// of the above approachusing System; class GFG{ // Iterative Function to calculate// (x^y) % mod in O(log y)static int power(int x, int y, int mod){ // Initialize result int res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res;} // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitspublic static void count(int N){ // Base Case if (N == 1) { Console.WriteLine(10); return; } Console.WriteLine(power(9, N, 1000000007));} // Driver Codepublic static void Main(String[] args){ int N = 3; count(N);}} // This code is contributed by 29AjayKumar
<script> // Javascript program to implement// of the above approach // Iterative Function to calculate// (x^y) % mod in O(log y)function power(x, y, mod){ // Initialize result let res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res;} // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsfunction count(N){ // Base Case if (N == 1) { document.write(10); return; } document.write(power(9, N, 1000000007));} // Driver Codelet N = 3; count(N); // this code is contributed by shivanisinghss2110 </script>
729
Time Complexity: O(log N). Space Complexity: O(1)
princi singh
rutvik_56
sapnasingh4991
29AjayKumar
Rohit_ranjan
amit143katiyar
itsok
noob2000
shivanisinghss2110
maths-power
number-digits
Dynamic Programming
Mathematical
Searching
Searching
Dynamic Programming
Mathematical
Optimal Substructure Property in Dynamic Programming | DP-2
Maximum sum such that no two elements are adjacent
Min Cost Path | DP-6
Optimal Binary Search Tree | DP-24
Maximum Subarray Sum using Divide and Conquer algorithm
Write a program to print all permutations of a given string
C++ Data Types
Set in C++ Standard Template Library (STL)
Program to find GCD or HCF of two numbers
Modulo Operator (%) in C/C++ with Examples
|
[
{
"code": null,
"e": 24698,
"s": 24670,
"text": "\n14 Jun, 2021"
},
{
"code": null,
"e": 24820,
"s": 24698,
"text": "Given an integer N, the task is to find the total count of N digit numbers such that no two consecutive digits are equal."
},
{
"code": null,
"e": 24830,
"s": 24820,
"text": "Examples:"
},
{
"code": null,
"e": 25081,
"s": 24830,
"text": "Input: N = 2 Output: 81 Explanation: Count possible 2-digit numbers, i.e. the numbers in the range [10, 99] = 90 All 2-digit numbers having equal consecutive digits are {11, 22, 33, 44, 55, 66, 77, 88, 99}. Therefore, the required count = 90 – 9 = 81"
},
{
"code": null,
"e": 25104,
"s": 25081,
"text": "Input: N = 1Output: 10"
},
{
"code": null,
"e": 25286,
"s": 25104,
"text": "Naive Approach: The simplest approach to solve the problem is to iterate over all possible N-digit numbers and check for every number if any two consecutive digits are equal or not."
},
{
"code": null,
"e": 25337,
"s": 25286,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 25341,
"s": 25337,
"text": "C++"
},
{
"code": null,
"e": 25346,
"s": 25341,
"text": "Java"
},
{
"code": null,
"e": 25354,
"s": 25346,
"text": "Python3"
},
{
"code": null,
"e": 25357,
"s": 25354,
"text": "C#"
},
{
"code": null,
"e": 25368,
"s": 25357,
"text": "Javascript"
},
{
"code": "// C++ program to implement// the above approach#include<bits/stdc++.h>using namespace std; // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsvoid count(int N){ // Base Case if (N == 1) { cout << 10 << endl; return; } // Lowest N-digit number int l = pow(10, N - 1); // Highest N-digit number int r = pow(10, N) - 1; // Stores the count of all // required numbers int ans = 0; // Iterate over all N-digit numbers for(int i = l; i <= r; i++) { string s = to_string(i); int flag = 0; // Iterate over all digits for(int j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s[j] == s[j - 1]) { flag = 1; break; } } if (flag == 0) ans++; } cout << ans << endl;} // Driver Codeint main(){ int N = 2; count(N); return 0;} // This code is contributed by rutvik_56",
"e": 26431,
"s": 25368,
"text": null
},
{
"code": "// Java Program to implement// the above approachimport java.util.*;class GFG { // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { System.out.println(10); return; } // Lowest N-digit number int l = (int)Math.pow(10, N - 1); // Highest N-digit number int r = (int)Math.pow(10, N) - 1; // Stores the count of all // required numbers int ans = 0; // Iterate over all N-digit numbers for (int i = l; i <= r; i++) { String s = Integer.toString(i); int flag = 0; // Iterate over all digits for (int j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s.charAt(j) == s.charAt(j - 1)) { flag = 1; break; } } if (flag == 0) ans++; } System.out.println(ans); } // Driver Code public static void main(String[] args) { int N = 2; count(N); }}",
"e": 27641,
"s": 26431,
"text": null
},
{
"code": "# Python3 Program to implement# the above approach # Function to count the number# of N-digit numbers with no# equal pair of consecutive digitsdef count(N): # Base Case if (N == 1): print(10); return; # Lowest N-digit number l = int(pow(10, N - 1)); # Highest N-digit number r = int(pow(10, N) - 1); # Stores the count of all # required numbers ans = 0; # Iterate over all N-digit numbers for i in range(l, r + 1): s = str(i); flag = 0; # Iterate over all digits for j in range(1, N): # Check for equal pair of # adjacent digits if (s[j] == s[j - 1]): flag = 1; break; if (flag == 0): ans+=1; print(ans); # Driver Codeif __name__ == '__main__': N = 2; count(N); # This code is contributed by sapnasingh4991",
"e": 28542,
"s": 27641,
"text": null
},
{
"code": "// C# program to implement// the above approachusing System; class GFG{ // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitspublic static void count(int N){ // Base Case if (N == 1) { Console.WriteLine(10); return; } // Lowest N-digit number int l = (int)Math.Pow(10, N - 1); // Highest N-digit number int r = (int)Math.Pow(10, N) - 1; // Stores the count of all // required numbers int ans = 0; // Iterate over all N-digit numbers for(int i = l; i <= r; i++) { String s = i.ToString(); int flag = 0; // Iterate over all digits for(int j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s[j] == s[j - 1]) { flag = 1; break; } } if (flag == 0) ans++; } Console.WriteLine(ans);} // Driver Codepublic static void Main(String[] args){ int N = 2; count(N);}} // This code is contributed by Princi Singh",
"e": 29627,
"s": 28542,
"text": null
},
{
"code": "<script> // Javascript program to implement// the above approach // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsfunction count(N){ // Base Case if (N == 1) { document.write(10 + \"<br>\"); return; } // Lowest N-digit number var l = Math.pow(10, N - 1); // Highest N-digit number var r = Math.pow(10, N) - 1; // Stores the count of all // required numbers var ans = 0; // Iterate over all N-digit numbers for(var i = l; i <= r; i++) { var s = (i.toString()); var flag = 0; // Iterate over all digits for(var j = 1; j < N; j++) { // Check for equal pair of // adjacent digits if (s[j] == s[j - 1]) { flag = 1; break; } } if (flag == 0) ans++; } document.write( ans + \"<br>\");} // Driver Codevar N = 2; count(N); // This code is contributed by itsok </script>",
"e": 30660,
"s": 29627,
"text": null
},
{
"code": null,
"e": 30663,
"s": 30660,
"text": "81"
},
{
"code": null,
"e": 30747,
"s": 30665,
"text": "Time Complexity: O(N * (10N), where N is the given integer. Auxiliary Space: O(1)"
},
{
"code": null,
"e": 30894,
"s": 30747,
"text": "Dynamic Programming Approach: The above approach can be optimized using Dynamic Programming approach. Follow the steps below to solve the problem:"
},
{
"code": null,
"e": 30992,
"s": 30894,
"text": "Initialize DP[][], where DP[i][j] stores the count of numbers having i digits, and ending with j."
},
{
"code": null,
"e": 31240,
"s": 30992,
"text": "Iterate from 2 to N and follow the steps: Calculate the total count of valid i-1 digit numbers by adding all the values of DP[i-1][j] where j ranges from 0 to 9, and store it in temp.Update DP[i][j] = temp – DP[i-1][j], where j ranges from 0 to 9."
},
{
"code": null,
"e": 31382,
"s": 31240,
"text": "Calculate the total count of valid i-1 digit numbers by adding all the values of DP[i-1][j] where j ranges from 0 to 9, and store it in temp."
},
{
"code": null,
"e": 31447,
"s": 31382,
"text": "Update DP[i][j] = temp – DP[i-1][j], where j ranges from 0 to 9."
},
{
"code": null,
"e": 31509,
"s": 31447,
"text": "The result is the sum of DP[N][j], where j ranges from 0 to 9"
},
{
"code": null,
"e": 31560,
"s": 31509,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 31564,
"s": 31560,
"text": "C++"
},
{
"code": null,
"e": 31569,
"s": 31564,
"text": "Java"
},
{
"code": null,
"e": 31577,
"s": 31569,
"text": "Python3"
},
{
"code": null,
"e": 31580,
"s": 31577,
"text": "C#"
},
{
"code": null,
"e": 31591,
"s": 31580,
"text": "Javascript"
},
{
"code": "// C++ Program to implement// the above approach#include<bits/stdc++.h>using namespace std; // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsvoid count(int N){ // Base Case if (N == 1) { cout << (10) << endl; return; } int dp[N][10]; memset(dp, 0, sizeof(dp)); for (int i = 1; i < 10; i++) dp[0][i] = 1; for (int i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers int temp = 0; for (int j = 0; j < 10; j++) temp += dp[i - 1][j]; // Update dp[][] table for (int j = 0; j < 10; j++) dp[i][j] = temp - dp[i - 1][j]; } // Calculate the count of // required N-digit numbers int ans = 0; for (int i = 0; i < 10; i++) ans += dp[N - 1][i]; cout << ans << endl;} // Driver Codeint main(){ int N = 2; count(N); return 0;} // This code is contributed by sapnasingh4991",
"e": 32495,
"s": 31591,
"text": null
},
{
"code": "// Java Program to implement// of the above approachimport java.util.*;class GFG { // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { System.out.println(10); return; } int dp[][] = new int[N][10]; for (int i = 1; i < 10; i++) dp[0][i] = 1; for (int i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers int temp = 0; for (int j = 0; j < 10; j++) temp += dp[i - 1][j]; // Update dp[][] table for (int j = 0; j < 10; j++) dp[i][j] = temp - dp[i - 1][j]; } // Calculate the count of // required N-digit numbers int ans = 0; for (int i = 0; i < 10; i++) ans += dp[N - 1][i]; System.out.println(ans); } // Driver Code public static void main(String[] args) { int N = 2; count(N); }}",
"e": 33585,
"s": 32495,
"text": null
},
{
"code": "# Python3 Program to implement# of the above approach # Function to count the number# of N-digit numbers with no# equal pair of consecutive digitsdef count(N): # Base Case if (N == 1): print(10); return; dp = [[0 for i in range(10)] for j in range(N)] for i in range(1,10): dp[0][i] = 1; for i in range(1, N): # Calculate the total count # of valid (i-1)-digit numbers temp = 0; for j in range(10): temp += dp[i - 1][j]; # Update dp table for j in range(10): dp[i][j] = temp - dp[i - 1][j]; # Calculate the count of # required N-digit numbers ans = 0; for i in range(10): ans += dp[N - 1][i]; print(ans); # Driver Codeif __name__ == '__main__': N = 2; count(N); # This code is contributed by Amit Katiyar",
"e": 34438,
"s": 33585,
"text": null
},
{
"code": "// C# Program to implement// of the above approachusing System;class GFG{ // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { Console.WriteLine(10); return; } int [,]dp = new int[N, 10]; for (int i = 1; i < 10; i++) dp[0, i] = 1; for (int i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers int temp = 0; for (int j = 0; j < 10; j++) temp += dp[i - 1, j]; // Update [,]dp table for (int j = 0; j < 10; j++) dp[i, j] = temp - dp[i - 1, j]; } // Calculate the count of // required N-digit numbers int ans = 0; for (int i = 0; i < 10; i++) ans += dp[N - 1, i]; Console.WriteLine(ans); } // Driver Code public static void Main(String[] args) { int N = 2; count(N); }} // This code is contributed by sapnasingh4991",
"e": 35415,
"s": 34438,
"text": null
},
{
"code": "<script> // Javascript program to implement// the above approach // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsfunction count(N){ // Base Case if (N == 1) { document.write((10) + \"<br>\"); return; } var dp = Array.from(Array(N), ()=> Array(10).fill(0)); for(var i = 1; i < 10; i++) dp[0][i] = 1; for(var i = 1; i < N; i++) { // Calculate the total count // of valid (i-1)-digit numbers var temp = 0; for(var j = 0; j < 10; j++) temp += dp[i - 1][j]; // Update dp[][] table for(var j = 0; j < 10; j++) dp[i][j] = temp - dp[i - 1][j]; } // Calculate the count of // required N-digit numbers var ans = 0; for(var i = 0; i < 10; i++) ans += dp[N - 1][i]; document.write(ans);} // Driver Codevar N = 2; count(N); // This code is contributed by noob2000 </script>",
"e": 36425,
"s": 35415,
"text": null
},
{
"code": null,
"e": 36428,
"s": 36425,
"text": "81"
},
{
"code": null,
"e": 36503,
"s": 36430,
"text": "Time Complexity: O(N), where N is the given integerAuxiliary Space: O(N)"
},
{
"code": null,
"e": 36688,
"s": 36503,
"text": "Efficient Approach: The above approach can be further optimized by observing that for any N digit number, the required answer is 9N which can be calculated using Binary Exponentiation."
},
{
"code": null,
"e": 36739,
"s": 36688,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 36743,
"s": 36739,
"text": "C++"
},
{
"code": null,
"e": 36748,
"s": 36743,
"text": "Java"
},
{
"code": null,
"e": 36756,
"s": 36748,
"text": "Python3"
},
{
"code": null,
"e": 36759,
"s": 36756,
"text": "C#"
},
{
"code": null,
"e": 36770,
"s": 36759,
"text": "Javascript"
},
{
"code": "// C++ program to implement// the above approach#include <bits/stdc++.h>using namespace std; // Iterative Function to calculate// (x^y) % mod in O(log y)int power(int x, int y, int mod){ // Initialize result int res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res;} // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsvoid count(int N){ // Base Case if (N == 1) { cout << 10 << endl; return; } cout << (power(9, N, 1000000007)) << endl;} // Driver Codeint main(){ int N = 3; count(N); return 0;} // This code is contributed by sapnasingh4991",
"e": 37719,
"s": 36770,
"text": null
},
{
"code": "// Java Program to implement// of the above approachimport java.util.*; class GFG { // Iterative Function to calculate // (x^y) % mod in O(log y) static int power(int x, int y, int mod) { // Initialize result int res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res; } // Function to count the number // of N-digit numbers with no // equal pair of consecutive digits public static void count(int N) { // Base Case if (N == 1) { System.out.println(10); return; } System.out.println(power(9, N, 1000000007)); } // Driver Code public static void main(String[] args) { int N = 3; count(N); }}",
"e": 38846,
"s": 37719,
"text": null
},
{
"code": "# Python3 Program to implement# of the above approach # Iterative Function to calculate# (x^y) % mod in O(log y)def power(x, y, mod): # Initialize result res = 1; # Update x if x >= mod x = x % mod; # If x is divisible by mod if (x == 0): return 0; while (y > 0): # If y is odd, multiply x # with result if ((y & 1) == 1): res = (res * x) % mod; # y must be even now # y = y / 2 y = y >> 1; x = (x * x) % mod; return res; # Function to count the number# of N-digit numbers with no# equal pair of consecutive digitsdef count(N): # Base Case if (N == 1): print(10); return; print(power(9, N, 1000000007)); # Driver Codeif __name__ == '__main__': N = 3; count(N); # This code is contributed by Rohit_ranjan",
"e": 39683,
"s": 38846,
"text": null
},
{
"code": "// C# program to implement// of the above approachusing System; class GFG{ // Iterative Function to calculate// (x^y) % mod in O(log y)static int power(int x, int y, int mod){ // Initialize result int res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res;} // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitspublic static void count(int N){ // Base Case if (N == 1) { Console.WriteLine(10); return; } Console.WriteLine(power(9, N, 1000000007));} // Driver Codepublic static void Main(String[] args){ int N = 3; count(N);}} // This code is contributed by 29AjayKumar",
"e": 40692,
"s": 39683,
"text": null
},
{
"code": "<script> // Javascript program to implement// of the above approach // Iterative Function to calculate// (x^y) % mod in O(log y)function power(x, y, mod){ // Initialize result let res = 1; // Update x if x >= mod x = x % mod; // If x is divisible by mod if (x == 0) return 0; while (y > 0) { // If y is odd, multiply x // with result if ((y & 1) == 1) res = (res * x) % mod; // y must be even now // y = y / 2 y = y >> 1; x = (x * x) % mod; } return res;} // Function to count the number// of N-digit numbers with no// equal pair of consecutive digitsfunction count(N){ // Base Case if (N == 1) { document.write(10); return; } document.write(power(9, N, 1000000007));} // Driver Codelet N = 3; count(N); // this code is contributed by shivanisinghss2110 </script>",
"e": 41605,
"s": 40692,
"text": null
},
{
"code": null,
"e": 41609,
"s": 41605,
"text": "729"
},
{
"code": null,
"e": 41659,
"s": 41611,
"text": "Time Complexity: O(logN)Space Complexity: O(1) "
},
{
"code": null,
"e": 41672,
"s": 41659,
"text": "princi singh"
},
{
"code": null,
"e": 41682,
"s": 41672,
"text": "rutvik_56"
},
{
"code": null,
"e": 41697,
"s": 41682,
"text": "sapnasingh4991"
},
{
"code": null,
"e": 41709,
"s": 41697,
"text": "29AjayKumar"
},
{
"code": null,
"e": 41722,
"s": 41709,
"text": "Rohit_ranjan"
},
{
"code": null,
"e": 41737,
"s": 41722,
"text": "amit143katiyar"
},
{
"code": null,
"e": 41743,
"s": 41737,
"text": "itsok"
},
{
"code": null,
"e": 41752,
"s": 41743,
"text": "noob2000"
},
{
"code": null,
"e": 41771,
"s": 41752,
"text": "shivanisinghss2110"
},
{
"code": null,
"e": 41783,
"s": 41771,
"text": "maths-power"
},
{
"code": null,
"e": 41797,
"s": 41783,
"text": "number-digits"
},
{
"code": null,
"e": 41817,
"s": 41797,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 41830,
"s": 41817,
"text": "Mathematical"
},
{
"code": null,
"e": 41840,
"s": 41830,
"text": "Searching"
},
{
"code": null,
"e": 41850,
"s": 41840,
"text": "Searching"
},
{
"code": null,
"e": 41870,
"s": 41850,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 41883,
"s": 41870,
"text": "Mathematical"
},
{
"code": null,
"e": 41981,
"s": 41883,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 42041,
"s": 41981,
"text": "Optimal Substructure Property in Dynamic Programming | DP-2"
},
{
"code": null,
"e": 42092,
"s": 42041,
"text": "Maximum sum such that no two elements are adjacent"
},
{
"code": null,
"e": 42113,
"s": 42092,
"text": "Min Cost Path | DP-6"
},
{
"code": null,
"e": 42148,
"s": 42113,
"text": "Optimal Binary Search Tree | DP-24"
},
{
"code": null,
"e": 42204,
"s": 42148,
"text": "Maximum Subarray Sum using Divide and Conquer algorithm"
},
{
"code": null,
"e": 42264,
"s": 42204,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 42279,
"s": 42264,
"text": "C++ Data Types"
},
{
"code": null,
"e": 42322,
"s": 42279,
"text": "Set in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 42364,
"s": 42322,
"text": "Program to find GCD or HCF of two numbers"
}
] |
Newbies Guide to Python-igraph. A simple guide to common functions of... | by Vijini Mallawaarachchi | Towards Data Science
|
Handling graph/network data has become much easier at present with the availability of different modules. For Python, two such modules are networkx and igraph. I have been playing around with the python-igraph module for some time and I have found it very useful in my research. I have used python-igraph in my latest published tool GraphBin. In this article, I will introduce you to some basic functions of python-igraph which can make implementation much easier with just a single call.
You can read my previous article Visualising Graph Data with Python-igraph where I have introduced the python-igraph module.
towardsdatascience.com
In this article, we will go through the functions that perform the following tasks.
Creating a graph
Visualising the graph
Obtaining information on the vertices and edges of the graph
Obtaining adjacent vertices to a vertex
Breadth-first search (BFS) from a vertex
Determining shortest paths from a vertex
Obtain the Laplacian matrix of a graph
Determine the maximum flow between the source and target vertices
Let us start by plotting an example graph as shown in Figure 1.
This is a directed graph that contains 5 vertices. Assuming python-igraph is installed and its names are imported (e.g. from igraph import *), we can create this graph as follows.
# Create a directed graphg = Graph(directed=True)# Add 5 verticesg.add_vertices(5)
The vertices will be labelled from 0 to 4, and we will add the 7 weighted edges (0,2), (0,1), (0,3), (1,2), (1,3), (2,4) and (3,4).
# Add ids and labels to verticesfor i in range(len(g.vs)): g.vs[i]["id"]= i g.vs[i]["label"]= str(i)# Add edgesg.add_edges([(0,2),(0,1),(0,3),(1,2),(1,3),(2,4),(3,4)])# Add weights and edge labelsweights = [8,6,3,5,6,4,9]g.es['weight'] = weightsg.es['label'] = weights
Now that we have created our graph, let’s visualise it using the plot function of igraph.
visual_style = {}out_name = "graph.png"# Set bbox and marginvisual_style["bbox"] = (400,400)visual_style["margin"] = 27# Set vertex coloursvisual_style["vertex_color"] = 'white'# Set vertex sizevisual_style["vertex_size"] = 45# Set vertex lable sizevisual_style["vertex_label_size"] = 22# Don't curve the edgesvisual_style["edge_curved"] = False# Set the layoutmy_layout = g.layout_lgl()visual_style["layout"] = my_layout# Plot the graphplot(g, out_name, **visual_style)
Running this code will result in a graph as shown in Figure 1. You can colour the vertices if you want as shown in Figure 2, by adding g.vs["color"] = ["red", "green", "blue", "yellow", "orange"] instead of the line visual_style["vertex_color"] = 'white'. You can read more on visualising graphs and analysing them from my previous article Visualising Graph Data with Python-igraph.
You can obtain some basic information about the graph such as the number of vertices, the number of edges, whether the graph is directed or not, the maximum degree and the adjacency matrix of the graph by calling the functions vcount(), ecount(), is_directed(), maxdegree() and get_adjacency().
print("Number of vertices in the graph:", g.vcount())print("Number of edges in the graph", g.ecount())print("Is the graph directed:", g.is_directed())print("Maximum degree in the graph:", g.maxdegree())print("Adjacency matrix:\n", g.get_adjacency())
The output will be as follows.
Number of vertices in the graph: 5Number of edges in the graph 7Is the graph directed: TrueMaximum degree in the graph: 3Adjacency matrix: [[0, 1, 1, 1, 0] [0, 0, 1, 1, 0] [0, 0, 0, 0, 1] [0, 0, 0, 0, 1] [0, 0, 0, 0, 0]]
You can obtain the adjacent vertices of a given vertex using the function neighbors(vid, mode=ALL). If we consider vertex 0, the adjacent vertices or neighbours will be vertices 1, 2 and 3.
print(g.neighbors(0, mode=ALL))
To perform a breadth-first search starting from a vertex, you can use the function bfs(vid, mode=OUT).
print(g.bfs(0)[0])
The vertex IDs returned will be [0, 1, 2, 3, 4].
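As a library-free sketch of what this traversal does, here is a plain-Python breadth-first search over the same graph. The adjacency list below is our own transcription of the example's edges, and we assume neighbours are visited in ascending order, which reproduces the order shown above.

```python
from collections import deque

# Out-neighbour adjacency list of the example graph,
# with neighbours listed in ascending order.
adj = {0: [1, 2, 3], 1: [2, 3], 2: [4], 3: [4], 4: []}

def bfs_order(start):
    """Return the vertices in breadth-first order from `start`."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

print(bfs_order(0))  # [0, 1, 2, 3, 4]
```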
You can obtain the shortest paths from a given vertex using the function get_shortest_paths(vid). You can also specify the destination vertex as get_shortest_paths(vid, to=destination).
print(g.get_shortest_paths(0))
The above line will result in all the shortest paths to all the vertices starting from vertex 0 which will be [[0], [0, 1], [0, 2], [0, 3], [0, 2, 4]].
print(g.get_shortest_paths(0, to=4))
The above line will return the shortest paths from vertex 0 to vertex 4 which is [[0, 2, 4]].
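Note that the calls above treat the graph as unweighted; get_shortest_paths also accepts a weights argument (e.g. g.get_shortest_paths(0, to=4, weights=g.es['weight'])) to account for the edge weights. As a library-free illustration of the weighted case, here is a small Dijkstra sketch over the same weighted edge list (the function and variable names are our own):

```python
import heapq

# Weighted out-edges of the example graph: (source, target, weight).
edges = [(0, 2, 8), (0, 1, 6), (0, 3, 3), (1, 2, 5),
         (1, 3, 6), (2, 4, 4), (3, 4, 9)]

adj = {v: [] for v in range(5)}
for u, v, w in edges:
    adj[u].append((v, w))

def dijkstra(src):
    """Shortest weighted distances from `src` to every vertex."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra(0)[4])  # 12
```

With the weights taken into account, the shortest distance from vertex 0 to vertex 4 is 12 (for instance via 0 -> 3 -> 4, with 3 + 9), rather than the unweighted hop-count path shown above.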
You can obtain the Laplacian matrix of the graph using the laplacian() function.
print("Laplacian matrix of a graph:\n",g.laplacian())
The output will be as follows.
Laplacian matrix of a graph: [[3, -1, -1, -1, 0], [0, 2, -1, -1, 0], [0, 0, 1, 0, -1], [0, 0, 0, 1, -1], [0, 0, 0, 0, 0]]
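For this directed graph, the matrix returned matches L = D - A, where D is the diagonal matrix of out-degrees and A is the adjacency matrix printed earlier. A quick pure-Python check of that relation:

```python
# Adjacency matrix of the example graph (from get_adjacency() above).
A = [[0, 1, 1, 1, 0],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]

n = len(A)
out_degree = [sum(row) for row in A]  # row sums give the out-degrees

# L = D - A, where D is the diagonal out-degree matrix.
L = [[(out_degree[i] if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]

for row in L:
    print(row)
```

The printed rows reproduce the Laplacian returned by igraph above.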
Let us assume that the source vertex is vertex 0 and the target vertex is vertex 4 in our example. We can determine the maximum flow and the minimal cut (according to the max-flow min-cut theorem) between the source and target using the function maxflow(source, target, weights).
maxflow = g.maxflow(0,4,weights)print(maxflow.value)print(maxflow.flow)print(maxflow.cut)print(maxflow.partition)
The above line will output a Graph flow object with the maximum value of the flow as 13, the flow values [4.0, 6.0, 3.0, 0.0, 6.0, 4.0, 9.0] for each edge, the minimal cut on the edge ids [5, 6] and the partition between vertices as [[0, 1, 2, 3], [4]].
Figure 3 denotes the values related to the maximal flow and minimal cut with the cut marked in purple colour. The values in purple on the edges are the flow values. We can see that two partitions are formed by the cut with vertices 0, 1, 2 and 3 in one partition and vertex 4 in the other partition.
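The value 13 agrees with the capacity of the cut: the cut edges (2,4) and (3,4) have weights 4 and 9, summing to 13. As a library-free sketch (our own illustration, not igraph's internal algorithm), a compact Edmonds-Karp implementation reproduces the same maximum flow on this edge list:

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max flow. `edges` is a list of (u, v, capacity)."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path left
            return flow
        # Find the bottleneck capacity along the path.
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        # Augment along the path, updating residual capacities.
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

edges = [(0, 2, 8), (0, 1, 6), (0, 3, 3), (1, 2, 5),
         (1, 3, 6), (2, 4, 4), (3, 4, 9)]
print(max_flow(5, edges, 0, 4))  # 13
```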
You can read more about graph algorithms from my article 10 Graph Algorithms Visually Explained.
medium.com
If you come across a function that you do not know how to use, you can simply print its docstring which will have a description of the function’s inputs, outputs and what it does. For example,
print(g.bfs.__doc__)print(g.laplacian.__doc__)print(g.maxflow.__doc__)
Personally, I find the python-igraph to be a very useful module in my work. You can easily represent graphs and perform different analysis tasks using the provided functions.
I have attached the jupyter notebook containing all the examples and code I have used in this article. Feel free to play around with it and hope you can make use of igraph in your work as well.
Thank you for reading!
Cheers!
[1] Python-igraph manual at https://igraph.org/python/doc/igraph-module.html
[2] The example graph was adapted from https://www.youtube.com/watch?v=u6FkNw16VJA
|
[
{
"code": null,
"e": 662,
"s": 171,
"text": "Handling graph/network data has become much easier at present with the availability of different modules. For python, two of such modules are networkx and igraph. I have been playing around with the python-igraph module for some time and I have found it very useful in my research. I have used python-graph in my latest published tool GraphBin. In this article, I will introduce you to some basic functions of python-igraph which can make implementation much easier with just a single call."
},
{
"code": null,
"e": 787,
"s": 662,
"text": "You can read my previous article Visualising Graph Data with Python-igraph where I have introduced the python-igraph module."
},
{
"code": null,
"e": 810,
"s": 787,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 894,
"s": 810,
"text": "In this article, we will go through the functions that perform the following tasks."
},
{
"code": null,
"e": 1214,
"s": 894,
"text": "Creating a graphVisualising the graphObtaining information on the vertices and edges of the graphObtaining adjacent vertices to a vertexBreadth-first search (BFS) from a vertexDetermining shortest paths from a vertexObtain the Laplacian matrix of a graphDetermine the maximum flow between the source and target vertices"
},
{
"code": null,
"e": 1231,
"s": 1214,
"text": "Creating a graph"
},
{
"code": null,
"e": 1253,
"s": 1231,
"text": "Visualising the graph"
},
{
"code": null,
"e": 1314,
"s": 1253,
"text": "Obtaining information on the vertices and edges of the graph"
},
{
"code": null,
"e": 1354,
"s": 1314,
"text": "Obtaining adjacent vertices to a vertex"
},
{
"code": null,
"e": 1395,
"s": 1354,
"text": "Breadth-first search (BFS) from a vertex"
},
{
"code": null,
"e": 1436,
"s": 1395,
"text": "Determining shortest paths from a vertex"
},
{
"code": null,
"e": 1475,
"s": 1436,
"text": "Obtain the Laplacian matrix of a graph"
},
{
"code": null,
"e": 1541,
"s": 1475,
"text": "Determine the maximum flow between the source and target vertices"
},
{
"code": null,
"e": 1605,
"s": 1541,
"text": "Let us start by plotting an example graph as shown in Figure 1."
},
{
"code": null,
"e": 1693,
"s": 1605,
"text": "This is a directed graph that contains 5 vertices. We can create this graph as follows."
},
{
"code": null,
"e": 1776,
"s": 1693,
"text": "# Create a directed graphg = Graph(directed=True)# Add 5 verticesg.add_vertices(5)"
},
{
"code": null,
"e": 1895,
"s": 1776,
"text": "The vertices will be labelled from 0 to 4 and the 7 weighted edges (0,2), (0,1), (0,3), (1,2), (1,3), (2,4) and (3,4)."
},
{
"code": null,
"e": 2170,
"s": 1895,
"text": "# Add ids and labels to verticesfor i in range(len(g.vs)): g.vs[i][\"id\"]= i g.vs[i][\"label\"]= str(i)# Add edgesg.add_edges([(0,2),(0,1),(0,3),(1,2),(1,3),(2,4),(3,4)])# Add weights and edge labelsweights = [8,6,3,5,6,4,9]g.es['weight'] = weightsg.es['label'] = weights"
},
{
"code": null,
"e": 2260,
"s": 2170,
"text": "Now that we have created our graph, let’s visualise it using the plot function of igraph."
},
{
"code": null,
"e": 2731,
"s": 2260,
"text": "visual_style = {}out_name = \"graph.png\"# Set bbox and marginvisual_style[\"bbox\"] = (400,400)visual_style[\"margin\"] = 27# Set vertex coloursvisual_style[\"vertex_color\"] = 'white'# Set vertex sizevisual_style[\"vertex_size\"] = 45# Set vertex lable sizevisual_style[\"vertex_label_size\"] = 22# Don't curve the edgesvisual_style[\"edge_curved\"] = False# Set the layoutmy_layout = g.layout_lgl()visual_style[\"layout\"] = my_layout# Plot the graphplot(g, out_name, **visual_style)"
},
{
"code": null,
"e": 3114,
"s": 2731,
"text": "Running this code will result in a graph as shown in Figure 1. You can colour the vertices if you want as shown in Figure 2, by adding g.vs[“color”] = [\"red\", \"green\", \"blue\", \"yellow\", \"orange\"] instead of the line visual_style[“vertex_color”] = ‘white’. You can read more on visualising graphs and analysing them from my previous article Visualising Graph Data with Python-igraph."
},
{
"code": null,
"e": 3409,
"s": 3114,
"text": "You can obtain some basic information about the graph such as the number of vertices, the number of edges, whether the graph is directed or not, the maximum degree and the adjacency matrix of the graph by calling the functions vcount(), ecount(), is_directed(), maxdegree() and get_adjacency()."
},
{
"code": null,
"e": 3659,
"s": 3409,
"text": "print(\"Number of vertices in the graph:\", g.vcount())print(\"Number of edges in the graph\", g.ecount())print(\"Is the graph directed:\", g.is_directed())print(\"Maximum degree in the graph:\", g.maxdegree())print(\"Adjacency matrix:\\n\", g.get_adjacency())"
},
{
"code": null,
"e": 3690,
"s": 3659,
"text": "The output will be as follows."
},
{
"code": null,
"e": 3911,
"s": 3690,
"text": "Number of vertices in the graph: 5Number of edges in the graph 7Is the graph directed: TrueMaximum degree in the graph: 3Adjacency matrix: [[0, 1, 1, 1, 0] [0, 0, 1, 1, 0] [0, 0, 0, 0, 1] [0, 0, 0, 0, 1] [0, 0, 0, 0, 0]]"
},
{
"code": null,
"e": 4101,
"s": 3911,
"text": "You can obtain the adjacent vertices of a given vertex using the function neighbors(vid, mode=ALL). If we consider vertex 0, the adjacent vertices or neighbours will be vertices 1, 2 and 3."
},
{
"code": null,
"e": 4133,
"s": 4101,
"text": "print(g.neighbors(0, mode=ALL))"
},
{
"code": null,
"e": 4236,
"s": 4133,
"text": "To perform a breadth-first search starting from a vertex, you can use the function bfs(vid, mode=OUT)."
},
{
"code": null,
"e": 4255,
"s": 4236,
"text": "print(g.bfs(0)[0])"
},
{
"code": null,
"e": 4304,
"s": 4255,
"text": "The vertex IDs returned will be [0, 1, 2, 3, 4]."
},
{
"code": null,
"e": 4489,
"s": 4304,
"text": "You can obtain the shortest paths from a given vertex using the function get_shortest_paths(vid). You want to specify the destination vertex as get_shortest_paths(vid, to=destination)."
},
{
"code": null,
"e": 4520,
"s": 4489,
"text": "print(g.get_shortest_paths(0))"
},
{
"code": null,
"e": 4672,
"s": 4520,
"text": "The above line will result in all the shortest paths to all the vertices starting from vertex 0 which will be [[0], [0, 1], [0, 2], [0, 3], [0, 2, 4]]."
},
{
"code": null,
"e": 4709,
"s": 4672,
"text": "print(g.get_shortest_paths(0, to=4))"
},
{
"code": null,
"e": 4803,
"s": 4709,
"text": "The above line will return the shortest paths from vertex 0 to vertex 4 which is [[0, 2, 4]]."
},
{
"code": null,
"e": 4884,
"s": 4803,
"text": "You can obtain the Laplacian matrix of the graph using the laplacian() function."
},
{
"code": null,
"e": 4938,
"s": 4884,
"text": "print(\"Laplacian matrix of a graph:\\n\",g.laplacian())"
},
{
"code": null,
"e": 4969,
"s": 4938,
"text": "The output will be as follows."
},
{
"code": null,
"e": 5091,
"s": 4969,
"text": "Laplacian matrix of a graph: [[3, -1, -1, -1, 0], [0, 2, -1, -1, 0], [0, 0, 1, 0, -1], [0, 0, 0, 1, -1], [0, 0, 0, 0, 0]]"
},
{
"code": null,
"e": 5371,
"s": 5091,
"text": "Let us assume that the source vertex as vertex 0 and the target vertex as vertex 4 in our example. We can determine the maximum flow and the minimal cut (according to the max-flow min-cut theorem) between the source and target using the function maxflow(source, target, weights)."
},
{
"code": null,
"e": 5485,
"s": 5371,
"text": "maxflow = g.maxflow(0,4,weights)print(maxflow.value)print(maxflow.flow)print(maxflow.cut)print(maxflow.partition)"
},
{
"code": null,
"e": 5741,
"s": 5485,
"text": "The above line will output a Graph flow object with the maximum value of the flow as 13, the flow values [4.0, 6.0, 3.0, 0.0, 6.0, 4.0, 9.0] for each vertex, the minimal cut on the edge ids [5, 6] and the partition between vertices as [[0, 1, 2, 3], [4]]."
},
{
"code": null,
"e": 6041,
"s": 5741,
"text": "Figure 3 denotes the values related to the maximal flow and minimal cut with the cut marked in purple colour. The values in purple on the edges are the flow values. We can see that two partitions are formed by the cut with vertices 0, 1, 2 and 3 in one partition and vertex 4 in the other partition."
},
{
"code": null,
"e": 6138,
"s": 6041,
"text": "You can read more about graph algorithms from my article 10 Graph Algorithms Visually Explained."
},
{
"code": null,
"e": 6149,
"s": 6138,
"text": "medium.com"
},
{
"code": null,
"e": 6342,
"s": 6149,
"text": "If you come across a function that you do not know how to use, you can simply print its docstring which will have a description of the function’s inputs, outputs and what it does. For example,"
},
{
"code": null,
"e": 6413,
"s": 6342,
"text": "print(g.bfs.__doc__)print(g.laplacian.__doc__)print(g.maxflow.__doc__)"
},
{
"code": null,
"e": 6588,
"s": 6413,
"text": "Personally, I find the python-igraph to be a very useful module in my work. You can easily represent graphs and perform different analysis tasks using the provided functions."
},
{
"code": null,
"e": 6782,
"s": 6588,
"text": "I have attached the jupyter notebook containing all the examples and code I have used in this article. Feel free to play around with it and hope you can make use of igraph in your work as well."
},
{
"code": null,
"e": 6805,
"s": 6782,
"text": "Thank you for reading!"
},
{
"code": null,
"e": 6813,
"s": 6805,
"text": "Cheers!"
},
{
"code": null,
"e": 6890,
"s": 6813,
"text": "[1] Python-igraph manual at https://igraph.org/python/doc/igraph-module.html"
}
] |
Implement ImageDecoder API in Android - GeeksforGeeks
|
20 Dec, 2021
We use a lot of Bitmaps and drawables in Android. Handling bitmap conversions necessitates a significant amount of code, and we frequently encounter the infamous “Out of Memory” exception. The BitmapFactory is used to manipulate Bitmaps, but with Android P, we have ImageDecoder, which allows us to convert images like PNG, JPEG, and so on to Drawables or Bitmaps. We will now go through all the below-mentioned topics in detail:
Understanding the source and loading Images
Decoding from the Drawable folder
Decoding from a URI
Overriding the source’s default settings
Decoding GIFs and WebP
Error handling
Before we can decode anything, we must first map the image source. The source is equivalent to the path the ImageDecoder accepts. To create a source, we use:
val gfg = ImageDecoder.createSource(File(file_path_of_your_image))
In this case, generating a source can happen on any thread. However, decoding should be done on a background thread.
val your_drawable = ImageDecoder.decodeDrawable(your_source)
imageView.setImageDrawable(your_drawable)
We’re using the decodeDrawable method to acquire a drawable, but we’ll use the decodeBitmap function to get a bitmap from the specified source.
val bmp:Bitmap = ImageDecoder.decodeBitmap(your_source_file)
The preceding use-case was to generate a source from a file path and decode it to Drawable or Bitmap. Similarly, we can construct a source from ByteBuffer as follows:
val file = ImageDecoder.createSource(byte_files)
Consider a scenario in which we have PNGs or JPEGs in our project’s drawable folder. Then we can make a source from the resource folder, for example,
val file = ImageDecoder.createSource(resources, R.drawable.gfg_logo)
We’re using ImageDecoder to locate the PNG in the drawable folder and create a source for it. Now we may decode the source to a Drawable or a Bitmap.
val file: Drawable = ImageDecoder.decodeDrawable(source_file)
setImageDrawable and setImageBitmap can be used to set these on an ImageView. Similarly, if we have a URI and wish to make a source out of it, we use a content resolver to do so.
val file_source = ImageDecoder.createSource(contentResolver, image_uri)
Finally, if we need to make a source from a file in the assets folder, we use:
val file_name = ImageDecoder.createSource(assetManager, some_asset)
We can override the default settings we get from the image while creating a source; this lets us alter the default configuration. We add the listener using OnHeaderDecodedListener.
Kotlin
val gfgListner: OnHeaderDecodedListener = object : OnHeaderDecodedListener { override fun onHeaderDecoded(decoder: ImageDecoder, info: ImageInfo, source: ImageDecoder.Source) { // Your logic here. }}val img_drawable = ImageDecoder.decodeDrawable(your_source, listener)
The decoder allows us to do transformations, while the information holds all of the original image’s data, such as Mime type, size, and whether or not it is animating, as well as the source. Consider what we would put inside the onHeaderDecoded function if we wanted to resize the image.
decoder.setTargetSize(50,50)
If we have GIFs and WebP files, we can load them with all of the frames’ animations and transitions using ImageDecoder alone, without the need for a third-party library. Let’s say we have a GIF file as a source from the assets folder. To decode it into a Drawable and begin the animation:
val img_source = ImageDecoder.createSource(assetManager, your_asset_file)
Kotlin
val img_drawable = ImageDecoder.decodeDrawable(img_source)if (img_drawable is AnimatedImageDrawable) { img_drawable.start()}
We may encounter issues while decoding the source. To detect them, we call setOnPartialImageListener on the decoder inside OnHeaderDecodedListener, as seen below.
Kotlin
val gfgListner: OnHeaderDecodedListener = object : OnHeaderDecodedListener { override fun onHeaderDecoded(decoder: ImageDecoder, info: ImageInfo, your_source: ImageDecoder.Source) { decoder.setOnPartialImageListener { exception -> Log.d("GfG Decoder", exception.error.toString()); true } }}
We get the exception here, inside setOnPartialImageListener, and that’s where we can log the error. When we want to log an error, exception.error may return one of the following errors:
SOURCE_EXCEPTION – an exception occurred while reading the source.
SOURCE_INCOMPLETE – the source data was incomplete.
SOURCE_MALFORMED_DATA – the encoded data was malformed and contained an error.
We’re returning true in this case, which tells the decoder to deliver the partial image decoded up to the point of the error. If it returned false, however, decoding would abort and the exception would be thrown.
We can apply some processing once the image has been loaded, such as adding a custom background, etc.
OnHeaderDecodedListener
We use it for processing in the following ways:
Kotlin
val gfgListner: OnHeaderDecodedListener = object : OnHeaderDecodedListener { override fun onHeaderDecoded(decoder: ImageDecoder, info: ImageInfo, source: ImageDecoder.Source) { decoder.setPostProcessor { canvas -> /* draw custom effects on the canvas here */ PixelFormat.UNKNOWN } }}
Here, inside setPostProcessor, we obtain the canvas on which we perform our changes and apply custom effects once the image has been decoded and loaded; the lambda returns a PixelFormat constant describing the result. This is how ImageDecoder can be used in your application. To run it in your project, you’ll need Android 9 (Pie) or higher.
Picked
Android
Android
Flutter - Custom Bottom Navigation Bar
Retrofit with Kotlin Coroutine in Android
Android Listview in Java with Example
GridView in Android with Example
How to Post Data to API using Retrofit in Android?
How to Read Data from SQLite Database in Android?
How to Change the Background Color After Clicking the Button in Android?
Fragment Lifecycle in Android
Animation in Android with Example
How to Add Image to Drawable Folder in Android Studio?
|
[
{
"code": null,
"e": 25116,
"s": 25088,
"text": "\n20 Dec, 2021"
},
{
"code": null,
"e": 25558,
"s": 25116,
"text": "We use a lot of Bitmaps and drawables in Android. Handling bitmap conversions necessitates a significant amount of code, and we frequently encounter the favorite error, the “Out of Memory” exception. The BitmapFactory is used to manipulate Bitmaps, but with Android P, we have ImageDecoder, which allows us to convert images like PNG, JPEG, and so on to Drawables or Bitmaps. We will now go through all the below-mentioned topics in detail: "
},
{
"code": null,
"e": 25716,
"s": 25558,
"text": "Understanding the source and loading ImagesDecoding from the Drawable folderURI Overriding the source’s default settingsDecoding GIFs and WebP Error Handling"
},
{
"code": null,
"e": 25760,
"s": 25716,
"text": "Understanding the source and loading Images"
},
{
"code": null,
"e": 25794,
"s": 25760,
"text": "Decoding from the Drawable folder"
},
{
"code": null,
"e": 25839,
"s": 25794,
"text": "URI Overriding the source’s default settings"
},
{
"code": null,
"e": 25877,
"s": 25839,
"text": "Decoding GIFs and WebP Error Handling"
},
{
"code": null,
"e": 26032,
"s": 25877,
"text": "Before we can decode anything, we must first map the image source. The source is equivalent to the ImageDecoder’s accepted path. We use to create a source"
},
{
"code": null,
"e": 26093,
"s": 26032,
"text": "val gfg = ImageDecoder.createSource(file_path_of_your_image)"
},
{
"code": null,
"e": 26212,
"s": 26093,
"text": "In this case, generating a source can happen on any thread. However, decoding should be done in the background thread."
},
{
"code": null,
"e": 26259,
"s": 26212,
"text": "imageView.setImageDrawable(your_drawable_file)"
},
{
"code": null,
"e": 26403,
"s": 26259,
"text": "We’re using the decodeDrawable method to acquire a drawable, but we’ll use the decodeBitmap function to get a bitmap from the specified source."
},
{
"code": null,
"e": 26464,
"s": 26403,
"text": "val bmp:Bitmap = ImageDecoder.decodeBitmap(your_source_file)"
},
{
"code": null,
"e": 26631,
"s": 26464,
"text": "The preceding use-case was to generate a source from a file path and decode it to Drawable or Bitmap. Similarly, we can construct a source from ByteBuffer as follows:"
},
{
"code": null,
"e": 26680,
"s": 26631,
"text": "val file = ImageDecoder.createSource(byte_files)"
},
{
"code": null,
"e": 26830,
"s": 26680,
"text": "Consider a scenario in which we have PNGs or JPEGs in our project’s drawable folder. Then we can make a source from the resource folder, for example,"
},
{
"code": null,
"e": 26899,
"s": 26830,
"text": "val file = ImageDecoder.createSource(resources, R.drawable.gfg_logo)"
},
{
"code": null,
"e": 27058,
"s": 26899,
"text": "We’re using ImageDecoder to get the PNG icon ic location from the drawable and create a source for it. Now we may use Drawable or Bitmap to decode the source."
},
{
"code": null,
"e": 27120,
"s": 27058,
"text": "val file: Drawable = ImageDecoder.decodeDrawable(source_file)"
},
{
"code": null,
"e": 27300,
"s": 27120,
"text": "SetImageDrawable and setImageBitmap can be used to convert these to ImageView. Similarly, if we have a URI and wish to make a source out of it, we use a content resolver to do so."
},
{
"code": null,
"e": 27372,
"s": 27300,
"text": "val file_source = ImageDecoder.createSource(contentResolver, image_uri)"
},
{
"code": null,
"e": 27447,
"s": 27372,
"text": "Finally, if we need to make a source from a file from an asset, we use it."
},
{
"code": null,
"e": 27515,
"s": 27447,
"text": "val file_name = ImageDecoder.createSource(assetManager, some_asset)"
},
{
"code": null,
"e": 27707,
"s": 27515,
"text": "We can override the default settings we get from the Image while creating a source. We utilize this to alter the default configuration. We use the onHeaderDecodedListener to add the listener."
},
{
"code": null,
"e": 27714,
"s": 27707,
"text": "Kotlin"
},
{
"code": "val gfgListner: OnHeaderDecodedListener = object : OnHeaderDecodedListener { override fun onHeaderDecoded(decoder: ImageDecoder, info: ImageInfo, source: ImageDecoder.Source) { // Your logic here. }}val img_drawable = ImageDecoder.decodeDrawable(your_source, listener)",
"e": 27993,
"s": 27714,
"text": null
},
{
"code": null,
"e": 28281,
"s": 27993,
"text": "The decoder allows us to do transformations, while the information holds all of the original image’s data, such as Mime type, size, and whether or not it is animating, as well as the source. Consider what we would put inside the onHeaderDecoded function if we wanted to resize the image."
},
{
"code": null,
"e": 28310,
"s": 28281,
"text": "decoder.setTargetSize(50,50)"
},
{
"code": null,
"e": 28607,
"s": 28310,
"text": "If we have GIFs and WebP files, we can load them with all of the frames’ animations and transitions using ImageDecoder alone, without the need for a third-party library. Let’s say we have a Gif file as a source from the assets folder. So, in order to decode it in Drawable and begin the animation"
},
{
"code": null,
"e": 28681,
"s": 28607,
"text": "val img_source = ImageDecoder.createSource(assetManager, your_asset_file)"
},
{
"code": null,
"e": 28688,
"s": 28681,
"text": "Kotlin"
},
{
"code": "val img_drawable = ImageDecoder.decodeDrawable(img_source)if (img_source is AnimatedImageDrawable) { drawable.start()}",
"e": 28810,
"s": 28688,
"text": null
},
{
"code": null,
"e": 28984,
"s": 28810,
"text": "We may encounter issues while decoding the source. To detect issues, we must set the decoder argument in OnHeaderDecodedListener to setOnPartialImageListener, as seen below."
},
{
"code": null,
"e": 28991,
"s": 28984,
"text": "Kotlin"
},
{
"code": "val gfgListner: OnHeaderDecodedListener = object : OnHeaderDecodedListener { override fun onHeaderDecoded(decoder: ImageDecoder, info: ImageInfo, your_source: ImageDecoder.Source) { decoder.setOnPartialImageListener {exception-> Log.d(\"GfG Decoder\",exception.error.toString())) true } }}",
"e": 29321,
"s": 28991,
"text": null
},
{
"code": null,
"e": 29507,
"s": 29321,
"text": "We get the exception here, inside setOnPartialImageListener, and that’s where we can log the error. When we want to log an error, exception.error may return one of the following errors:"
},
{
"code": null,
"e": 29547,
"s": 29507,
"text": "SOURCE EXCEPTION is a source exception."
},
{
"code": null,
"e": 29596,
"s": 29547,
"text": "SOURCE INCOMPLETE — the source data was missing."
},
{
"code": null,
"e": 29675,
"s": 29596,
"text": "SOURCE MALFORMED DATA – the encoded data was malformed and contained an error."
},
{
"code": null,
"e": 29880,
"s": 29675,
"text": "We’re returning true in this case, which means the listeners should only see the created image until the exception occurs. If it returns false, however, it will abort the execution and throw an exception."
},
{
"code": null,
"e": 29982,
"s": 29880,
"text": "We can apply some processing once the image has been loaded, such as adding a custom background, etc."
},
{
"code": null,
"e": 30006,
"s": 29982,
"text": "OnHeaderDecodedListener"
},
{
"code": null,
"e": 30054,
"s": 30006,
"text": "We use it for processing in the following ways:"
},
{
"code": null,
"e": 30061,
"s": 30054,
"text": "Kotlin"
},
{
"code": "val gfgListner: OnHeaderDecodedListener = object : OnHeaderDecodedListener { override fun onHeaderDecoded(decoder: ImageDecoder, info: ImageInfo, source: ImageDecoder.Source) { decoder.setPostProcessor { canvas -> } }}",
"e": 30314,
"s": 30061,
"text": null
},
{
"code": null,
"e": 30586,
"s": 30314,
"text": "Here, under setOnProcessor, we obtain the canvas on which we will perform our changes and apply custom effects when the Image has been decoded and loaded. This is how ImageDecoder can be used in your application. To run in your project, you’ll need Android Pie or higher."
},
{
"code": null,
"e": 30593,
"s": 30586,
"text": "Picked"
},
{
"code": null,
"e": 30601,
"s": 30593,
"text": "Android"
},
{
"code": null,
"e": 30609,
"s": 30601,
"text": "Android"
},
{
"code": null,
"e": 30707,
"s": 30609,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30746,
"s": 30707,
"text": "Flutter - Custom Bottom Navigation Bar"
},
{
"code": null,
"e": 30788,
"s": 30746,
"text": "Retrofit with Kotlin Coroutine in Android"
},
{
"code": null,
"e": 30826,
"s": 30788,
"text": "Android Listview in Java with Example"
},
{
"code": null,
"e": 30859,
"s": 30826,
"text": "GridView in Android with Example"
},
{
"code": null,
"e": 30910,
"s": 30859,
"text": "How to Post Data to API using Retrofit in Android?"
},
{
"code": null,
"e": 30960,
"s": 30910,
"text": "How to Read Data from SQLite Database in Android?"
},
{
"code": null,
"e": 31033,
"s": 30960,
"text": "How to Change the Background Color After Clicking the Button in Android?"
},
{
"code": null,
"e": 31063,
"s": 31033,
"text": "Fragment Lifecycle in Android"
},
{
"code": null,
"e": 31097,
"s": 31063,
"text": "Animation in Android with Example"
}
] |
Intersection Point in Y Shapped Linked Lists | Practice | GeeksforGeeks
|
Given two singly linked lists of size N and M, write a program to get the point where two linked lists intersect each other.
Example 1:
Input:
LinkList1 = 3->6->9->common
LinkList2 = 10->common
common = 15->30->NULL
Output: 15
Explanation:
Example 2:
Input:
Linked List 1 = 4->1->common
Linked List 2 = 5->6->1->common
common = 8->4->5->NULL
Output: 8
Explanation:
4 5
| |
1 6
\ /
8 ----- 1
|
4
|
5
|
NULL
Your Task:
You don't need to read input or print anything. The task is to complete the function intersectPoint() which takes the pointer to the head of linklist1(head1) and linklist2(head2) as input parameters and returns the data value of the node where the two linked lists intersect. If the linked lists do not merge at any point, it should return -1.
Challenge : Try to solve the problem without using any extra space.
Expected Time Complexity: O(N+M)
Expected Auxiliary Space: O(1)
Constraints:
1 ≤ N + M ≤ 2*105
-1000 ≤ value ≤ 1000
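A common way to hit the expected O(N+M) time and O(1) auxiliary space above is the two-pointer switching technique: walk both lists in parallel, and when a pointer runs off the end of its list, restart it at the head of the other list; after at most two passes the pointers meet at the intersection node (or both become null). Below is a minimal, self-contained Java sketch — the small `Node` class is our own stand-in for the driver's node type, assumed to expose `data` and `next`:

```java
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

public class IntersectDemo {

    // Returns the data of the first shared node, or -1 if the lists
    // never merge. Each pointer walks at most length1 + length2 nodes,
    // so the two pointers align on the second pass.
    static int intersectPoint(Node head1, Node head2) {
        if (head1 == null || head2 == null) return -1;
        Node a = head1, b = head2;
        while (a != b) {
            a = (a == null) ? head2 : a.next;
            b = (b == null) ? head1 : b.next;
        }
        return (a == null) ? -1 : a.data;
    }

    public static void main(String[] args) {
        // Build Example 1: 3->6->9 and 10 both merging into 15->30->null
        Node common = new Node(15);
        common.next = new Node(30);
        Node head1 = new Node(3);
        head1.next = new Node(6);
        head1.next.next = new Node(9);
        head1.next.next.next = common;
        Node head2 = new Node(10);
        head2.next = common;
        System.out.println(intersectPoint(head1, head2)); // prints 15
    }
}
```

Because each pointer traverses at most N+M nodes, the pointers line up without ever computing the list lengths explicitly.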
0
tusharvatsa2in 11 hours
JAVA Solution
class Intersect
{
//Function to find intersection point in Y shaped Linked Lists.
int intersectPoint(Node headA, Node headB)
{
Node temp1 = headA;
Node temp2 = headB;
int Acount = 0;
int Bcount = 0;
while(temp1 != null){
Acount++;
temp1 = temp1.next;
}
while(temp2 != null){
Bcount++;
temp2 = temp2.next;
}
int diff = Math.abs(Acount - Bcount);
if(Acount > Bcount){
temp1 = headA;
temp2 = headB;
while(diff != 0){
temp1 = temp1.next;
diff--;
}
}else{
temp1 = headA;
temp2 = headB;
while(diff != 0){
temp2 = temp2.next;
diff--;
}
}
while(temp1 != null && temp2 != null){
if(temp1 == temp2){
return temp1.data;
}
temp1 = temp1.next;
temp2 = temp2.next;
}
return -1;
}
}
0
tusharvatsa2
This comment was deleted.
0
pritamhazra21in 5 hours
Java HashSet
int intersectPoint(Node head1, Node head2)
{
HashSet<Node> set = new HashSet<>();
while(head1 != null){
set.add(head1);
head1 = head1.next;
}
while(head2!=null){
if(set.contains(head2)){
return head2.data;
}
head2 = head2.next;
}
return -1;
}
+1
yuvrajranabtcse205 days ago
{ //c++ code easy soln
    Node *go = head1;
    while (go->next != NULL) go = go->next;
    go->next = head2;
    go = head1;
    while (go->next != NULL) {
        if (go->data > 1000) return (go->data - 10000);
        go->data += 10000;
        go = go->next;
    }
    return -1;
}
+1
2019012561 week ago
// Optimized cpp code
int intersectPoint(Node* head1, Node* head2) {
    int l1 = 0, l2 = 0;
    Node* temp1 = head1;
    Node* temp2 = head2;
    while (temp1 != NULL) { l1++; temp1 = temp1->next; }
    while (temp2 != NULL) { l2++; temp2 = temp2->next; }
    int d = 0;
    if (l1 > l2) { d = l1 - l2; temp1 = head1; temp2 = head2; }
    else if (l1 < l2) { d = l2 - l1; temp1 = head2; temp2 = head1; }
    else { d = 0; temp1 = head1; temp2 = head2; }
    for (int i = 0; i < d; i++) {
        temp1 = temp1->next;
    }
    while (temp1 != temp2) {
        temp1 = temp1->next;
        temp2 = temp2->next;
    }
    return temp1->data;
}
0
2019sushilkumarkori1 week ago
// } Driver Code Ends
//Function to find intersection point in Y shaped Linked Lists.
int intersectPoint(Node* head1, Node* head2)
{
// Your Code
int n1=0,n2=0;
Node* temp1 = head1;
Node* temp2 = head2;
while(temp1!=0){
n1++;
temp1=temp1->next;
}
while(temp2!=0){
n2++;
temp2=temp2->next;
}
temp1 = head1;
temp2 = head2;
if(n1>=n2){
int i=0;
while(temp1!=NULL){
if(temp1==temp2){
return temp1->data;
}
i++;
temp1=temp1->next;
if(i>n1-n2){
temp2 = temp2->next;
}
}
}
else{
int i=0;
while(temp2!=NULL){
if(temp1==temp2){
return temp2->data;
}
i++;
temp2=temp2->next;
if(i>n2-n1){
temp1=temp1->next;
}
}
}
return -1;
}
0
swapniltayal4222 weeks ago
int length(Node *head) {
    int l = 0;
    Node* temp = head;
    while (temp != NULL) {
        l++;
        temp = temp->next;
    }
    return l;
}

int intersectPoint(Node* head1, Node* head2) {
    int l1 = length(head1);
    int l2 = length(head2);
    int d = 0;
    Node* ptr1;
    Node* ptr2;
    if (l1 > l2) {
        d = l1 - l2;
        ptr1 = head1;
        ptr2 = head2;
    } else {
        d = l2 - l1;
        ptr1 = head2;
        ptr2 = head1;
    }
    while (d) {
        ptr1 = ptr1->next;
        if (ptr1 == NULL) {
            return -1;
        }
        d--;
    }
    while (ptr1 != NULL && ptr2 != NULL) {
        if (ptr1 == ptr2) {
            return ptr1->data;
        }
        ptr1 = ptr1->next;
        ptr2 = ptr2->next;
    }
    return -1;
}
0
09himanshusah2 weeks ago
JavaScript Solution
class Solution {
//Function to find intersection point in Y shaped Linked Lists.
intersectPoint(head1, head2)
{
//your code here
let a = head1;
let b = head2;
while(a != b) {
if(a == null) a = head2;
else a = a.next;
if(b == null) b = head1;
else b = b.next;
}
return a.data;
}
}
+5
shreyassakariya20022 weeks ago
int intersectPoint(Node* head1, Node* head2) {
    Node *p1 = head1, *p2 = head2;
    while (1) {
        if (p1 == p2) return p1->data;
        p1 = p1->next;
        p2 = p2->next;
        if (p1 == NULL and p2 == NULL) return -1;
        if (p1 == NULL) p1 = head2;
        if (p2 == NULL) p2 = head1;
    }
    return -1;
}
+2
wolfofsv2 weeks ago
find len1 and len2
skip abs(len1 - len2) in bigger LL
traverse the two LLs till u find intersection if there is any
O(n) time and O(1) space
int FindLength(Node* head){
int len = 0;
while(head != NULL){
head = head -> next;
len++;
}
return len;
}
Node* SkipLL(Node* head, int len){
for(int i = 0; i < len; i++){
head = head -> next;
}
return head;
}
int intersectPoint(Node* head1, Node* head2)
{
// Your Code Here
int len1 = FindLength(head1), len2 = FindLength(head2);
if(len1 > len2){
head1 = SkipLL(head1, len1 - len2);
}
else{
head2 = SkipLL(head2, len2 - len1);
}
while(head1 != head2 && head1 != NULL){
head1 = head1 -> next;
head2 = head2 -> next;
}
if(head1 != NULL){
return head1 -> data;
}
return -1;
}
|
[
{
"code": null,
"e": 363,
"s": 238,
"text": "Given two singly linked lists of size N and M, write a program to get the point where two linked lists intersect each other."
},
{
"code": null,
"e": 376,
"s": 365,
"text": "Example 1:"
},
{
"code": null,
"e": 482,
"s": 376,
"text": "Input:\nLinkList1 = 3->6->9->common\nLinkList2 = 10->common\ncommon = 15->30->NULL\nOutput: 15\nExplanation:\n\n"
},
{
"code": null,
"e": 493,
"s": 482,
"text": "Example 2:"
},
{
"code": null,
"e": 731,
"s": 493,
"text": "Input: \nLinked List 1 = 4->1->common\nLinked List 2 = 5->6->1->common\ncommon = 8->4->5->NULL\nOutput: 8\nExplanation: \n\n4 5\n| |\n1 6\n \\ /\n 8 ----- 1 \n |\n 4\n |\n 5\n |\n NULL "
},
{
"code": null,
"e": 1143,
"s": 731,
"text": "Your Task:\nYou don't need to read input or print anything. The task is to complete the function intersetPoint() which takes the pointer to the head of linklist1(head1) and linklist2(head2) as input parameters and returns data value of a node where two linked lists intersect. If linked list do not merge at any point, then it should return -1.\nChallenge : Try to solve the problem without using any extra space."
},
{
"code": null,
"e": 1209,
"s": 1145,
"text": "Expected Time Complexity: O(N+M)\nExpected Auxiliary Space: O(1)"
},
{
"code": null,
"e": 1263,
"s": 1211,
"text": "Constraints:\n1 ≤ N + M ≤ 2*105\n-1000 ≤ value ≤ 1000"
},
{
"code": null,
"e": 1267,
"s": 1265,
"text": "0"
},
{
"code": null,
"e": 1291,
"s": 1267,
"text": "tusharvatsa2in 11 hours"
},
{
"code": null,
"e": 1305,
"s": 1291,
"text": "JAVA Solution"
},
{
"code": null,
"e": 2375,
"s": 1307,
"text": "class Intersect\n{\n //Function to find intersection point in Y shaped Linked Lists.\n\tint intersectPoint(Node headA, Node headB)\n\t{\n Node temp1 = headA;\n Node temp2 = headB;\n int Acount = 0;\n int Bcount = 0;\n while(temp1 != null){\n Acount++;\n temp1 = temp1.next;\n }\n while(temp2 != null){\n Bcount++;\n temp2 = temp2.next;\n }\n int diff = Math.abs(Acount - Bcount);\n if(Acount > Bcount){\n temp1 = headA;\n temp2 = headB;\n while(diff != 0){\n temp1 = temp1.next;\n diff--;\n }\n }else{\n temp1 = headA;\n temp2 = headB;\n while(diff != 0){\n temp2 = temp2.next;\n diff--;\n }\n }\n while(temp1 != null && temp2 != null){\n if(temp1 == temp2){\n return temp1.data;\n }\n temp1 = temp1.next;\n temp2 = temp2.next;\n }\n return -1;\n\t}\n}\n"
},
{
"code": null,
"e": 2377,
"s": 2375,
"text": "0"
},
{
"code": null,
"e": 2390,
"s": 2377,
"text": "tusharvatsa2"
},
{
"code": null,
"e": 2416,
"s": 2390,
"text": "This comment was deleted."
},
{
"code": null,
"e": 2418,
"s": 2416,
"text": "0"
},
{
"code": null,
"e": 2442,
"s": 2418,
"text": "pritamhazra21in 5 hours"
},
{
"code": null,
"e": 2868,
"s": 2442,
"text": "Java HashSet\n\n\tint intersectPoint(Node head1, Node head2)\n\t{\n HashSet<Node> set = new HashSet<>();\n \n while(head1 != null){\n set.add(head1);\n head1 = head1.next;\n }\n \n while(head2!=null){\n if(set.contains(head2)){\n return head2.data;\n }\n head2 = head2.next;\n }\n \n return -1;\n\t}"
},
{
"code": null,
"e": 2871,
"s": 2868,
"text": "+1"
},
{
"code": null,
"e": 2899,
"s": 2871,
"text": "yuvrajranabtcse205 days ago"
},
{
"code": null,
"e": 3138,
"s": 2899,
"text": "{//c++ code easy soln Node *go=head1; while(go->next!=NULL)go=go->next; go->next=head2; go=head1; while(go->next!=NULL){ if(go->data>1000) return (go->data-10000); go->data+=10000; go=go->next;} return -1;}"
},
{
"code": null,
"e": 3141,
"s": 3138,
"text": "+1"
},
{
"code": null,
"e": 3161,
"s": 3141,
"text": "2019012561 week ago"
},
{
"code": null,
"e": 3183,
"s": 3161,
"text": "// Optimized cpp code"
},
{
"code": null,
"e": 3860,
"s": 3183,
"text": "int intersectPoint(Node* head1, Node* head2){ int l1 = 0; int l2=0; Node* temp1 = head1; Node* temp2 = head2; while(temp1!=NULL){ l1++; temp1=temp1->next; } while(temp2!=NULL){ l2++; temp2=temp2->next; } int d = 0; if(l1 > l2 ){ d = l1-l2; temp1 = head1; temp2 = head2; } else if (l1<l2){ d = l2-l1; temp1 = head2; temp2 = head1; } else{ d = 0; temp1 = head1; temp2 = head2; } for(int i=0;i<d;i++){ temp1=temp1->next; } while(temp1!=temp2){ temp1=temp1->next; temp2=temp2->next; } return temp1->data;}"
},
{
"code": null,
"e": 3862,
"s": 3860,
"text": "0"
},
{
"code": null,
"e": 3892,
"s": 3862,
"text": "2019sushilkumarkori1 week ago"
},
{
"code": null,
"e": 4860,
"s": 3892,
"text": "\n// } Driver Code Ends\n//Function to find intersection point in Y shaped Linked Lists.\nint intersectPoint(Node* head1, Node* head2)\n{\n // Your Code \n int n1=0,n2=0;\n Node* temp1 = head1;\n Node* temp2 = head2;\n while(temp1!=0){\n n1++;\n temp1=temp1->next;\n }\n while(temp2!=0){\n n2++;\n temp2=temp2->next;\n }\n temp1 = head1;\n temp2 = head2;\n if(n1>=n2){\n int i=0;\n while(temp1!=NULL){\n if(temp1==temp2){\n return temp1->data;\n }\n i++;\n temp1=temp1->next;\n if(i>n1-n2){\n temp2 = temp2->next;\n }\n }\n }\n else{\n int i=0;\n while(temp2!=NULL){\n if(temp1==temp2){\n return temp2->data;\n }\n i++;\n temp2=temp2->next;\n if(i>n2-n1){\n temp1=temp1->next;\n }\n }\n }\n return -1;\n}\n\n"
},
{
"code": null,
"e": 4862,
"s": 4860,
"text": "0"
},
{
"code": null,
"e": 4889,
"s": 4862,
"text": "swapniltayal4222 weeks ago"
},
{
"code": null,
"e": 5018,
"s": 4889,
"text": "int length(Node *head){ int l=0; Node* temp = head; while(temp != NULL){ l++; temp = temp->next; }return l;}"
},
{
"code": null,
"e": 5116,
"s": 5018,
"text": "int intersectPoint(Node* head1, Node* head2){ int l1 = length(head1); int l2 = length(head2);"
},
{
"code": null,
"e": 5158,
"s": 5116,
"text": " int d = 0; Node* ptr1; Node* ptr2;"
},
{
"code": null,
"e": 5306,
"s": 5158,
"text": " if (l1 > l2){ d = l1 - l2; ptr1 = head1; ptr2 = head2; }else{ d = l2 - l1; ptr1 = head2; ptr2 = head1; }"
},
{
"code": null,
"e": 5413,
"s": 5306,
"text": " while(d){ ptr1 = ptr1->next; if (ptr1 == NULL){ return -1; } d--; }"
},
{
"code": null,
"e": 5570,
"s": 5413,
"text": " while (ptr1 != NULL && ptr2 != NULL){ if (ptr1 == ptr2){ return ptr1->data; } ptr1 = ptr1->next; ptr2 = ptr2->next; }"
},
{
"code": null,
"e": 5585,
"s": 5570,
"text": " return -1;}"
},
{
"code": null,
"e": 5587,
"s": 5585,
"text": "0"
},
{
"code": null,
"e": 5612,
"s": 5587,
"text": "09himanshusah2 weeks ago"
},
{
"code": null,
"e": 5632,
"s": 5612,
"text": "JavaScript Solution"
},
{
"code": null,
"e": 6037,
"s": 5632,
"text": "class Solution {\n //Function to find intersection point in Y shaped Linked Lists.\n intersectPoint(head1, head2)\n {\n //your code here\n let a = head1;\n let b = head2;\n while(a != b) {\n if(a == null) a = head2;\n else a = a.next;\n \n if(b == null) b = head1;\n else b = b.next;\n }\n return a.data;\n }\n}"
},
{
"code": null,
"e": 6040,
"s": 6037,
"text": "+5"
},
{
"code": null,
"e": 6071,
"s": 6040,
"text": "shreyassakariya20022 weeks ago"
},
{
"code": null,
"e": 6412,
"s": 6071,
"text": "int intersectPoint(Node* head1, Node* head2){ Node *p1=head1,*p2=head2; while(1) { if(p1==p2) return p1->data; p1=p1->next; p2=p2->next; if(p1==NULL and p2==NULL) return -1; if(p1==NULL) p1=head2; if(p2==NULL) p2=head1; } return -1;}"
},
{
"code": null,
"e": 6415,
"s": 6412,
"text": "+2"
},
{
"code": null,
"e": 6435,
"s": 6415,
"text": "wolfofsv2 weeks ago"
},
{
"code": null,
"e": 6454,
"s": 6435,
"text": "find len1 and len2"
},
{
"code": null,
"e": 6489,
"s": 6454,
"text": "skip abs(len1 - len2) in bigger LL"
},
{
"code": null,
"e": 6551,
"s": 6489,
"text": "traverse the two LLs till u find intersection if there is any"
},
{
"code": null,
"e": 6576,
"s": 6551,
"text": "O(n) time and O(1) space"
},
{
"code": null,
"e": 7292,
"s": 6576,
"text": "int FindLength(Node* head){\n int len = 0;\n while(head != NULL){\n head = head -> next;\n len++;\n }\n return len;\n}\nNode* SkipLL(Node* head, int len){\n for(int i = 0; i < len; i++){\n head = head -> next;\n }\n return head;\n}\n\n\nint intersectPoint(Node* head1, Node* head2)\n{\n // Your Code Here\n int len1 = FindLength(head1), len2 = FindLength(head2);\n if(len1 > len2){\n head1 = SkipLL(head1, len1 - len2);\n }\n else{\n head2 = SkipLL(head2, len2 - len1);\n }\n while(head1 != head2 && head1 != NULL){\n head1 = head1 -> next;\n head2 = head2 -> next;\n }\n if(head1 != NULL){\n return head1 -> data;\n }\n return -1;\n \n}"
},
{
"code": null,
"e": 7438,
"s": 7292,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 7474,
"s": 7438,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 7484,
"s": 7474,
"text": "\nProblem\n"
},
{
"code": null,
"e": 7494,
"s": 7484,
"text": "\nContest\n"
},
{
"code": null,
"e": 7557,
"s": 7494,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 7705,
"s": 7557,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 7913,
"s": 7705,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 8019,
"s": 7913,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
What is the difference between throw e and throw new Exception(e) in catch block in java?
|
An exception is an issue (a run-time error) that occurs during the execution of a program. Here are some example scenarios −
If you have an array of size 10 and a line in your code tries to access the 11th element of this array.
If you are trying to divide a number by 0 (which results in infinity, and the JVM doesn’t know how to evaluate it).
When an exception occurs, the program terminates abruptly at the line that caused the exception, leaving the remaining part of the program unexecuted. To prevent this, you need to handle exceptions.
There are two types of exceptions in java.
Unchecked Exception − An unchecked exception is the one which occurs at the time of execution. These are also called as Runtime Exceptions. These include programming bugs, such as logic errors or improper use of an API. Runtime exceptions are ignored at the time of compilation.
Checked Exception − A checked exception is an exception that occurs at the time of compilation, these are also called as compile time exceptions. These exceptions cannot simply be ignored at the time of compilation; the programmer should take care of (handle) these exceptions.
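The practical difference shows up at compile time: a method may throw an unchecked exception without declaring it, whereas a checked exception must either be declared with the throws clause or handled. A small illustrative sketch (the method names here are ours, not from any API):

```java
import java.io.IOException;

public class CheckedVsUnchecked {

    // Unchecked: compiles without any throws clause.
    static int divide(int a, int b) {
        return a / b; // may throw ArithmeticException at runtime
    }

    // Checked: the compiler forces us to declare (or handle) IOException.
    static void mayFail(boolean fail) throws IOException {
        if (fail) {
            throw new IOException("simulated I/O failure");
        }
    }

    public static void main(String[] args) {
        try {
            divide(10, 0);
        } catch (ArithmeticException e) {
            System.out.println("Caught unchecked: " + e.getMessage());
        }
        try {
            mayFail(true);
        } catch (IOException e) {
            System.out.println("Caught checked: " + e.getMessage());
        }
    }
}
```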
To handle exceptions Java provides a try-catch block mechanism.
A try/catch block is placed around the code that might generate an exception. Code within a try/catch block is referred to as protected code.
try {
// Protected code
} catch (ExceptionName e1) {
// Catch block
}
When an exception is raised inside a try block, instead of terminating the program the JVM stores the exception details in the exception stack and proceeds to the catch block.
A catch statement involves declaring the type of exception you are trying to catch. If an exception occurs in the try block, the catch block (or blocks) that follows the try is verified.
If the type of exception that occurred is listed in a catch block, the exception is passed to the catch block much as an argument is passed into a method parameter.
import java.io.File;
import java.io.FileInputStream;
public class Test {
public static void main(String args[]){
System.out.println("Hello");
try{
File file =new File("my_file");
FileInputStream fis = new FileInputStream(file);
}catch(Exception e){
System.out.println("Given file path is not found");
}
}
}
Given file path is not found
When an exception is caught in a catch block, you can re-throw it using the throw keyword (which is used to throw exception objects).
While re-throwing exceptions, you can throw the same exception as it is, without adjusting it −
try {
int result = (arr[a])/(arr[b]);
System.out.println("Result of "+arr[a]+"/"+arr[b]+": "+result);
}catch(ArithmeticException e) {
throw e;
}
Or, wrap it within a new exception and throw it. When you wrap a caught exception within another exception and throw it, this is known as exception chaining or exception wrapping; by doing this you can adjust your exception, throwing a higher-level exception while maintaining abstraction.
try {
int result = (arr[a])/(arr[b]);
System.out.println("Result of "+arr[a]+"/"+arr[b]+": "+result);
}catch(ArrayIndexOutOfBoundsException e) {
throw new IndexOutOfBoundsException();
}
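One caveat about the wrap-and-throw shown above: throw new IndexOutOfBoundsException() discards the original exception, so its stack trace is lost. If you want the higher-level exception but still need the original for debugging, you can attach it as the cause — here via initCause(), since IndexOutOfBoundsException has no constructor that accepts a cause. A sketch (the method name and message are ours, for illustration):

```java
public class WrapWithCause {

    static int first(int[] arr, int i) {
        try {
            return arr[i];
        } catch (ArrayIndexOutOfBoundsException e) {
            IndexOutOfBoundsException wrapper =
                new IndexOutOfBoundsException("position " + i + " is out of range");
            wrapper.initCause(e); // keep the original exception reachable via getCause()
            throw wrapper;
        }
    }

    public static void main(String[] args) {
        try {
            first(new int[] {10, 20, 30}, 7);
        } catch (IndexOutOfBoundsException e) {
            System.out.println(e.getMessage());
            System.out.println("caused by: " + e.getCause()); // the original exception
        }
    }
}
```

The higher-level abstraction is preserved for callers, while getCause() (and the stack trace printed by the JVM) still points back to the original ArrayIndexOutOfBoundsException.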
In the following Java example our code in demoMethod() might throw ArrayIndexOutOfBoundsException and ArithmeticException. We are catching these two exceptions in two different catch blocks.
In the catch blocks we are re-throwing both exceptions: one by wrapping it within a higher-level exception, and the other one directly.
import java.util.Arrays;
import java.util.Scanner;
public class RethrowExample {
public void demoMethod() {
Scanner sc = new Scanner(System.in);
int[] arr = {10, 20, 30, 2, 0, 8};
System.out.println("Array: "+Arrays.toString(arr));
System.out.println("Choose numerator and denominator(not 0) from this array (enter positions 0 to 5)");
int a = sc.nextInt();
int b = sc.nextInt();
try {
int result = (arr[a])/(arr[b]);
System.out.println("Result of "+arr[a]+"/"+arr[b]+": "+result);
}catch(ArrayIndexOutOfBoundsException e) {
throw new IndexOutOfBoundsException();
}catch(ArithmeticException e) {
throw e;
}
}
public static void main(String [] args) {
new RethrowExample().demoMethod();
}
}
Array: [10, 20, 30, 2, 0, 8]
Choose numerator and denominator(not 0) from this array (enter positions 0 to 5)
0
4
Exception in thread "main" java.lang.ArithmeticException: / by zero
at myPackage.RethrowExample.demoMethod(RethrowExample.java:16)
at myPackage.RethrowExample.main(RethrowExample.java:25)
Array: [10, 20, 30, 2, 0, 8]
Choose numerator and denominator(not 0) from this array (enter positions 0 to 5)
124
5
Exception in thread "main" java.lang.IndexOutOfBoundsException
at myPackage.RethrowExample.demoMethod(RethrowExample.java:17)
at myPackage.RethrowExample.main(RethrowExample.java:23)
|
[
{
"code": null,
"e": 1182,
"s": 1062,
"text": "An exception is an issue (run time error) occurred during the execution of a program. Here are some example scenarios −"
},
{
"code": null,
"e": 1285,
"s": 1182,
"text": "If you have an array of size 10 if a line in your code tries to access the 11th element in this array."
},
{
"code": null,
"e": 1403,
"s": 1285,
"text": "If you are trying to divide a number with 0 which (results to infinity and JVM doesn’t understand how to valuate it)."
},
{
"code": null,
"e": 1594,
"s": 1403,
"text": "When exception occurs the program terminates abruptly at the line that caused exception, leaving the remaining part of the program unexecuted. To prevent this, you need to handle exceptions."
},
{
"code": null,
"e": 1637,
"s": 1594,
"text": "There are two types of exceptions in java."
},
{
"code": null,
"e": 1916,
"s": 1637,
"text": "Unchecked Exception − An unchecked exception is the one which occurs at the time of execution. These are also called as Runtime Exceptions. These include programming bugs, such as logic errors or improper use of an API. Runtime exceptions are ignored at the time of compilation."
},
{
"code": null,
"e": 2194,
"s": 1916,
"text": "Checked Exception − A checked exception is an exception that occurs at the time of compilation, these are also called as compile time exceptions. These exceptions cannot simply be ignored at the time of compilation; the programmer should take care of (handle) these exceptions."
},
{
"code": null,
"e": 2258,
"s": 2194,
"text": "To handle exceptions Java provides a try-catch block mechanism."
},
{
"code": null,
"e": 2400,
"s": 2258,
"text": "A try/catch block is placed around the code that might generate an exception. Code within a try/catch block is referred to as protected code."
},
{
"code": null,
"e": 2476,
"s": 2400,
"text": "try {\n // Protected code\n} catch (ExceptionName e1) {\n // Catch block\n}"
},
{
"code": null,
"e": 2645,
"s": 2476,
"text": "When an exception raised inside a try block, instead of terminating the program JVM stores the exception details in the exception stack and proceeds to the catch block."
},
{
"code": null,
"e": 2832,
"s": 2645,
"text": "A catch statement involves declaring the type of exception you are trying to catch. If an exception occurs in the try block, the catch block (or blocks) that follows the try is verified."
},
{
"code": null,
"e": 2997,
"s": 2832,
"text": "If the type of exception that occurred is listed in a catch block, the exception is passed to the catch block much as an argument is passed into a method parameter."
},
{
"code": null,
"e": 3361,
"s": 2997,
"text": "import java.io.File;\nimport java.io.FileInputStream;\npublic class Test {\n public static void main(String args[]){\n System.out.println(\"Hello\");\n try{\n File file =new File(\"my_file\");\n FileInputStream fis = new FileInputStream(file);\n }catch(Exception e){\n System.out.println(\"Given file path is not found\");\n }\n }\n}"
},
{
"code": null,
"e": 3390,
"s": 3361,
"text": "Given file path is not found"
},
{
"code": null,
"e": 3528,
"s": 3390,
"text": "When an exception is cached in a catch block, you can re-throw it using the throw keyword (which is used to throw the exception objects)."
},
{
"code": null,
"e": 3625,
"s": 3528,
"text": "While re-throwing exceptions you can throw the same exception as it is without adjusting it as −"
},
{
"code": null,
"e": 3779,
"s": 3625,
"text": "try {\n int result = (arr[a])/(arr[b]);\n System.out.println(\"Result of \"+arr[a]+\"/\"+arr[b]+\": \"+result);\n}catch(ArithmeticException e) {\n throw e;\n}"
},
{
"code": null,
"e": 4068,
"s": 3779,
"text": "Or, wrap it within a new exception and throw it. When you wrap a cached exception with in another exception and throw it, it is known as exception chaining or, exception wrapping, by doing this you can adjust your exception, throwing higher level of exception maintaining the abstraction."
},
{
"code": null,
"e": 4263,
"s": 4068,
"text": "try {\n int result = (arr[a])/(arr[b]);\n System.out.println(\"Result of \"+arr[a]+\"/\"+arr[b]+\": \"+result);\n}catch(ArrayIndexOutOfBoundsException e) {\n throw new IndexOutOfBoundsException();\n}"
},
{
"code": null,
"e": 4454,
"s": 4263,
"text": "In the following Java example our code in demoMethod() might throw ArrayIndexOutOfBoundsException and ArithmeticException. We are catching these two exceptions in two different catch blocks."
},
{
"code": null,
"e": 4577,
"s": 4454,
"text": "In the catch blocks we are re-throwing both exceptions one by wrapping within higher exception and the other one directly."
},
{
"code": null,
"e": 5381,
"s": 4577,
"text": "import java.util.Arrays;\nimport java.util.Scanner;\npublic class RethrowExample {\n public void demoMethod() {\n Scanner sc = new Scanner(System.in);\n int[] arr = {10, 20, 30, 2, 0, 8};\n System.out.println(\"Array: \"+Arrays.toString(arr));\n System.out.println(\"Choose numerator and denominator(not 0) from this array (enter positions 0 to 5)\");\n int a = sc.nextInt();\n int b = sc.nextInt();\n try {\n int result = (arr[a])/(arr[b]);\n System.out.println(\"Result of \"+arr[a]+\"/\"+arr[b]+\": \"+result);\n }catch(ArrayIndexOutOfBoundsException e) {\n throw new IndexOutOfBoundsException();\n }catch(ArithmeticException e) {\n throw e;\n }\n }\n public static void main(String [] args) {\n new RethrowExample().demoMethod();\n }\n}"
},
{
"code": null,
"e": 5690,
"s": 5381,
"text": "Array: [10, 20, 30, 2, 0, 8]\nChoose numerator and denominator(not 0) from this array (enter positions 0 to 5)\n0\n4\n\nException in thread \"main\" java.lang.ArithmeticException: / by zero\n at myPackage.RethrowExample.demoMethod(RethrowExample.java:16)\n at myPackage.RethrowExample.main(RethrowExample.java:25)"
},
{
"code": null,
"e": 5995,
"s": 5690,
"text": "Array: [10, 20, 30, 2, 0, 8]\nChoose numerator and denominator(not 0) from this array (enter positions 0 to 5)\n124\n5\nException in thread \"main\" java.lang.IndexOutOfBoundsException\n at myPackage.RethrowExample.demoMethod(RethrowExample.java:17)\n at myPackage.RethrowExample.main(RethrowExample.java:23)"
}
] |
StringBuilder.Chars[] Property in C# - GeeksforGeeks
|
28 Jan, 2019
StringBuilder.Chars[Int32] Property is used to get or set the character at the specified character position in this instance.
Syntax:
public char this[int index] { get; set; }
Here, the index is the position of the character.
Property Value: This property returns the Unicode character at position index.
Exceptions:
ArgumentOutOfRangeException: If the index is outside the bounds of this instance while setting a character.
IndexOutOfRangeException: If the index is outside the bounds of this instance while getting a character.
Below programs illustrate the use of the above-discussed property:
Example 1:
// C# program to demonstrate
// the Chars[Int32] Property
using System;
using System.Text;

class GFG {

    // Main Method
    public static void Main(String[] args)
    {
        // create a StringBuilder object
        // with a String passed as parameter
        StringBuilder str = new StringBuilder("GeeksforGeeks");

        // print string
        Console.WriteLine("String is " + str.ToString());

        // loop through string
        // and print every Character
        for (int i = 0; i < str.Length; i++) {

            // get char at position i
            char ch = str[i];

            // print char
            Console.WriteLine("Char at position " + i + " is " + ch);
        }
    }
}
String is GeeksforGeeks
Char at position 0 is G
Char at position 1 is e
Char at position 2 is e
Char at position 3 is k
Char at position 4 is s
Char at position 5 is f
Char at position 6 is o
Char at position 7 is r
Char at position 8 is G
Char at position 9 is e
Char at position 10 is e
Char at position 11 is k
Char at position 12 is s
Example 2:
// C# program to demonstrate
// the Chars[Int32] Property
using System;
using System.Text;

class GFG {

    // Main Method
    public static void Main(String[] args)
    {
        // create a StringBuilder object
        StringBuilder str = new StringBuilder();

        // add the String to the StringBuilder object
        str.Append("Geek");

        // get char at position 1
        char ch = str[1];

        // print the result
        Console.WriteLine("StringBuilder Object contains = " + str);
        Console.WriteLine("Character at Position 1 in StringBuilder = " + ch);
    }
}
StringBuilder Object contains = Geek
Character at Position 1 in StringBuilder = e
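Both examples above exercise only the getter. Since Chars[] also has a setter (with the distinct out-of-range behavior listed under Exceptions), the same get/set indexer pattern can be sketched with Java's StringBuilder for comparison; charAt and setCharAt are close analogues. This Java parallel is an illustration added here, not part of the original article.

```java
public class IndexerAnalogue {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("Geek");

        // "get" path: like C#'s  char ch = str[1];
        char ch = sb.charAt(1);
        System.out.println("Character at position 1 = " + ch);

        // "set" path: like C#'s  str[0] = 'X';  (the setter is not shown in the article)
        sb.setCharAt(0, 'X');
        System.out.println("After set: " + sb);

        // an out-of-range index throws, mirroring the C# exceptions above
        try {
            sb.charAt(42);
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("Index 42 is out of bounds");
        }
    }
}
```

As in C#, both reading and writing through the index are constant-time operations; only the exception types differ between the two libraries.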
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.text.stringbuilder.chars?view=netframework-4.7.2
|
[
{
"code": null,
"e": 24302,
"s": 24274,
"text": "\n28 Jan, 2019"
},
{
"code": null,
"e": 24428,
"s": 24302,
"text": "StringBuilder.Chars[Int32] Property is used to get or set the character at the specified character position in this instance."
},
{
"code": null,
"e": 24527,
"s": 24428,
"text": "Syntax: public char this[int index] { get; set; }Here, the index is the position of the character."
},
{
"code": null,
"e": 24606,
"s": 24527,
"text": "Property Value: This property returns the Unicode character at position index."
},
{
"code": null,
"e": 24618,
"s": 24606,
"text": "Exceptions:"
},
{
"code": null,
"e": 24726,
"s": 24618,
"text": "ArgumentOutOfRangeException: If the index is outside the bounds of this instance while setting a character."
},
{
"code": null,
"e": 24831,
"s": 24726,
"text": "IndexOutOfRangeException: If the index is outside the bounds of this instance while getting a character."
},
{
"code": null,
"e": 24898,
"s": 24831,
"text": "Below programs illustrate the use of the above-discussed property:"
},
{
"code": null,
"e": 24909,
"s": 24898,
"text": "Example 1:"
},
{
"code": "// C# program demonstrate// the Chars[Int32] Propertyusing System;using System.Text; class GFG { // Main Method public static void Main(String[] args) { // create a StringBuilder object // with a String pass as parameter StringBuilder str = new StringBuilder(\"GeeksforGeeks\"); // print string Console.WriteLine(\"String is \" + str.ToString()); // loop through string // and print every Character for (int i = 0; i < str.Length; i++) { // get char at position i char ch = str[i]; // print char Console.WriteLine(\"Char at position \" + i + \" is \" + ch); } }}",
"e": 25652,
"s": 24909,
"text": null
},
{
"code": null,
"e": 25992,
"s": 25652,
"text": "String is GeeksforGeeks\nChar at position 0 is G\nChar at position 1 is e\nChar at position 2 is e\nChar at position 3 is k\nChar at position 4 is s\nChar at position 5 is f\nChar at position 6 is o\nChar at position 7 is r\nChar at position 8 is G\nChar at position 9 is e\nChar at position 10 is e\nChar at position 11 is k\nChar at position 12 is s\n"
},
{
"code": null,
"e": 26003,
"s": 25992,
"text": "Example 2:"
},
{
"code": "// C# program demonstrate// the Chars[Int32] Propertyusing System;using System.Text; class GFG { // Main Method public static void Main(String[] args) { // create a StringBuilder object StringBuilder str = new StringBuilder(); // add the String to StringBuilder Object str.Append(\"Geek\"); // get char at position 1 char ch = str[1]; // print the result Console.WriteLine(\"StringBuilder Object\" + \" contains = \" + str); Console.WriteLine(\"Character at Position 1\" + \" in StringBuilder = \" + ch); }}",
"e": 26630,
"s": 26003,
"text": null
},
{
"code": null,
"e": 26713,
"s": 26630,
"text": "StringBuilder Object contains = Geek\nCharacter at Position 1 in StringBuilder = e\n"
},
{
"code": null,
"e": 26724,
"s": 26713,
"text": "Reference:"
},
{
"code": null,
"e": 26824,
"s": 26724,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.text.stringbuilder.chars?view=netframework-4.7.2"
},
{
"code": null,
"e": 26851,
"s": 26824,
"text": "CSharp-StringBuilder-Class"
},
{
"code": null,
"e": 26854,
"s": 26851,
"text": "C#"
},
{
"code": null,
"e": 26952,
"s": 26854,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26970,
"s": 26952,
"text": "Destructors in C#"
},
{
"code": null,
"e": 26993,
"s": 26970,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 27021,
"s": 26993,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 27061,
"s": 27021,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 27104,
"s": 27061,
"text": "C# | How to insert an element in an Array?"
},
{
"code": null,
"e": 27126,
"s": 27104,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 27143,
"s": 27126,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 27159,
"s": 27143,
"text": "C# | List Class"
},
{
"code": null,
"e": 27209,
"s": 27159,
"text": "Difference between Hashtable and Dictionary in C#"
}
] |
Lua - File I/O
|
The I/O library is used for reading and manipulating files in Lua. There are two kinds of file operations in Lua, namely implicit file descriptors and explicit file descriptors.
For the following examples, we will use a sample file test.lua as shown below.
-- sample test.lua
-- sample2 test.lua
A simple file open operation uses the following statement.
file = io.open (filename [, mode])
The various file modes are listed in the following table.
"r"
Read-only mode and is the default mode where an existing file is opened.
"w"
Write enabled mode that overwrites the existing file or creates a new file.
"a"
Append mode that opens an existing file or creates a new file for appending.
"r+"
Read and write mode for an existing file.
"w+"
All existing data is removed if file exists or new file is created with read write permissions.
"a+"
Append mode with read mode enabled that opens an existing file or creates a new file.
Implicit file descriptors use the standard input/output streams, or a single default input file and a single default output file. A sample of using implicit file descriptors is shown below.
-- Opens a file in read
file = io.open("test.lua", "r")
-- sets the default input file as test.lua
io.input(file)
-- prints the first line of the file
print(io.read())
-- closes the open file
io.close(file)
-- Opens a file in append mode
file = io.open("test.lua", "a")
-- sets the default output file as test.lua
io.output(file)
-- appends a word test to the last line of the file
io.write("-- End of the test.lua file")
-- closes the open file
io.close(file)
When you run the program, you will get an output of the first line of test.lua file. For our program, we got the following output.
-- Sample test.lua
This was the first line of the statement in test.lua file for us. Also the line "-- End of the test.lua file" would be appended to the last line of the test.lua code.
In the above example, you can see how the implicit descriptors work with the file system using the io.* methods. The above example uses io.read() without the optional parameter. The optional parameter can be any of the following.
"*n"
Reads from the current file position and returns a number if exists at the file position or returns nil.
"*a"
Returns all the contents of file from the current file position.
"*l"
Reads the line from the current file position, and moves file position to next line.
number
Reads number of bytes specified in the function.
Other common I/O methods include −
io.tmpfile() − Returns a temporary file for reading and writing that will be removed once the program quits.
io.type(file) − Returns whether file, closed file or nil based on the input file.
io.flush() − Clears the default output buffer.
io.lines(optional file name) − Provides a generic for loop iterator that loops through the file and closes the file in the end, in case the file name is provided or the default file is used and not closed in the end of the loop.
We often use explicit file descriptors, which allow us to manipulate multiple files at a time. These functions are quite similar to implicit file descriptors. Here, we use file:function_name instead of io.function_name. The explicit file descriptor version of the same implicit file descriptors example is shown below.
-- Opens a file in read mode
file = io.open("test.lua", "r")
-- prints the first line of the file
print(file:read())
-- closes the opened file
file:close()
-- Opens a file in append mode
file = io.open("test.lua", "a")
-- appends a word test to the last line of the file
file:write("--test")
-- closes the open file
file:close()
When you run the program, you will get output similar to the implicit descriptors example.
-- Sample test.lua
All the file open modes and read parameters for explicit descriptors are the same as for implicit file descriptors.
Other common file methods include −
file:seek(optional whence, optional offset) − Whence parameter is "set", "cur" or "end". Sets the new file pointer with the updated file position from the beginning of the file. The offsets are zero-based in this function. The offset is measured from the beginning of the file if the first argument is "set"; from the current position in the file if it's "cur"; or from the end of the file if it's "end". The default argument values are "cur" and 0, so the current file position can be obtained by calling this function without arguments.
file:flush() − Clears the default output buffer.
io.lines(optional file name) − Provides a generic for loop iterator that loops through the file and closes the file in the end, in case the file name is provided or the default file is used and not closed in the end of the loop.
An example of using the seek method is shown below. It offsets the cursor to 25 positions before the end of the file, and the read function then prints the remainder of the file from the seek position.
-- Opens a file in read
file = io.open("test.lua", "r")
file:seek("end",-25)
print(file:read("*a"))
-- closes the opened file
file:close()
You will get some output similar to the following.
sample2 test.lua
--test
You can play around all the different modes and parameters to know the full ability of the Lua file operations.
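For readers comparing across languages, the open/read/append/seek flow above can be sketched with Java's standard file APIs, where RandomAccessFile.seek plays the role of file:seek. This is a cross-language illustration with made-up file contents, not part of the Lua library:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileIoSketch {
    public static void main(String[] args) throws IOException {
        // create a scratch file playing the role of test.lua
        Path p = Files.createTempFile("test", ".lua");
        Files.writeString(p, "-- sample test.lua\n-- sample2 test.lua\n");

        // mode "r" analogue: read the first line
        System.out.println(Files.readAllLines(p).get(0));

        // mode "a" analogue: append to the end of the file
        Files.writeString(p, "--test\n", StandardOpenOption.APPEND);

        // file:seek("end", -offset) analogue: position relative to end-of-file
        try (RandomAccessFile raf = new RandomAccessFile(p.toFile(), "r")) {
            raf.seek(raf.length() - 7); // 7 bytes back: the appended "--test\n"
            System.out.println(raf.readLine());
        }
        Files.delete(p);
    }
}
```

The structure is the same as the Lua examples: open, read the first line, reopen for append, then seek backwards from the end and read the remainder.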
|
[
{
"code": null,
"e": 2276,
"s": 2103,
"text": "I/O library is used for reading and manipulating files in Lua. There are two kinds of file operations in Lua namely implicit file descriptors and explicit file descriptors."
},
{
"code": null,
"e": 2355,
"s": 2276,
"text": "For the following examples, we will use a sample file test.lua as shown below."
},
{
"code": null,
"e": 2395,
"s": 2355,
"text": "-- sample test.lua\n-- sample2 test.lua\n"
},
{
"code": null,
"e": 2454,
"s": 2395,
"text": "A simple file open operation uses the following statement."
},
{
"code": null,
"e": 2490,
"s": 2454,
"text": "file = io.open (filename [, mode])\n"
},
{
"code": null,
"e": 2548,
"s": 2490,
"text": "The various file modes are listed in the following table."
},
{
"code": null,
"e": 2552,
"s": 2548,
"text": "\"r\""
},
{
"code": null,
"e": 2625,
"s": 2552,
"text": "Read-only mode and is the default mode where an existing file is opened."
},
{
"code": null,
"e": 2629,
"s": 2625,
"text": "\"w\""
},
{
"code": null,
"e": 2705,
"s": 2629,
"text": "Write enabled mode that overwrites the existing file or creates a new file."
},
{
"code": null,
"e": 2709,
"s": 2705,
"text": "\"a\""
},
{
"code": null,
"e": 2786,
"s": 2709,
"text": "Append mode that opens an existing file or creates a new file for appending."
},
{
"code": null,
"e": 2791,
"s": 2786,
"text": "\"r+\""
},
{
"code": null,
"e": 2833,
"s": 2791,
"text": "Read and write mode for an existing file."
},
{
"code": null,
"e": 2838,
"s": 2833,
"text": "\"w+\""
},
{
"code": null,
"e": 2934,
"s": 2838,
"text": "All existing data is removed if file exists or new file is created with read write permissions."
},
{
"code": null,
"e": 2939,
"s": 2934,
"text": "\"a+\""
},
{
"code": null,
"e": 3025,
"s": 2939,
"text": "Append mode with read mode enabled that opens an existing file or creates a new file."
},
{
"code": null,
"e": 3196,
"s": 3025,
"text": "Implicit file descriptors use the standard input/ output modes or using a single input and single output file. A sample of using implicit file descriptors is shown below."
},
{
"code": null,
"e": 3664,
"s": 3196,
"text": "-- Opens a file in read\nfile = io.open(\"test.lua\", \"r\")\n\n-- sets the default input file as test.lua\nio.input(file)\n\n-- prints the first line of the file\nprint(io.read())\n\n-- closes the open file\nio.close(file)\n\n-- Opens a file in append mode\nfile = io.open(\"test.lua\", \"a\")\n\n-- sets the default output file as test.lua\nio.output(file)\n\n-- appends a word test to the last line of the file\nio.write(\"-- End of the test.lua file\")\n\n-- closes the open file\nio.close(file)"
},
{
"code": null,
"e": 3795,
"s": 3664,
"text": "When you run the program, you will get an output of the first line of test.lua file. For our program, we got the following output."
},
{
"code": null,
"e": 3815,
"s": 3795,
"text": "-- Sample test.lua\n"
},
{
"code": null,
"e": 3982,
"s": 3815,
"text": "This was the first line of the statement in test.lua file for us. Also the line \"-- End of the test.lua file\" would be appended to the last line of the test.lua code."
},
{
"code": null,
"e": 4210,
"s": 3982,
"text": "In the above example, you can see how the implicit descriptors work with file system using the io.\"x\" methods. The above example uses io.read() without the optional parameter. The optional parameter can be any of the following."
},
{
"code": null,
"e": 4215,
"s": 4210,
"text": "\"*n\""
},
{
"code": null,
"e": 4320,
"s": 4215,
"text": "Reads from the current file position and returns a number if exists at the file position or returns nil."
},
{
"code": null,
"e": 4325,
"s": 4320,
"text": "\"*a\""
},
{
"code": null,
"e": 4390,
"s": 4325,
"text": "Returns all the contents of file from the current file position."
},
{
"code": null,
"e": 4395,
"s": 4390,
"text": "\"*l\""
},
{
"code": null,
"e": 4480,
"s": 4395,
"text": "Reads the line from the current file position, and moves file position to next line."
},
{
"code": null,
"e": 4487,
"s": 4480,
"text": "number"
},
{
"code": null,
"e": 4536,
"s": 4487,
"text": "Reads number of bytes specified in the function."
},
{
"code": null,
"e": 4571,
"s": 4536,
"text": "Other common I/O methods includes,"
},
{
"code": null,
"e": 4680,
"s": 4571,
"text": "io.tmpfile() − Returns a temporary file for reading and writing that will be removed once the program quits."
},
{
"code": null,
"e": 4789,
"s": 4680,
"text": "io.tmpfile() − Returns a temporary file for reading and writing that will be removed once the program quits."
},
{
"code": null,
"e": 4871,
"s": 4789,
"text": "io.type(file) − Returns whether file, closed file or nil based on the input file."
},
{
"code": null,
"e": 4953,
"s": 4871,
"text": "io.type(file) − Returns whether file, closed file or nil based on the input file."
},
{
"code": null,
"e": 5000,
"s": 4953,
"text": "io.flush() − Clears the default output buffer."
},
{
"code": null,
"e": 5047,
"s": 5000,
"text": "io.flush() − Clears the default output buffer."
},
{
"code": null,
"e": 5276,
"s": 5047,
"text": "io.lines(optional file name) − Provides a generic for loop iterator that loops through the file and closes the file in the end, in case the file name is provided or the default file is used and not closed in the end of the loop."
},
{
"code": null,
"e": 5505,
"s": 5276,
"text": "io.lines(optional file name) − Provides a generic for loop iterator that loops through the file and closes the file in the end, in case the file name is provided or the default file is used and not closed in the end of the loop."
},
{
"code": null,
"e": 5828,
"s": 5505,
"text": "We often use explicit file descriptor which allows us to manipulate multiple files at a time. These functions are quite similar to implicit file descriptors. Here, we use file:function_name instead of io.function_name. The following example of the file version of the same implicit file descriptors example is shown below."
},
{
"code": null,
"e": 6162,
"s": 5828,
"text": "-- Opens a file in read mode\nfile = io.open(\"test.lua\", \"r\")\n\n-- prints the first line of the file\nprint(file:read())\n\n-- closes the opened file\nfile:close()\n\n-- Opens a file in append mode\nfile = io.open(\"test.lua\", \"a\")\n\n-- appends a word test to the last line of the file\nfile:write(\"--test\")\n\n-- closes the open file\nfile:close()"
},
{
"code": null,
"e": 6255,
"s": 6162,
"text": "When you run the program, you will get a similar output as the implicit descriptors example."
},
{
"code": null,
"e": 6275,
"s": 6255,
"text": "-- Sample test.lua\n"
},
{
"code": null,
"e": 6385,
"s": 6275,
"text": "All the modes of file open and params for read for external descriptors is same as implicit file descriptors."
},
{
"code": null,
"e": 6421,
"s": 6385,
"text": "Other common file methods includes,"
},
{
"code": null,
"e": 6960,
"s": 6421,
"text": "file:seek(optional whence, optional offset) − Whence parameter is \"set\", \"cur\" or \"end\". Sets the new file pointer with the updated file position from the beginning of the file. The offsets are zero-based in this function. The offset is measured from the beginning of the file if the first argument is \"set\"; from the current position in the file if it's \"cur\"; or from the end of the file if it's \"end\". The default argument values are \"cur\" and 0, so the current file position can be obtained by calling this function without arguments."
},
{
"code": null,
"e": 7499,
"s": 6960,
"text": "file:seek(optional whence, optional offset) − Whence parameter is \"set\", \"cur\" or \"end\". Sets the new file pointer with the updated file position from the beginning of the file. The offsets are zero-based in this function. The offset is measured from the beginning of the file if the first argument is \"set\"; from the current position in the file if it's \"cur\"; or from the end of the file if it's \"end\". The default argument values are \"cur\" and 0, so the current file position can be obtained by calling this function without arguments."
},
{
"code": null,
"e": 7548,
"s": 7499,
"text": "file:flush() − Clears the default output buffer."
},
{
"code": null,
"e": 7597,
"s": 7548,
"text": "file:flush() − Clears the default output buffer."
},
{
"code": null,
"e": 7826,
"s": 7597,
"text": "io.lines(optional file name) − Provides a generic for loop iterator that loops through the file and closes the file in the end, in case the file name is provided or the default file is used and not closed in the end of the loop."
},
{
"code": null,
"e": 8055,
"s": 7826,
"text": "io.lines(optional file name) − Provides a generic for loop iterator that loops through the file and closes the file in the end, in case the file name is provided or the default file is used and not closed in the end of the loop."
},
{
"code": null,
"e": 8242,
"s": 8055,
"text": "An example to use the seek method is shown below. It offsets the cursor from the 25 positions prior to the end of file. The read function prints remainder of the file from seek position."
},
{
"code": null,
"e": 8383,
"s": 8242,
"text": "-- Opens a file in read\nfile = io.open(\"test.lua\", \"r\")\n\nfile:seek(\"end\",-25)\nprint(file:read(\"*a\"))\n\n-- closes the opened file\nfile:close()"
},
{
"code": null,
"e": 8434,
"s": 8383,
"text": "You will get some output similar to the following."
},
{
"code": null,
"e": 8459,
"s": 8434,
"text": "sample2 test.lua\n--test\n"
},
{
"code": null,
"e": 8571,
"s": 8459,
"text": "You can play around all the different modes and parameters to know the full ability of the Lua file operations."
},
{
"code": null,
"e": 8604,
"s": 8571,
"text": "\n 12 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 8618,
"s": 8604,
"text": " Manish Gupta"
},
{
"code": null,
"e": 8651,
"s": 8618,
"text": "\n 80 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 8667,
"s": 8651,
"text": " Sanjeev Mittal"
},
{
"code": null,
"e": 8702,
"s": 8667,
"text": "\n 54 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 8718,
"s": 8702,
"text": " Mehmet GOKTEPE"
},
{
"code": null,
"e": 8725,
"s": 8718,
"text": " Print"
},
{
"code": null,
"e": 8736,
"s": 8725,
"text": " Add Notes"
}
] |
WPF - DockPanel
|
DockPanel defines an area to arrange child elements relative to each other, either horizontally or vertically. With DockPanel you can easily dock child elements to top, bottom, right, left and center using the Dock property.
With the LastChildFill property, the last child element fills the remaining space regardless of any other dock value set for that element. The hierarchical inheritance of the DockPanel class is as follows −
Background
Gets or sets a Brush that fills the panel content area. (Inherited from Panel)
Children
Gets a UIElementCollection of child elements of this Panel. (Inherited from Panel.)
Dock
Gets or sets a value that indicates the position of a child element within a parent DockPanel.
Height
Gets or sets the suggested height of the element. (Inherited from FrameworkElement.)
ItemHeight
Gets or sets a value that specifies the height of all items that are contained within a WrapPanel.
ItemWidth
Gets or sets a value that specifies the width of all items that are contained within a WrapPanel.
LastChildFill
Gets or sets a value that indicates whether the last child element within a DockPanel stretches to fill the remaining available space.
LogicalChildren
Gets an enumerator that can iterate the logical child elements of this Panel element. (Inherited from Panel.)
LogicalOrientation
The Orientation of the panel, if the panel supports layout in only a single dimension. (Inherited from Panel.)
Margin
Gets or sets the outer margin of an element. (Inherited from FrameworkElement.)
Name
Gets or sets the identifying name of the element. The name provides a reference so that code-behind, such as event handler code, can refer to a markup element after it is constructed during processing by a XAML processor. (Inherited from FrameworkElement.)
Orientation
Gets or sets a value that specifies the dimension in which child content is arranged.
Parent
Gets the logical parent element of this element. (Inherited from FrameworkElement.)
Resources
Gets or sets the locally-defined resource dictionary. (Inherited from FrameworkElement.)
Style
Gets or sets the style used by this element when it is rendered. (Inherited from FrameworkElement.)
Width
Gets or sets the width of the element. (Inherited from FrameworkElement.)
GetDock
Gets the value of the Dock attached property for a specified UIElement.
SetDock
Sets the value of the Dock attached property to a specified element.
The following example shows how to add child elements to a DockPanel. The XAML implementation below creates five buttons inside a DockPanel.
<Window x:Class = "WPFDockPanel.MainWindow"
xmlns = "http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x = "http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d = "http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc = "http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:local = "clr-namespace:WPFDockPanel"
mc:Ignorable = "d" Title = "MainWindow" Height = "350" Width = "604">
<Grid>
<DockPanel LastChildFill = "True">
<Button Content = "Top" DockPanel.Dock = "Top" Click = "Click_Me" />
<Button Content = "Bottom" DockPanel.Dock = "Bottom" Click = "Click_Me" />
<Button Content = "Left" Click = "Click_Me" />
<Button Content = "Right" DockPanel.Dock = "Right" Click = "Click_Me" />
<Button Content = "Center" Click = "Click_Me" />
</DockPanel>
</Grid>
</Window>
Here is the C# implementation of the button click event handler.
using System.Windows;
using System.Windows.Controls;
namespace WPFDockPanel {
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window {
public MainWindow() {
InitializeComponent();
}
private void Click_Me(object sender, RoutedEventArgs e) {
Button btn = sender as Button;
string str = btn.Content.ToString() + " button clicked";
MessageBox.Show(str);
}
}
}
When you compile and execute the above code, it will display the following output −
On clicking any button, it will also display a message. For example, when you click the button which is at the Center, it will display the following message.
We recommend that you execute the above example code and try its other properties as well.
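As a rough cross-toolkit comparison (an illustration added here, not part of WPF), the same five-button arrangement can be sketched with Java Swing's BorderLayout. NORTH, SOUTH, WEST and EAST play the role of DockPanel.Dock, and CENTER fills the remaining space much like LastChildFill:

```java
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JPanel;

public class DockSketch {
    // builds a panel with one child docked to each edge and one in the centre
    public static JPanel build() {
        JPanel panel = new JPanel(new BorderLayout());
        panel.add(new JButton("Top"), BorderLayout.NORTH);
        panel.add(new JButton("Bottom"), BorderLayout.SOUTH);
        panel.add(new JButton("Left"), BorderLayout.WEST);
        panel.add(new JButton("Right"), BorderLayout.EAST);
        // the centre child receives whatever space the edges leave over
        panel.add(new JButton("Center"), BorderLayout.CENTER);
        return panel;
    }

    public static void main(String[] args) {
        // no window is shown, so the sketch also runs in headless environments
        System.setProperty("java.awt.headless", "true");
        System.out.println("Children docked: " + build().getComponentCount());
    }
}
```

The mapping is approximate: BorderLayout allows only one child per region, whereas a DockPanel can stack several children against the same edge.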
|
[
{
"code": null,
"e": 2245,
"s": 2020,
"text": "DockPanel defines an area to arrange child elements relative to each other, either horizontally or vertically. With DockPanel you can easily dock child elements to top, bottom, right, left and center using the Dock property."
},
{
"code": null,
"e": 2448,
"s": 2245,
"text": "With LastChildFill property, the last child element fill the remaining space regardless of any other dock value when set for that element. The hierarchical inheritance of DockPanel class is as follows −"
},
{
"code": null,
"e": 2459,
"s": 2448,
"text": "Background"
},
{
"code": null,
"e": 2538,
"s": 2459,
"text": "Gets or sets a Brush that fills the panel content area. (Inherited from Panel)"
},
{
"code": null,
"e": 2547,
"s": 2538,
"text": "Children"
},
{
"code": null,
"e": 2631,
"s": 2547,
"text": "Gets a UIElementCollection of child elements of this Panel. (Inherited from Panel.)"
},
{
"code": null,
"e": 2636,
"s": 2631,
"text": "Dock"
},
{
"code": null,
"e": 2731,
"s": 2636,
"text": "Gets or sets a value that indicates the position of a child element within a parent DockPanel."
},
{
"code": null,
"e": 2738,
"s": 2731,
"text": "Height"
},
{
"code": null,
"e": 2823,
"s": 2738,
"text": "Gets or sets the suggested height of the element. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 2834,
"s": 2823,
"text": "ItemHeight"
},
{
"code": null,
"e": 2933,
"s": 2834,
"text": "Gets or sets a value that specifies the height of all items that are contained within a WrapPanel."
},
{
"code": null,
"e": 2943,
"s": 2933,
"text": "ItemWidth"
},
{
"code": null,
"e": 3041,
"s": 2943,
"text": "Gets or sets a value that specifies the width of all items that are contained within a WrapPanel."
},
{
"code": null,
"e": 3055,
"s": 3041,
"text": "LastChildFill"
},
{
"code": null,
"e": 3190,
"s": 3055,
"text": "Gets or sets a value that indicates whether the last child element within a DockPanel stretches to fill the remaining available space."
},
{
"code": null,
"e": 3206,
"s": 3190,
"text": "LogicalChildren"
},
{
"code": null,
"e": 3316,
"s": 3206,
"text": "Gets an enumerator that can iterate the logical child elements of this Panel element. (Inherited from Panel.)"
},
{
"code": null,
"e": 3335,
"s": 3316,
"text": "LogicalOrientation"
},
{
"code": null,
"e": 3446,
"s": 3335,
"text": "The Orientation of the panel, if the panel supports layout in only a single dimension. (Inherited from Panel.)"
},
{
"code": null,
"e": 3453,
"s": 3446,
"text": "Margin"
},
{
"code": null,
"e": 3533,
"s": 3453,
"text": "Gets or sets the outer margin of an element. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 3538,
"s": 3533,
"text": "Name"
},
{
"code": null,
"e": 3795,
"s": 3538,
"text": "Gets or sets the identifying name of the element. The name provides a reference so that code-behind, such as event handler code, can refer to a markup element after it is constructed during processing by a XAML processor. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 3807,
"s": 3795,
"text": "Orientation"
},
{
"code": null,
"e": 3893,
"s": 3807,
"text": "Gets or sets a value that specifies the dimension in which child content is arranged."
},
{
"code": null,
"e": 3900,
"s": 3893,
"text": "Parent"
},
{
"code": null,
"e": 3984,
"s": 3900,
"text": "Gets the logical parent element of this element. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 3994,
"s": 3984,
"text": "Resources"
},
{
"code": null,
"e": 4083,
"s": 3994,
"text": "Gets or sets the locally-defined resource dictionary. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 4089,
"s": 4083,
"text": "Style"
},
{
"code": null,
"e": 4189,
"s": 4089,
"text": "Gets or sets the style used by this element when it is rendered. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 4195,
"s": 4189,
"text": "Width"
},
{
"code": null,
"e": 4269,
"s": 4195,
"text": "Gets or sets the width of the element. (Inherited from FrameworkElement.)"
},
{
"code": null,
"e": 4277,
"s": 4269,
"text": "GetDock"
},
{
"code": null,
"e": 4349,
"s": 4277,
"text": "Gets the value of the Dock attached property for a specified UIElement."
},
{
"code": null,
"e": 4357,
"s": 4349,
"text": "SetDock"
},
{
"code": null,
"e": 4426,
"s": 4357,
"text": "Sets the value of the Dock attached property to a specified element."
},
{
"code": null,
"e": 4568,
"s": 4426,
"text": "The following example shows how to add child elements into a DockPanel. The following XAML implementation creates buttons inside a DockPanel."
},
{
"code": null,
"e": 5472,
"s": 4568,
"text": "<Window x:Class = \"WPFDockPanel.MainWindow\" \n xmlns = \"http://schemas.microsoft.com/winfx/2006/xaml/presentation\" \n xmlns:x = \"http://schemas.microsoft.com/winfx/2006/xaml\" \n xmlns:d = \"http://schemas.microsoft.com/expression/blend/2008\" \n xmlns:mc = \"http://schemas.openxmlformats.org/markup-compatibility/2006\" \n xmlns:local = \"clr-namespace:WPFDockPanel\" \n mc:Ignorable = \"d\" Title = \"MainWindow\" Height = \"350\" Width = \"604\">\n\t\n <Grid> \n <DockPanel LastChildFill = \"True\"> \n <Button Content = \"Top\" DockPanel.Dock = \"Top\" Click = \"Click_Me\" /> \n <Button Content = \"Bottom\" DockPanel.Dock = \"Bottom\" Click = \"Click_Me\" />\n <Button Content = \"Left\" Click = \"Click_Me\" /> \n <Button Content = \"Right\" DockPanel.Dock = \"Right\" Click = \"Click_Me\" /> \n <Button Content = \"Center\" Click = \"Click_Me\" /> \n </DockPanel> \n </Grid> \n\t\n</Window> "
},
{
"code": null,
"e": 5516,
"s": 5472,
"text": "Here is the implementation in C# for event."
},
{
"code": null,
"e": 6036,
"s": 5516,
"text": "using System.Windows; \nusing System.Windows.Controls;\n \nnamespace WPFDockPanel { \n /// <summary> \n /// Interaction logic for MainWindow.xaml \n /// </summary> \n\t\n public partial class MainWindow : Window { \n\t\n public MainWindow() { \n InitializeComponent(); \n } \n\t\t\n private void Click_Me(object sender, RoutedEventArgs e) { \n Button btn = sender as Button; \n string str = btn.Content.ToString() + \" button clicked\"; \n MessageBox.Show(str); \n } \n\t\t\n } \n}"
},
{
"code": null,
"e": 6120,
"s": 6036,
"text": "When you compile and execute the above code, it will display the following output −"
},
{
"code": null,
"e": 6278,
"s": 6120,
"text": "On clicking any button, it will also display a message. For example, when you click the button which is at the Center, it will display the following message."
},
{
"code": null,
"e": 6369,
"s": 6278,
"text": "We recommend that you execute the above example code and try its other properties as well."
}
] |
Draw Rectangle in C graphics - GeeksforGeeks
|
04 Oct, 2018
rectangle() is used to draw a rectangle. Coordinates of the top-left and bottom-right corners are required to draw it. left specifies the X-coordinate of the top-left corner, top specifies the Y-coordinate of the top-left corner, right specifies the X-coordinate of the bottom-right corner, and bottom specifies the Y-coordinate of the bottom-right corner.
Syntax:
rectangle(int left, int top, int right, int bottom);
Examples:
Input : left = 150, top = 250, right = 450, bottom = 350;
Output :
Input : left = 150, top = 150, right = 450, bottom = 450;
Output :
Below is the implementation of the rectangle function :
// C program to draw a rectangle
#include <graphics.h>

// Driver code
int main()
{
    // gm is Graphics mode which is a computer display
    // mode that generates image using pixels.
    // DETECT is a macro defined in "graphics.h" header file
    int gd = DETECT, gm;

    // location of left, top, right, bottom
    int left = 150, top = 150;
    int right = 450, bottom = 450;

    // initgraph initializes the graphics system
    // by loading a graphics driver from disk
    initgraph(&gd, &gm, "");

    // rectangle function
    rectangle(left, top, right, bottom);

    getch();

    // closegraph function closes the graphics
    // mode and deallocates all memory allocated
    // by graphics system
    closegraph();

    return 0;
}
Output:
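The BGI output image is not reproduced here, but the (left, top, right, bottom) corner convention can be illustrated without a graphics driver. Below is a minimal Python sketch (the draw_rectangle helper is hypothetical, not part of graphics.h) that draws the outline on a character grid:

```python
def draw_rectangle(left, top, right, bottom, width=12, height=8):
    # Render the rectangle outline on a character grid to illustrate
    # the (left, top, right, bottom) corner convention used by rectangle().
    grid = [[" "] * width for _ in range(height)]
    for x in range(left, right + 1):
        grid[top][x] = grid[bottom][x] = "*"    # top and bottom edges
    for y in range(top, bottom + 1):
        grid[y][left] = grid[y][right] = "*"    # left and right edges
    return "\n".join("".join(row) for row in grid)

print(draw_rectangle(left=2, top=1, right=9, bottom=5))
```

The four arguments play exactly the same role as in the C call above: two corners fully determine the rectangle.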
computer-graphics
square-rectangle
C Language
C Programs
|
[
{
"code": null,
"e": 24592,
"s": 24564,
"text": "\n04 Oct, 2018"
},
{
"code": null,
"e": 24943,
"s": 24592,
"text": "rectangle() is used to draw a rectangle. Coordinates of left top and right bottom corner are required to draw the rectangle. left specifies the X-coordinate of top left corner, top specifies the Y-coordinate of top left corner, right specifies the X-coordinate of right bottom corner, bottom specifies the Y-coordinate of right bottom corner.Syntax :"
},
{
"code": null,
"e": 24997,
"s": 24943,
"text": "rectangle(int left, int top, int right, int bottom);\n"
},
{
"code": null,
"e": 25007,
"s": 24997,
"text": "Examples:"
},
{
"code": null,
"e": 25147,
"s": 25007,
"text": "Input : left = 150, top = 250, right = 450, bottom = 350;\nOutput : \n\n\nInput : left = 150, top = 150, right = 450, bottom = 450;\nOutput : \n\n"
},
{
"code": null,
"e": 25203,
"s": 25147,
"text": "Below is the implementation of the rectangle function :"
},
{
"code": "// C program to draw a rectangle#include <graphics.h> // Driver codeint main(){ // gm is Graphics mode which is a computer display // mode that generates image using pixels. // DETECT is a macro defined in \"graphics.h\" header file int gd = DETECT, gm; // location of left, top, right, bottom int left = 150, top = 150; int right = 450, bottom = 450; // initgraph initializes the graphics system // by loading a graphics driver from disk initgraph(&gd, &gm, \"\"); // rectangle function rectangle(left, top, right, bottom); getch(); // closegraph function closes the graphics // mode and deallocates all memory allocated // by graphics system . closegraph(); return 0;}",
"e": 25937,
"s": 25203,
"text": null
},
{
"code": null,
"e": 25945,
"s": 25937,
"text": "Output:"
},
{
"code": null,
"e": 25965,
"s": 25947,
"text": "computer-graphics"
},
{
"code": null,
"e": 25982,
"s": 25965,
"text": "square-rectangle"
},
{
"code": null,
"e": 25993,
"s": 25982,
"text": "C Language"
},
{
"code": null,
"e": 26004,
"s": 25993,
"text": "C Programs"
}
] |
Regression in the face of messy outliers? Try Huber regressor | by Tirthajyoti Sarkar | Towards Data Science
|
Let’s say you have a dataset with two features X1 and X2, on which you are performing linear regression. However, some noise/outliers got introduced in the dataset.
Say, the y-value outliers are exceptionally low compared to what they should be. What does that look like?
This is the data you got.
However, if you really think about the slope of the X-Y data, the expected y-values should have been much higher for those X-values. Something like the following,
These are obvious outliers and you can run a simple exploratory data analysis (EDA) to catch and discard them from the dataset before building the regression model.
But you cannot hope to catch all the outliers — at scale and in all dimensions. Visualization of a dataset with 100 or 1000 dimensions (features) is challenging enough to manually examine the plots and discover outliers.
A regression algorithm that is robust to outliers sounds like a good bet against those pesky bad data points. The Huber regressor is one such tool that we will discuss in this article.
In statistics, Huber loss is a particular loss function (first introduced in 1964 by Peter Jost Huber, a Swiss mathematician) that is used widely for robust regression problems — situations where outliers are present that can degrade the performance and accuracy of least-squared-loss error based regression.
The loss is given by,
We can see that the loss is the square of the usual residual (y — f(x)) only when the absolute value of the residual is smaller than a fixed parameter. The choice and tuning of this parameter are important to get a good estimator. Where the residual is greater than this parameter, the loss is a function of the absolute value of the residual and the Huber parameter.
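Concretely, the standard piecewise definition is L_delta(r) = r^2/2 when |r| <= delta, and delta*(|r| - delta/2) otherwise. A minimal Python sketch (delta is the tunable Huber parameter mentioned above):

```python
def huber_loss(residual, delta=1.0):
    # Quadratic for small residuals (behaves like least squares),
    # linear for large residuals (robust to outliers).
    a = abs(residual)
    if a <= delta:
        return 0.5 * a ** 2
    return delta * (a - 0.5 * delta)

print(huber_loss(0.5))   # 0.125  (quadratic region)
print(huber_loss(3.0))   # 2.5    (linear region)
```

Note that the two branches agree at |r| = delta, so the loss transitions smoothly between the mean-like and median-like regimes.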
Now, you may remember from elementary statistics that the squared loss comes from the unbiased estimator around the mean whereas an absolute difference loss comes from an unbiased estimator around the median. Median is much more robust to outliers than mean.
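This robustness difference is easy to verify with Python's standard-library statistics module; a single extreme outlier drags the mean far away while barely moving the median:

```python
from statistics import mean, median

clean = [10, 11, 12, 13, 14]
noisy = clean + [1000]          # one extreme outlier

print(mean(clean), median(clean))   # 12 12
print(mean(noisy), median(noisy))   # mean jumps, median barely moves
```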
Huber loss is a balanced compromise between these two types. It is robust to the outliers but does not completely ignore them either. The tuning can be done with the free parameter, of course.
The demo notebook is here in my Github repo.
We created the synthetic data and added some noisy outlier with the following code,
import numpy as np
from sklearn.datasets import make_regression

rng = np.random.RandomState(0)
X, y, coef = make_regression(
    n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0)

# The first four data points are outlier
X[:4] = rng.uniform(10, 20, (4, 2))
y[:4] = rng.uniform(10, 20, 4)
Now, all you have to do is to call Scikit-learn’s built-in HuberRegressor estimator and fit the data. For comparison, we also have the standard LinearRegression method called in.
from sklearn.linear_model import HuberRegressor, LinearRegression

huber = HuberRegressor().fit(X, y)
linear = LinearRegression().fit(X, y)
Now, we know that the first 4 data points are outliers. So, if we try to predict the y-value with the first data point, we will get something like this,
linear.predict(X[:1,])
>> array([87.38004436])
As expected, the linear regression prediction is a low value for y. Why? Because the linear fit (based on least-squares loss) has bent towards the outliers due to their large leverage.
However, the Huber estimator predicts a more reasonable (high) value,
huber.predict(X[:1,])
>> array([806.72000092])
To demonstrate the robustness of the Huber estimator further, we can use the estimated coefficients and plot the best-fitted line,
huber_y1 = np.arange(-2.5,20,0.01)*huber.coef_[0] + \
           np.arange(-2.5,20,0.01)*huber.coef_[1] + \
           huber.intercept_

plt.figure(dpi=120)
plt.scatter(X[:,0],y)
plt.plot(np.arange(-2.5,20,0.01), huber_y1, color='red',linestyle='--')
plt.show()
Although we are not discussing it in this article, readers are encouraged to check the Theil-Sen estimator, another robust linear regression technique that is highly insensitive to outliers.
As expected, Scikit-learn has a built-in method for this estimator too: Scikit-learn Theil-Sen estimator.
For multiple linear regression with a large number of features, this is a very efficient and fast regression algorithm among all the robust estimators.
In this brief article, we talked about the problem of linear regression estimators in the presence of outliers in the dataset. We demonstrated, with a simple example, that linear estimators based on the traditional least-squares loss function may predict completely wrong values because they are bent towards the outliers.
We discussed a couple of robust estimators and demonstrated the Huber regressor in detail. Non-parametric statistics use these robust regression techniques in many places, especially when the data is expected to be particularly noisy.
Data science students and professionals alike should also have a working knowledge of these robust regression methods for automating the modeling of large datasets in the presence of outliers.
|
[
{
"code": null,
"e": 336,
"s": 171,
"text": "Let’s say you have a dataset with two features X1 and X2, on which you are performing linear regression. However, some noise/outliers got introduced in the dataset."
},
{
"code": null,
"e": 445,
"s": 336,
"text": "Say, the y-value outliers are exceptionally low as compared to what they should be. How does that look like?"
},
{
"code": null,
"e": 471,
"s": 445,
"text": "This is the data you got."
},
{
"code": null,
"e": 634,
"s": 471,
"text": "However, if you really think about the slope of the X-Y data, the expected y-values should have been much higher for those X-values. Something like the following,"
},
{
"code": null,
"e": 799,
"s": 634,
"text": "These are obvious outliers and you can run a simple exploratory data analysis (EDA) to catch and discard them from the dataset before building the regression model."
},
{
"code": null,
"e": 1020,
"s": 799,
"text": "But you cannot hope to catch all the outliers — at scale and in all dimensions. Visualization of a dataset with 100 or 1000 dimensions (features) is challenging enough to manually examine the plots and discover outliers."
},
{
"code": null,
"e": 1205,
"s": 1020,
"text": "A regression algorithm that is robust to outliers sounds like a good bet against those pesky bad data points. The Huber regressor is one such tool that we will discuss in this article."
},
{
"code": null,
"e": 1514,
"s": 1205,
"text": "In statistics, Huber loss is a particular loss function (first introduced in 1964 by Peter Jost Huber, a Swiss mathematician) that is used widely for robust regression problems — situations where outliers are present that can degrade the performance and accuracy of least-squared-loss error based regression."
},
{
"code": null,
"e": 1559,
"s": 1537,
"text": "The loss is given by,"
},
{
"code": null,
"e": 1927,
"s": 1559,
"text": "We can see that the loss is the square of the usual residual (y — f(x)) only when the absolute value of the residual is smaller than a fixed parameter. The choice and tuning of this parameter are important to get a good estimator. Where the residual is greater than this parameter, the loss is a function of the absolute value of the residual and the Huber parameter."
},
{
"code": null,
"e": 2186,
"s": 1927,
"text": "Now, you may remember from elementary statistics that the squared loss comes from the unbiased estimator around the mean whereas an absolute difference loss comes from an unbiased estimator around the median. Median is much more robust to outliers than mean."
},
{
"code": null,
"e": 2379,
"s": 2186,
"text": "Huber loss is a balanced compromise between these two types. It is robust to the outliers but does not completely ignore them either. The tuning can be done with the free parameter, of course."
},
{
"code": null,
"e": 2424,
"s": 2379,
"text": "The demo notebook is here in my Github repo."
},
{
"code": null,
"e": 2508,
"s": 2424,
"text": "We created the synthetic data and added some noisy outlier with the following code,"
},
{
"code": null,
"e": 2805,
"s": 2508,
"text": "import numpy as npfrom sklearn.datasets import make_regressionrng = np.random.RandomState(0)X, y, coef = make_regression( n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0)# The first four data points are outlierX[:4] = rng.uniform(10, 20, (4, 2))y[:4] = rng.uniform(10, 20, 4)"
},
{
"code": null,
"e": 2984,
"s": 2805,
"text": "Now, all you have to do is to call Scikit-learn’s built-in HuberRegressor estimator and fit the data. For comparison, we also have the standard LinearRegression method called in."
},
{
"code": null,
"e": 3121,
"s": 2984,
"text": "from sklearn.linear_model import HuberRegressor, LinearRegressionhuber = HuberRegressor().fit(X, y)linear = LinearRegression().fit(X, y)"
},
{
"code": null,
"e": 3274,
"s": 3121,
"text": "Now, we know that the first 4 data points are outliers. So, if we try to predict the y-value with the first data point, we will get something like this,"
},
{
"code": null,
"e": 3320,
"s": 3274,
"text": "linear.predict(X[:1,])>> array([87.38004436])"
},
{
"code": null,
"e": 3505,
"s": 3320,
"text": "As expected, the linear regression prediction is a low value for y. Why? Because the linear fit (based on least-squared loss) has bent towards the outliers due to their large leverage."
},
{
"code": null,
"e": 3575,
"s": 3505,
"text": "However, the Huber estimator predicts a more reasonable (high) value,"
},
{
"code": null,
"e": 3621,
"s": 3575,
"text": "huber.predict(X[:1,])>> array([806.72000092])"
},
{
"code": null,
"e": 3752,
"s": 3621,
"text": "To demonstrate the robustness of the Huber estimator further, we can use the estimated coefficients and plot the best-fitted line,"
},
{
"code": null,
"e": 4026,
"s": 3752,
"text": "huber_y1 = np.arange(-2.5,20,0.01)*huber.coef_[0] + \\ np.arange(-2.5,20,0.01)*huber.coef_[1] + \\ huber.intercept_plt.figure(dpi=120)plt.scatter(X[:,0],y)plt.plot(np.arange(-2.5,20,0.01), huber_y1, color='red',linestyle='--')plt.show()"
},
{
"code": null,
"e": 4235,
"s": 4026,
"text": "Although we are not discussing it in this article, readers are encouraged to check the Theil-Sen estimator, which is another robust linear regression technique and enjoys being highly insensitive to outliers."
},
{
"code": null,
"e": 4341,
"s": 4235,
"text": "As expected, Scikit-learn has a built-in method for this estimator too: Scikit-learn Theil-Sen estimator."
},
{
"code": null,
"e": 4493,
"s": 4341,
"text": "For multiple linear regression with a large number of features, this is a very efficient and fast regression algorithm among all the robust estimators."
},
{
"code": null,
"e": 4813,
"s": 4493,
"text": "In this brief article, we talked about the problem of linear regression estimators in the presence of outliers in the dataset. We demonstrated, with a simple example, that linear estimators based on traditional least-squared loss function, may predict completely wrong values because they are bent towards the outliers."
},
{
"code": null,
"e": 5048,
"s": 4813,
"text": "We discussed a couple of robust estimators and demonstrated the Huber regressor in detail. Non-parametric statistics use these robust regression techniques in many places, especially when the data is expected to be particularly noisy."
},
{
"code": null,
"e": 5241,
"s": 5048,
"text": "Data science students and professionals alike should also have a working knowledge of these robust regression methods for automating the modeling of large datasets in the presence of outliers."
}
] |
How to retrieve specific file(s) information using Get-ChildItem in PowerShell?
|
When an item (file) path is provided to the Get-ChildItem cmdlet, it retrieves file information such as Name, LastWriteTime, and Length (size).
Get-ChildItem -Path D:\Temp\style.css
Directory: D:\Temp
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 08-12-2017 10:16 393 style.css
To get the full set of properties of the file, pipe the output to fl * (an alias for Format-List *).
Get-ChildItem -Path D:\Temp\style.css | fl *
PSPath : Microsoft.PowerShell.Core\FileSystem::D:\Temp\style.css
PSParentPath : Microsoft.PowerShell.Core\FileSystem::D:\Temp
PSChildName : style.css
PSDrive : D
PSProvider : Microsoft.PowerShell.Core\FileSystem
PSIsContainer : False
Mode : -a----
VersionInfo : File: D:\Temp\style.css
InternalName:
OriginalFilename:
FileVersion:
FileDescription:
Product:
ProductVersion:
Debug: False
Patched: False
PreRelease: False
PrivateBuild: False
SpecialBuild: False
Language:
BaseName : style
Target : {}
LinkType :
Name : style.css
Length : 393
DirectoryName : D:\Temp
Directory : D:\Temp
IsReadOnly : False
Exists : True
FullName : D:\Temp\style.css
Extension : .css
CreationTime : 08-12-2017 10:02:17
CreationTimeUtc : 08-12-2017 04:32:17
LastAccessTime : 08-12-2017 10:02:17
LastAccessTimeUtc : 08-12-2017 04:32:17
LastWriteTime : 08-12-2017 10:16:26
LastWriteTimeUtc : 08-12-2017 04:46:26
Attributes : Archive
You can get specific properties by piping the output to Select-Object. Continuing the above example, we will display the file's Name, Extension, CreationTime, LastAccessTime, and LastWriteTime.
Get-ChildItem D:\Temp\style.css | Select Name, Extension, CreationTime, LastAccessTime, LastWriteTime
Name : style.css
Extension : .css
CreationTime : 08-12-2017 10:02:17
LastAccessTime : 08-12-2017 10:02:17
LastWriteTime : 08-12-2017 10:16:26
Similarly, you can retrieve information for multiple files in the same command. Each filename needs to be separated by a comma (,).
PS D:\Temp> Get-ChildItem .\style.css, .\cars.xml
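For comparison, the same kind of file metadata is available cross-platform from Python's pathlib and os.stat (a rough analogue of the PowerShell pipeline above, not a replacement; the temporary file is created only to keep the example self-contained):

```python
from pathlib import Path
from datetime import datetime
import tempfile

# Create a throwaway file so the example runs anywhere
tmp = Path(tempfile.mkdtemp()) / "style.css"
tmp.write_text("body { margin: 0; }")

info = tmp.stat()
print("Name          :", tmp.name)      # style.css
print("Extension     :", tmp.suffix)    # .css
print("Length        :", info.st_size)
print("LastWriteTime :", datetime.fromtimestamp(info.st_mtime))
```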
|
[
{
"code": null,
"e": 1199,
"s": 1062,
"text": "When the item (file) path is provided to the Get-ChildItem cmdlet, it extracts the file information like Name, LastWriteTime, Size, etc."
},
{
"code": null,
"e": 1237,
"s": 1199,
"text": "Get-ChildItem -Path D:\\Temp\\style.css"
},
{
"code": null,
"e": 1424,
"s": 1237,
"text": "Directory: D:\\Temp\n\nMode LastWriteTime Length Name\n---- ------------- ------ ----\n-a---- 08-12-2017 10:16 393 style.css"
},
{
"code": null,
"e": 1523,
"s": 1424,
"text": "To get the full properties of the file then you need to use fl * (Format-List *) pipeline command."
},
{
"code": null,
"e": 1569,
"s": 1523,
"text": "Get-ChildItem -Path D:\\Temp\\style.css | fl *\n"
},
{
"code": null,
"e": 2948,
"s": 1569,
"text": "PSPath : Microsoft.PowerShell.Core\\FileSystem::D:\\Temp\\style.css\nPSParentPath : Microsoft.PowerShell.Core\\FileSystem::D:\\Temp\nPSChildName : style.css\nPSDrive : D\nPSProvider : Microsoft.PowerShell.Core\\FileSystem\nPSIsContainer : False\nMode : -a----\nVersionInfo : File: D:\\Temp\\style.css\n InternalName:\n OriginalFilename:\n FileVersion:\n FileDescription:\n Product:\n ProductVersion:\n Debug: False\n Patched: False\n PreRelease: False\n PrivateBuild: False\n SpecialBuild: False\n Language:\n\nBaseName : style\nTarget : {}\nLinkType :\nName : style.css\nLength : 393\nDirectoryName : D:\\Temp\nDirectory : D:\\Temp\nIsReadOnly : False\nExists : True\nFullName : D:\\Temp\\style.css\nExtension : .css\nCreationTime : 08-12-2017 10:02:17\nCreationTimeUtc : 08-12-2017 04:32:17\nLastAccessTime : 08-12-2017 10:02:17\nLastAccessTimeUtc : 08-12-2017 04:32:17\nLastWriteTime : 08-12-2017 10:16:26\nLastWriteTimeUtc : 08-12-2017 04:46:26\nAttributes : Archive"
},
{
"code": null,
"e": 3158,
"s": 2948,
"text": "You can get the specific properties by pipelining the Select-Object parameter. From the above example, we will display the File Name, attributes, extension, creation time, last access time and last write time."
},
{
"code": null,
"e": 3261,
"s": 3158,
"text": "Get-ChildItem D:\\Temp\\style.css | Select Name, Extension, CreationTime, LastAccessTime, LastWriteTime\n"
},
{
"code": null,
"e": 3421,
"s": 3261,
"text": "Name : style.css\nExtension : .css\nCreationTime : 08-12-2017 10:02:17\nLastAccessTime : 08-12-2017 10:02:17\nLastWriteTime : 08-12-2017 10:16:26"
},
{
"code": null,
"e": 3547,
"s": 3421,
"text": "Similarly, you can retrieve multiple files information in the same command. Each filename needs to be separated by comma (,)."
},
{
"code": null,
"e": 3598,
"s": 3547,
"text": "PS D:\\Temp> Get-ChildItem .\\style.css, .\\cars.xml\n"
}
] |
HTML DOM createTextNode() Method
|
The HTML DOM createTextNode() method is used to create a Text Node with the specified text.
Let us look at an example for the createTextNode() method −
<!DOCTYPE html>
<html>
<body>
<h2>createTextNode() example</h2>
<p>Click the below button to create a p element with some text.</p>
<button onclick="createText()">CREATE</button>
<script>
function createText() {
var x = document.createElement("P");
      var p = document.createTextNode("This is a sample paragraph created with createTextNode()");
x.appendChild(p);
document.body.appendChild(x);
}
</script>
</body>
</html>
This will produce the following output −
On clicking the CREATE button −
In the above example −
We have created a button CREATE that will execute the createText() function on being clicked by the user −
<button onclick="createText()">CREATE</button>
The createText() method creates the <p> element by using the createElement() method of the document object and assigns it to the variable x. We then create a text node using createTextNode() with some text and assign it to the variable p.
We then append the text node to the <p> element using the appendChild() method. Finally, the <p> element, along with its text node, is appended to the document using the document.body.appendChild() method:
function createText() {
var x = document.createElement("P");
var p = document.createTextNode("This is a sample paragraph created with createTextNode()");
x.appendChild(p);
document.body.appendChild(x);
}
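The same createTextNode()/appendChild() pattern exists outside the browser as well; for instance, Python's standard-library xml.dom.minidom implements the DOM Core API:

```python
from xml.dom.minidom import getDOMImplementation

# Build a tiny DOM tree: <body><p>...</p></body>
doc = getDOMImplementation().createDocument(None, "body", None)
p = doc.createElement("p")
p.appendChild(doc.createTextNode(
    "This is a sample paragraph created with createTextNode()"))
doc.documentElement.appendChild(p)

print(doc.documentElement.toxml())
# <body><p>This is a sample paragraph created with createTextNode()</p></body>
```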
|
[
{
"code": null,
"e": 1154,
"s": 1062,
"text": "The HTML DOM createTextNode() method is used to create a Text Node with the specified text."
},
{
"code": null,
"e": 1214,
"s": 1154,
"text": "Let us look at an example for the createTextNode() method −"
},
{
"code": null,
"e": 1668,
"s": 1214,
"text": "<!DOCTYPE html>\n<html>\n<body>\n<h2>createTextNode() example</h2>\n<p>Click the below button to create a p element with some text.</p>\n<button onclick=\"createText()\">CREATE</button>\n<script>\n function createText() {\n var x = document.createElement(\"P\");\n var p = document.createTextNode(\"This is a sample paragraph created with\n createTextNode()\");\n x.appendChild(p);\n document.body.appendChild(x);\n }\n</script>\n</body>\n</html>"
},
{
"code": null,
"e": 1709,
"s": 1668,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1741,
"s": 1709,
"text": "On clicking the CREATE button −"
},
{
"code": null,
"e": 1764,
"s": 1741,
"text": "In the above example −"
},
{
"code": null,
"e": 1871,
"s": 1764,
"text": "We have created a button CREATE that will execute the createText() function on being clicked by the user −"
},
{
"code": null,
"e": 1918,
"s": 1871,
"text": "<button onclick=\"createText()\">CREATE</button>"
},
{
"code": null,
"e": 2158,
"s": 1918,
"text": "The createText() method creates the <p> element by using the createElement() method of the document object and assigns it to the variable x. We then create a text node using createTextNode() with some text and assigns it to the variable p."
},
{
"code": null,
"e": 2358,
"s": 2158,
"text": "We then append the text node to the <p> element using the appendChild() method. Finally the <p> element along with the text node are appended to document using the document.body appendChild() method:"
},
{
"code": null,
"e": 2574,
"s": 2358,
"text": "function createText() {\n var x = document.createElement(\"P\");\n var p = document.createTextNode(\"This is a sample paragraph created with createTextNode()\");\n x.appendChild(p);\n document.body.appendChild(x);\n}"
}
] |
AWT Choice Class
|
The Choice control is used to show a pop-up menu of choices. The selected choice is shown on the top of the menu.
Following is the declaration for java.awt.Choice class:
public class Choice
extends Component
implements ItemSelectable, Accessible
Choice()
Creates a new choice menu.
void add(String item)
Adds an item to this Choice menu.
void addItem(String item)
Obsolete as of Java 2 platform v1.1.
void addItemListener(ItemListener l)
Adds the specified item listener to receive item events from this Choice menu.
void addNotify()
Creates the Choice's peer.
int countItems()
Deprecated. As of JDK version 1.1, replaced by getItemCount().
AccessibleContext getAccessibleContext()
Gets the AccessibleContext associated with this Choice.
String getItem(int index)
Gets the string at the specified index in this Choice menu.
int getItemCount()
Returns the number of items in this Choice menu.
ItemListener[] getItemListeners()
Returns an array of all the item listeners registered on this choice.
<T extends EventListener> T[] getListeners(Class<T> listenerType)
Returns an array of all the objects currently registered as FooListeners upon this Choice.
int getSelectedIndex()
Returns the index of the currently selected item.
String getSelectedItem()
Gets a representation of the current choice as a string.
Object[] getSelectedObjects()
Returns an array (length 1) containing the currently selected item.
void insert(String item, int index)
Inserts the item into this choice at the specified position.
protected String paramString()
Returns a string representing the state of this Choice menu.
protected void processEvent(AWTEvent e)
Processes events on this choice.
protected void processItemEvent(ItemEvent e)
Processes item events occurring on this Choice menu by dispatching them to any registered ItemListener objects.
void remove(int position)
Removes an item from the choice menu at the specified position.
void remove(String item)
Removes the first occurrence of item from the Choice menu.
void removeAll()
Removes all items from the choice menu.
void removeItemListener(ItemListener l)
Removes the specified item listener so that it no longer receives item events from this Choice menu.
void select(int pos)
Sets the selected item in this Choice menu to be the item at the specified position.
void select(String str)
Sets the selected item in this Choice menu to be the item whose name is equal to the specified string.
This class inherits methods from the following classes:
java.awt.Component
java.awt.Component
java.lang.Object
java.lang.Object
Create the following java program using any editor of your choice in say D:/ > AWT > com > tutorialspoint > gui >
package com.tutorialspoint.gui;

import java.awt.*;
import java.awt.event.*;

public class AwtControlDemo {

   private Frame mainFrame;
   private Label headerLabel;
   private Label statusLabel;
   private Panel controlPanel;

   public AwtControlDemo(){
      prepareGUI();
   }

   public static void main(String[] args){
      AwtControlDemo awtControlDemo = new AwtControlDemo();
      awtControlDemo.showChoiceDemo();
   }

   private void prepareGUI(){
      mainFrame = new Frame("Java AWT Examples");
      mainFrame.setSize(400,400);
      mainFrame.setLayout(new GridLayout(3, 1));
      mainFrame.addWindowListener(new WindowAdapter() {
         public void windowClosing(WindowEvent windowEvent){
            System.exit(0);
         }
      });
      headerLabel = new Label();
      headerLabel.setAlignment(Label.CENTER);
      statusLabel = new Label();
      statusLabel.setAlignment(Label.CENTER);
      statusLabel.setSize(350,100);

      controlPanel = new Panel();
      controlPanel.setLayout(new FlowLayout());

      mainFrame.add(headerLabel);
      mainFrame.add(controlPanel);
      mainFrame.add(statusLabel);
      mainFrame.setVisible(true);
   }

   private void showChoiceDemo(){
      headerLabel.setText("Control in action: Choice");
      final Choice fruitChoice = new Choice();

      fruitChoice.add("Apple");
      fruitChoice.add("Grapes");
      fruitChoice.add("Mango");
      fruitChoice.add("Pear");

      Button showButton = new Button("Show");

      showButton.addActionListener(new ActionListener() {
         public void actionPerformed(ActionEvent e) {
            String data = "Fruit Selected: "
               + fruitChoice.getItem(fruitChoice.getSelectedIndex());
            statusLabel.setText(data);
         }
      });

      controlPanel.add(fruitChoice);
      controlPanel.add(showButton);

      mainFrame.setVisible(true);
   }
}
Compile the program using the command prompt. Go to D:/ > AWT and type the following command.
D:\AWT>javac com\tutorialspoint\gui\AwtControlDemo.java
If no error appears, the compilation is successful. Run the program using the following command.
D:\AWT>java com.tutorialspoint.gui.AwtControlDemo
Verify the following output
C# Program to Check the Salary of all Employees is Less than 10000 using LINQ - GeeksforGeeks
06 Dec, 2021
Given the data of employees, our task is to check whether all the employee salaries are less than 10000. For this we use the All() method of LINQ. This method checks whether every element in the source sequence satisfies a given condition: it returns true if all the elements in the sequence pass the test, and false otherwise. To solve the given problem we use the following LINQ query:
result = Geeks.All(geek => geek.Emp_Salary < 10000);
Here, result is a boolean variable that stores the final result, Geeks is the source sequence, and the All() method checks whether the salary of every employee is less than 10000.
Example:
Input: {id = 301, Name = Mohit, Salary = 10000}
{id = 302, Name = Priya, Salary = 20000}
{id = 303, Name = Sohan, Salary = 40000}
{id = 304, Name = Rohit, Salary = 10000}
Output: False
Input: {id = 401, Name = Rohan, Salary = 1000}
{id = 404, Name = Mohan, Salary = 4000}
Output: True
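For comparison, the same all-elements check maps directly onto Python's built-in all() with a generator expression. The sketch below is illustrative only; the Employee record and sample data are hypothetical, mirroring the second input above:

```python
from collections import namedtuple

# Hypothetical employee records mirroring the second example above.
Employee = namedtuple("Employee", ["emp_id", "name", "salary"])

geeks = [
    Employee(401, "Rohan", 1000),
    Employee(404, "Mohan", 4000),
]

# True only if every salary is below 10000,
# the same semantics as LINQ's All().
result = all(e.salary < 10000 for e in geeks)
print(result)  # True
```

Like All(), Python's all() short-circuits: it stops at the first element that fails the predicate.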
Example 1:
C#
// C# program to determine whether the salary of all
// employees is less than 10000
using System;
using System.Linq;
using System.Collections.Generic;

class Geek {

    #pragma warning disable 169, 414
    int emp_id;
    string Emp_Name;
    int Emp_Salary;
    string Emp_Department;

    static void Main(string[] args)
    {
        // List of employee details
        List<Geek> Geeks = new List<Geek>() {
            new Geek{emp_id = 401, Emp_Name = "Rajat", Emp_Salary = 50000},
            new Geek{emp_id = 402, Emp_Name = "Ram", Emp_Salary = 65000},
            new Geek{emp_id = 403, Emp_Name = "Krishna", Emp_Salary = 45000},
            new Geek{emp_id = 404, Emp_Name = "Sonial", Emp_Salary = 20000},
            new Geek{emp_id = 405, Emp_Name = "Mickey", Emp_Salary = 70000},
            new Geek{emp_id = 406, Emp_Name = "Kunti", Emp_Salary = 50000},
        };

        bool result;

        // Checking whether the salary of all employees
        // is less than 10000
        result = Geeks.All(geek => geek.Emp_Salary < 10000);

        // Display result
        if (result)
        {
            Console.Write("All the salaries are less than 10000");
        }
        else
        {
            Console.Write("All the salaries are not less than 10000");
        }
    }
}
Output
All the salaries are not less than 10000
Example 2:
C#
// C# program to determine whether the salary of all
// employees is less than 10000
using System;
using System.Linq;
using System.Collections.Generic;

class Geek {

    #pragma warning disable 169, 414
    int emp_id;
    string Emp_Name;
    int Emp_Salary;
    string Emp_Department;

    static void Main(string[] args)
    {
        // List of employee details
        List<Geek> Geeks = new List<Geek>() {
            new Geek{emp_id = 501, Emp_Name = "Rohan", Emp_Salary = 3000},
            new Geek{emp_id = 502, Emp_Name = "Mohan", Emp_Salary = 3000},
            new Geek{emp_id = 503, Emp_Name = "Sham", Emp_Salary = 4000},
            new Geek{emp_id = 504, Emp_Name = "Sonial", Emp_Salary = 1000},
        };

        bool result;

        // Checking whether the salary of all employees
        // is less than 10000
        result = Geeks.All(geek => geek.Emp_Salary < 10000);

        // Display the result
        Console.WriteLine("Is the salary of the Geek's employees is < 10000: " + result);
    }
}
Output
Is the salary of the Geek's employees is < 10000: True
How to remove empty rows from an R data frame?
During a survey or any other medium of data collection, getting all the information from all units is not possible. Sometimes we get partial information and sometimes nothing. Therefore, it is possible that some rows in our data are completely blank while others have only partial data. The blank rows can be removed, and the remaining empty values can be filled with methods that help to deal with missing information.
Consider the below data frame, it has some missing rows and some missing values −
> x1<-c(rep(c(1,2,3),times=5),"","","",2,1)
> x2<-rep(c(2,4,"",4,""),each=4)
> x3<-rep(c(5,4,2,""),times=c(2,5,3,10))
> df<-data.frame(x1,x2,x3)
> df
x1 x2 x3
1 1 2 5
2 2 2 5
3 3 2 4
4 1 2 4
5 2 4 4
6 3 4 4
7 1 4 4
8 2 4 2
9 3 2
10 1 2
11 2
12 3
13 1 4
14 2 4
15 3 4
16 4
17
18
19 2
20 1
Here, we can see that rows 17 and 18 are completely blank, which means we do not have any data in them. Hence, we can remove them from the data frame as shown below −
> df[!apply(df == "", 1, all),]
x1 x2 x3
1 1 2 5
2 2 2 5
3 3 2 4
4 1 2 4
5 2 4 4
6 3 4 4
7 1 4 4
8 2 4 2
9 3 2
10 1 2
11 2
12 3
13 1 4
14 2 4
15 3 4
16 4
19 2
20 1
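For readers working in Python rather than R, the same cleanup can be sketched with pandas by treating empty strings as missing values and dropping only the rows in which every column is missing. The small frame below is a made-up miniature of the R example:

```python
import numpy as np
import pandas as pd

# Toy frame: row index 2 is completely blank, the others are partly filled.
df = pd.DataFrame({
    "x1": ["1", "2", "", ""],
    "x2": ["2", "", "", "4"],
    "x3": ["5", "4", "", ""],
})

# Treat "" as NA, then drop rows where all columns are missing.
cleaned = df.replace("", np.nan).dropna(how="all")
print(len(cleaned))  # 3: only the all-blank row was removed
```

With how="any" instead, pandas would also drop the partially filled rows, which matches the stricter notion of complete cases.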
Image Segmentation using K-means. Image Segmentation from scratch using... | by Kartik Narula | Towards Data Science
While pre-existing libraries (such as OpenCV) save time and effort, implementing the basic algorithms from scratch is another delight.
In this post, I will show the step-by-step implementation of image segmentation using k-means in Python. We train the pipeline on 1100 images across 8 categories sampled from the SUN database. Image segmentation is the grouping of pixels of similar types together. The pipeline can be further extended to classify an image: an image of a park, for instance, will have a greater number of green-colored pixels than an image of a highway. We will talk about classification in a later post.
We established that an image of a park will have a greater number of green-colored pixels. While color is a good way of differentiating pixels, it can create issues at times (water and sky have the same color, for instance). Hence, we need other ways to characterize pixels. In other words, we need to extract useful features from the image. Deep learning algorithms such as CNNs find useful features automatically for us, but that's outside the scope of this article.
For extracting features, we will use four kinds of image filters at multiple scales: the Gaussian, the derivative of the Gaussian in the x-direction, the derivative of the Gaussian in the y-direction, and the Laplacian of the Gaussian. Each filter extracts a different set of features from the image. Smaller scales pick up narrower features and larger scales pick up broader features (think of the forest and the trees). We will call the set of filters with different scales a ‘filter bank’.
Each row in the above figure represents one particular scale and each column represents a different type of filter. The Gaussian filter (first column) blurs the image and omits the higher frequencies. The derivative of the Gaussian in the X direction (second column) picks up the vertical edges. The derivative of the Gaussian in the Y direction (third column) picks up the horizontal edges. The Laplacian of the Gaussian (fourth column) detects regions of rapid intensity change.
We run the filter bank over an image (an array of size H*W*3, where H and W are the height and width of the image, respectively, and 3 is the number of channels). The output is an array of size H*W*3F, where F is the size of the filter bank. If we use 4 filters at 5 different scales, F is 4*5=20.
From T training images, we get a filter response vector of size T*H*W*3F. To save computation resources, we select alpha random filter responses from each image. Thus, the final filter response vector has alpha*T rows of 3F features each.
We will use this vector of filter responses as the input to our k-means algorithm. It will output K(a hyperparameter) cluster centers. Each pixel of the test image will then be mapped to the nearest cluster center and painted with a corresponding color to form a ‘visual map’. The resulting ‘visual map’ would have similar pixels grouped together and would be the final output of our ‘image segmentation’ algorithm.
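Before building the full pipeline, here is a minimal self-contained toy of the core idea: run k-means on per-pixel feature vectors and reshape the resulting labels back into an image. For brevity it uses raw RGB values instead of the filter-bank features described above, and a deterministic initialization; both are simplifications of my own, not part of the original pipeline.

```python
import numpy as np

# Toy image: left half dark, right half bright (RGB stands in for features).
img = np.zeros((4, 8, 3))
img[:, 4:, :] = 1.0
pixels = img.reshape(-1, 3)

# Minimal k-means with k=2, deterministically seeded with two pixels.
centers = pixels[[0, -1]].copy()
for _ in range(10):
    # assign each pixel to its nearest center
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # recompute each center as the mean of its cluster
    centers = np.array([pixels[labels == j].mean(axis=0) for j in range(2)])

segmentation = labels.reshape(4, 8)
print(segmentation[0])  # [0 0 0 0 1 1 1 1]
```

The two halves of the toy image land in different clusters, which is exactly the ‘visual map’ behavior we want from the real pipeline.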
Bravo, let’s see the whole pipeline in action. We will start by defining the values of the hyperparameters, which can be tuned later to enhance performance.
filter_scales = [1, 1.5, 2]
K = 10
alpha = 25
We now define the function extract_filter_responses, which returns a vector of size H*W*3F for an image of size H*W*3 and a filter bank of size F. We use the built-in functions in scipy for generating the filter responses.
def extract_filter_responses(filter_scales, img):
    if len(img.shape) == 2:
        # convert image into 3 channels
        img = np.dstack((img, img, img))
    modified_img = skimage.color.rgb2lab(img)
    r_channel = modified_img[:, :, 0]
    g_channel = modified_img[:, :, 1]
    b_channel = modified_img[:, :, 2]
    g = []
    for i in range(len(filter_scales)):
        modified_r_channel = scipy.ndimage.gaussian_filter(r_channel, filter_scales[i])
        modified_g_channel = scipy.ndimage.gaussian_filter(g_channel, filter_scales[i])
        modified_b_channel = scipy.ndimage.gaussian_filter(b_channel, filter_scales[i])
        modified_r_dog_X = scipy.ndimage.gaussian_filter(r_channel, filter_scales[i], (0, 1))
        modified_g_dog_X = scipy.ndimage.gaussian_filter(g_channel, filter_scales[i], (0, 1))
        modified_b_dog_X = scipy.ndimage.gaussian_filter(b_channel, filter_scales[i], (0, 1))
        modified_r_dog_Y = scipy.ndimage.gaussian_filter(r_channel, filter_scales[i], (1, 0))
        modified_g_dog_Y = scipy.ndimage.gaussian_filter(g_channel, filter_scales[i], (1, 0))
        modified_b_dog_Y = scipy.ndimage.gaussian_filter(b_channel, filter_scales[i], (1, 0))
        modified_r_log = scipy.ndimage.gaussian_laplace(r_channel, filter_scales[i])
        modified_g_log = scipy.ndimage.gaussian_laplace(g_channel, filter_scales[i])
        modified_b_log = scipy.ndimage.gaussian_laplace(b_channel, filter_scales[i])
        a = np.dstack((modified_r_channel, modified_g_channel, modified_b_channel))
        b = np.dstack((modified_r_dog_X, modified_g_dog_X, modified_b_dog_X))
        c = np.dstack((modified_r_dog_Y, modified_g_dog_Y, modified_b_dog_Y))
        d = np.dstack((modified_r_log, modified_g_log, modified_b_log))
        filter_response = np.dstack((a, b, c, d))
        g.append(filter_response)
    filter_responses = np.dstack(g)
    return filter_responses
The next function will extract alpha random filter responses from the vector returned by extract_filter_responses
def compute_dictionary_one_image(alpha, img):
    response = extract_filter_responses(filter_scales, img)
    d = response.shape[0] * response.shape[1]
    response = response.reshape((d, -1))
    alphas = np.random.choice(d, alpha)
    alphaed_response = response[alphas]
    return alphaed_response
The next function pools the alpha responses from each image into a single vector and fits the KMeans estimator from scikit-learn to it. The function takes the list of training file paths as an argument.
def compute_dictionary(K, alpha, train_files):
    m = []
    for i in range(len(train_files)):
        img_path = train_files[i]
        img = Image.open(img_path)
        img = np.array(img).astype(np.float32) / 255
        re = compute_dictionary_one_image(alpha, img)
        m.append(re)
    m = np.array(m)
    n = m.shape[0] * m.shape[1]
    final_response = m.reshape((n, -1))
    kmeans = KMeans(n_clusters=K).fit(final_response)
    dictionary = kmeans.cluster_centers_
    return dictionary
Cool. So now we have K cluster centers. For a test image, we need to map each pixel to the nearest cluster center and plot the resulting visualization. We use scipy.spatial.distance.cdist to find the index of the closest cluster center. This is illustrated in the get_visual_words() function below.
def get_visual_words(filter_scales, img, dictionary):
    response = extract_filter_responses(filter_scales, img)
    response = response.reshape(response.shape[0] * response.shape[1], -1)
    dist = scipy.spatial.distance.cdist(response, dictionary)
    visual_words = np.argmin(dist, axis=1)
    visual_words = visual_words.reshape(img.shape[0], img.shape[1])
    return visual_words
We will now use these functions to create a pipeline that runs k-means on a training set of 1177 images and check the performance on a test image.
# compute cluster centers
dictionary = compute_dictionary(K, alpha, train_files)

# test on an image
img_path = 'image path'
img = Image.open(img_path)
img = np.array(img).astype(np.float32) / 255
wordmap = get_visual_words(filter_scales, img, dictionary)
plt.imshow(wordmap)
Here are a few sample visualizations of the algorithm. I used filter scales [1, 1.5, 2], K=25 and alpha=10. We are able to identify different regions of the image. In the first image (of a kitchen), we can pick out distinct regions: the table, the stove, and so on. Feel free to tune these parameters and see if you can get even better results!
In the next post, we will extend the pipeline to classify an image by its type (the first is an image of a kitchen, the second is an image of a highway, and so on).
},
{
"code": null,
"e": 7532,
"s": 7263,
"text": "#compute cluster centerscompute_dictionary(K, alpha, train_files)#test on a image img_path = 'image path'img = Image.open(img_path)img = np.array(img).astype(np.float32)/255wordmap = visual_words.get_visual_words(filter_scales, img, dictionary)plt.imshow(wordmap)"
},
{
"code": null,
"e": 7876,
"s": 7532,
"text": "Here are a few sample visualizations of the algorithm. I used Filter scales=[1,1.5, 2], K=25 and alpha=10. We are able to identify different regions of the image. In the first image(of a kitchen) we can identify different regions, the table, stove, etc, and so on. Feel free to tune these parameters and see if you can get even better results!"
}
] |
How to get hidden files and folders using PowerShell?
|
To get hidden files and folders using PowerShell, we need to use the Get-ChildItem command with the -Hidden or -Force parameter.
The difference between the two parameters is that -Hidden retrieves only the hidden files and folders, while -Force retrieves all files and folders, including hidden, read-only, and normal ones.
For example, we have one folder named Data inside the folder C:\temp and we need to retrieve it.
PS C:\> Get-ChildItem C:\Temp\ -Hidden
Directory: C:\Temp
Mode LastWriteTime Length Name
---- ------------- ------ ----
d--h- 9/28/2020 7:57 AM Data
You can check the mode of the above folder in the output, where ‘d’ indicates a directory and the ‘h’ attribute indicates Hidden.
If we use the Force parameter, PowerShell will retrieve files and folders regardless of their attributes.
PS C:\> Get-ChildItem C:\Temp\ -Force
Directory: C:\Temp
Mode LastWriteTime Length Name
---- ------------- ------ ----
d--h- 9/28/2020 7:57 AM Data
d---- 8/11/2020 10:58 AM Help Files
d---- 7/29/2020 6:01 PM iisadministration
You can also use the cmd command Dir to retrieve hidden files and folders with the -h switch.
PS C:\> dir -h
Directory: C:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
d--hs 6/4/2020 2:28 PM $Recycle.Bin
d--hs 9/27/2020 10:07 PM Config.Msi
d--hs 6/3/2020 1:00 AM IntelOptaneData
d--h- 6/5/2020 12:18 PM OneDriveTemp
d--h- 9/25/2020 8:02 AM ProgramData
d--hs 6/2/2020 12:32 PM Recovery
-a-hs 9/28/2020 7:54 AM 6768705536 hiberfil.sys
-a-hs 9/16/2020 7:47 AM 12348030976 pagefile.sys
-a-hs 9/18/2020 7:06 PM 285212672 swapfile.sys
The above example retrieves all files and folders that have the hidden attribute.
To run the same check on a remote computer, use the Invoke-Command cmdlet. For example,
Invoke-Command -ComputerName Test1-Win2k16 -ScriptBlock{Get-ChildItem c:\ -Hidden}
Mode LastWriteTime Length Name PSComputerName
---- ------------- ------ ---- --------------
d--hs- 7/29/2020 10:21 PM $Recycle.Bin Test1-Win2k16
d--hsl 7/21/2020 4:36 PM Documents and Settings Test1-Win2k16
d--h-- 9/20/2020 3:24 AM ProgramData Test1-Win2k16
d--hs- 7/21/2020 4:36 PM Recovery Test1 -Win2k16
d--hs- 7/27/2020 6:31 AM System Volume Information Test1-Win2k16
-arhs- 7/16/2016 6:18 AM 384322 bootmgr Test1-Win2k16
-a-hs- 7/16/2016 6:18 AM 1 BOOTNXT Test1-Win2k16
-a-hs- 9/28/2020 10:44 PM 1006632960 pagefile.sys Test1-Win2k16
You can see the hidden files and folders on the remote computer.
|
[
{
"code": null,
"e": 1192,
"s": 1062,
"text": "To get hidden files and folders using PowerShell, we need to use the Get-ChildItem command with the - Hidden or -Force parameter."
},
{
"code": null,
"e": 1427,
"s": 1192,
"text": "The difference between the two mentioned parameters is Hidden parameter only retrieves the hidden files and folders while the Force parameter retrieves all the files and folders including Hidden, read-only and normal files and folder."
},
{
"code": null,
"e": 1520,
"s": 1427,
"text": "For example, We have one folder named Data inside folder C:\\temp and we need to retrieve it."
},
{
"code": null,
"e": 1705,
"s": 1520,
"text": "PS C:\\> Get-ChildItem C:\\Temp\\ -Hidden\nDirectory: C:\\Temp\nMode LastWriteTime Length Name\n---- ------------- ------ ----\nd--h- 9/28/2020 7:57 AM Data"
},
{
"code": null,
"e": 1840,
"s": 1705,
"text": "You can check the mode of the above folder in the output where ‘d’ indicates the directory and the ‘h’ attribute indicates the Hidden."
},
{
"code": null,
"e": 1930,
"s": 1840,
"text": "If we use the Force parameter, PowerShell will retrieve all attributed files and folders."
},
{
"code": null,
"e": 2226,
"s": 1930,
"text": "PS C:\\> Get-ChildItem C:\\Temp\\ -Force\nDirectory: C:\\Temp\nMode LastWriteTime Length Name\n---- ------------- ------ ----\nd--h- 9/28/2020 7:57 AM Data\nd---- 8/11/2020 10:58 AM Help Files\nd---- 7/29/2020 6:01 PM iisadministration"
},
{
"code": null,
"e": 2311,
"s": 2226,
"text": "You can also use cmd command Dir to retrieve hidden files and folder with switch -h."
},
{
"code": null,
"e": 2981,
"s": 2311,
"text": "PS C:\\> dir -h\nDirectory: C:\\\nMode LastWriteTime Length Name\n---- ------------- ------ ----\nd--hs 6/4/2020 2:28 PM $Recycle.Bin\nd--hs 9/27/2020 10:07 PM Config.Msi\nd--hs 6/3/2020 1:00 AM IntelOptaneData\nd--h- 6/5/2020 12:18 PM OneDriveTemp\nd--h- 9/25/2020 8:02 AM ProgramData\nd--hs 6/2/2020 12:32 PM Recovery\n-a-hs 9/28/2020 7:54 AM 6768705536 hiberfil.sys\n-a-hs 9/16/2020 7:47 AM 12348030976 pagefile.sys\n-a-hs 9/18/2020 7:06 PM 285212672 swapfile.sys"
},
{
"code": null,
"e": 3060,
"s": 2981,
"text": "The above example retrieves all files and folder which has a hidden attribute."
},
{
"code": null,
"e": 3155,
"s": 3060,
"text": "To check the same settings on the remote computer, use the Invoke-Command method. For example,"
},
{
"code": null,
"e": 3239,
"s": 3155,
"text": "Invoke-Command -ComputerName Test1-Win2k16 -ScriptBlock{Get-ChildItem c:\\ - Hidden}"
},
{
"code": null,
"e": 3931,
"s": 3239,
"text": "Mode LastWriteTime Length Name PSComputerName\n---- ------------- ------ ---- --------------\nd--hs- 7/29/2020 10:21 PM $Recycle.Bin Test1-Win2k16\nd--hsl 7/21/2020 4:36 PM Documents and Settings Test1-Win2k16\nd--h-- 9/20/2020 3:24 AM ProgramData Test1-Win2k16\nd--hs- 7/21/2020 4:36 PM Recovery Test1 -Win2k16\nd--hs- 7/27/2020 6:31 AM System Volume Information Test1-Win2k16\n-arhs- 7/16/2016 6:18 AM 384322 bootmgr Test1-Win2k16\n-a-hs- 7/16/2016 6:18 AM 1 BOOTNXT Test1-Win2k16\n-a-hs- 9/28/2020 10:44 PM 1006632960 pagefile.sys Test1-Win2k16"
},
{
"code": null,
"e": 3996,
"s": 3931,
"text": "You can see the hidden files and folders on the remote computer."
}
] |
Cordova - Splash Screen
|
This plugin is used to display a splash screen on application launch.
The splash screen plugin can be installed from the command prompt by running the following command.
C:\Users\username\Desktop\CordovaProject>cordova plugin add cordova-plugin-splashscreen
Adding splash screen is different from adding the other Cordova plugins. We need to open config.xml and add the following code snippets inside the widget element.
The first snippet is SplashScreen. It has a value property which is the name of the images in the platform/android/res/drawable- folders. Cordova offers default screen.png images that we are using in this example, but you will probably want to add your own images. The important thing is to add images for both portrait and landscape view, and also to cover different screen sizes.
<preference name = "SplashScreen" value = "screen" />
The second snippet we need to add is SplashScreenDelay. We set the value to 3000 to hide the splash screen after three seconds.
<preference name = "SplashScreenDelay" value = "3000" />
The last preference is optional. If the value is set to true, the image will not be stretched to fit the screen; if it is set to false, it will be stretched.
<preference name = "SplashMaintainAspectRatio" value = "true" />
Now when we run the app, we will see the splash screen.
|
[
{
"code": null,
"e": 2250,
"s": 2180,
"text": "This plugin is used to display a splash screen on application launch."
},
{
"code": null,
"e": 2344,
"s": 2250,
"text": "Splash screen plugin can be installed in command prompt window by running the following code."
},
{
"code": null,
"e": 2433,
"s": 2344,
"text": "C:\\Users\\username\\Desktop\\CordovaProject>cordova plugin add cordova-plugin-splashscreen\n"
},
{
"code": null,
"e": 2596,
"s": 2433,
"text": "Adding splash screen is different from adding the other Cordova plugins. We need to open config.xml and add the following code snippets inside the widget element."
},
{
"code": null,
"e": 2958,
"s": 2596,
"text": "First snippet is SplashScreen. It has value property which is the name of the images in platform/android/res/drawable- folders. Cordova offers default screen.png images that we are using in this example, but you will probably want to add your own images. Important thing is to add images for portrait and landscape view and also to cover different screen sizes."
},
{
"code": null,
"e": 3012,
"s": 2958,
"text": "<preference name = \"SplashScreen\" value = \"screen\" />"
},
{
"code": null,
"e": 3140,
"s": 3012,
"text": "Second snippet we need to add is SplashScreenDelay. We are setting value to 3000 to hide the splash screen after three seconds."
},
{
"code": null,
"e": 3197,
"s": 3140,
"text": "<preference name = \"SplashScreenDelay\" value = \"3000\" />"
},
{
"code": null,
"e": 3347,
"s": 3197,
"text": "The last preference is optional. If value is set to true, the image will not be stretched to fit screen. If it is set to false, it will be stretched."
},
{
"code": null,
"e": 3412,
"s": 3347,
"text": "<preference name = \"SplashMaintainAspectRatio\" value = \"true\" />"
},
{
"code": null,
"e": 3468,
"s": 3412,
"text": "Now when we run the app, we will see the splash screen."
},
{
"code": null,
"e": 3501,
"s": 3468,
"text": "\n 45 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 3521,
"s": 3501,
"text": " Skillbakerystudios"
},
{
"code": null,
"e": 3554,
"s": 3521,
"text": "\n 16 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 3567,
"s": 3554,
"text": " Nilay Mehta"
},
{
"code": null,
"e": 3574,
"s": 3567,
"text": " Print"
},
{
"code": null,
"e": 3585,
"s": 3574,
"text": " Add Notes"
}
] |
Calling A Super Class Constructor in Scala - GeeksforGeeks
|
28 Feb, 2022
Prerequisite – Scala Constructors. In Scala, constructors are used to initialize an object’s state and are executed at the time of object creation. There is a single primary constructor, and all the other constructors must ultimately chain into it. When we define a subclass in Scala, we control which superclass constructor is called by its primary constructor when we define the extends portion of the subclass declaration.
With one constructor: an example of calling a superclass constructor. Example:
Scala
// Scala program to illustrate
// calling a super class constructor

// Primary constructor
class GFG(var message: String)
{
    println(message)
}

// Calling the super class constructor
class Subclass(message: String) extends GFG(message)
{
    def display()
    {
        println("Subclass constructor called")
    }
}

// Creating object
object Main
{
    // Main method
    def main(args: Array[String])
    {
        // Creating object of Subclass
        var obj = new Subclass("Geeksforgeeks");
        obj.display();
    }
}
Geeksforgeeks
Subclass constructor called
In the above example, the subclass is defined to call the primary constructor of the GFG class, which is a single argument constructor that takes message as its parameter. When defining a subclass in Scala, one controls the Superclass constructor that’s called by the Subclass’s primary constructor when defining the extends segment of the Subclass declaration.
With multiple constructors: in a case where the superclass has multiple constructors, any of those constructors can be called from the primary constructor of the subclass. For example, in the following code, the two-argument constructor of the superclass is called by the primary constructor of the subclass using the extends clause, by naming the specific constructor. Example:
Scala
// Scala program to illustrate
// calling a specific super class constructor

// Primary constructor (1)
class GFG(var message: String, var num: Int)
{
    println(message + num)

    // Auxiliary constructor (2)
    def this(message: String)
    {
        this(message, 0)
    }
}

// Calling the super class constructor with 2 arguments
class Subclass(message: String) extends GFG(message, 3000)
{
    def display()
    {
        println("Subclass constructor called")
    }
}

// Creating object
object GFG
{
    // Main method
    def main(args: Array[String])
    {
        // Creating object of Subclass
        var obj = new Subclass("Article count ");
        obj.display();
    }
}
Article count 3000
Subclass constructor called
We can also call the single-argument constructor here; by default, the other argument's value will be 0. Example:
Scala
// Scala program to illustrate
// calling a specific super class constructor

// Primary constructor (1)
class GFG(var message: String, var num: Int)
{
    println(message + num)

    // Auxiliary constructor (2)
    def this(message: String)
    {
        this(message, 0)
    }
}

// Calling the superclass constructor with 1 argument
class Subclass(message: String) extends GFG(message)
{
    def display()
    {
        println("Subclass constructor called")
    }
}

// Creating object
object GFG
{
    // Main method
    def main(args: Array[String])
    {
        // Creating object of Subclass
        var obj = new Subclass("Article Count ");
        obj.display();
    }
}
Article Count 0
Subclass constructor called
|
[
{
"code": null,
"e": 24056,
"s": 24028,
"text": "\n28 Feb, 2022"
},
{
"code": null,
"e": 24484,
"s": 24056,
"text": "Prerequisite – Scala ConstructorsIn Scala, Constructors are used to initialize an object’s state and are executed at the time of object creation. There is a single primary constructor and all the other constructors must ultimately chain into it. When we define a subclass in Scala, we control the superclass constructor that is called by its primary constructor when we define the extends portion of the subclass declaration. "
},
{
"code": null,
"e": 24565,
"s": 24484,
"text": "With one constructor: An example of calling a super class constructor Example: "
},
{
"code": null,
"e": 24571,
"s": 24565,
"text": "Scala"
},
{
"code": "// Scala program to illustrate// calling a super class constructor // Primary constructorclass GFG (var message: String){ println(message)} // Calling the super class constructorclass Subclass (message: String) extends GFG (message){ def display() { println(\"Subclass constructor called\") }} // Creating objectobject Main{ // Main method def main(args: Array[String]) { // Creating object of Subclass var obj = new Subclass(\"Geeksforgeeks\"); obj.display(); }}",
"e": 25093,
"s": 24571,
"text": null
},
{
"code": null,
"e": 25137,
"s": 25095,
"text": "Geeksforgeeks\nSubclass constructor called"
},
{
"code": null,
"e": 25501,
"s": 25139,
"text": "In the above example, the subclass is defined to call the primary constructor of the GFG class, which is a single argument constructor that takes message as its parameter. When defining a subclass in Scala, one controls the Superclass constructor that’s called by the Subclass’s primary constructor when defining the extends segment of the Subclass declaration."
},
{
"code": null,
"e": 25889,
"s": 25501,
"text": "With multiple constructors : In a case with the Superclass having multiple constructors, any of those constructors can be called using the primary constructor of the Subclass. For Example, in the following code, the double argument constructor of the Superclass is called by the primary constructor of the Subclass using the extends clause by defining the specific constructor.Example: "
},
{
"code": null,
"e": 25895,
"s": 25889,
"text": "Scala"
},
{
"code": "// Scala program to illustrate// calling a specific super class constructor // Primary constructor (1)class GFG (var message: String, var num: Int){ println(message+num) // Auxiliary constructor (2) def this (message: String) { this(message, 0) } } // Calling the super class constructor with 2 argumentsclass Subclass (message: String) extends GFG (message, 3000){ def display() { println(\"Subclass constructor called\") }} // Creating objectobject GFG{ // Main method def main(args: Array[String]) { // Creating object of Subclass var obj = new Subclass(\"Article count \"); obj.display(); }}",
"e": 26590,
"s": 25895,
"text": null
},
{
"code": null,
"e": 26639,
"s": 26592,
"text": "Article count 3000\nSubclass constructor called"
},
{
"code": null,
"e": 26749,
"s": 26643,
"text": "We can call the single argument constructor here, By default another argument value will be 0. Example: "
},
{
"code": null,
"e": 26755,
"s": 26749,
"text": "Scala"
},
{
"code": "// Scala program to illustrate// calling a specific super class constructor // Primary constructor (1)class GFG (var message: String, var num: Int){ println(message + num) // Auxiliary constructor (2) def this (message: String) { this(message, 0) } } // Calling the superclass constructor with 1 argumentsclass Subclass (message: String) extends GFG (message){ def display() { println(\"Subclass constructor called\") }} // Creating objectobject GFG{ // Main method def main(args: Array[String]) { // Creating object of Subclass var obj = new Subclass(\"Article Count \"); obj.display(); }}",
"e": 27446,
"s": 26755,
"text": null
},
{
"code": null,
"e": 27492,
"s": 27448,
"text": "Article Count 0\nSubclass constructor called"
},
{
"code": null,
"e": 27506,
"s": 27496,
"text": "BinuKumar"
},
{
"code": null,
"e": 27513,
"s": 27506,
"text": "Picked"
},
{
"code": null,
"e": 27519,
"s": 27513,
"text": "Scala"
},
{
"code": null,
"e": 27537,
"s": 27519,
"text": "Scala-Constructor"
},
{
"code": null,
"e": 27543,
"s": 27537,
"text": "Scala"
},
{
"code": null,
"e": 27641,
"s": 27543,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27662,
"s": 27641,
"text": "Inheritance in Scala"
},
{
"code": null,
"e": 27683,
"s": 27662,
"text": "Hello World in Scala"
},
{
"code": null,
"e": 27698,
"s": 27683,
"text": "Scala | Option"
},
{
"code": null,
"e": 27715,
"s": 27698,
"text": "Scala ListBuffer"
},
{
"code": null,
"e": 27730,
"s": 27715,
"text": "Scala | Traits"
},
{
"code": null,
"e": 27757,
"s": 27730,
"text": "Method Overriding in Scala"
},
{
"code": null,
"e": 27772,
"s": 27757,
"text": "Scala Sequence"
},
{
"code": null,
"e": 27790,
"s": 27772,
"text": "Comments In Scala"
},
{
"code": null,
"e": 27830,
"s": 27790,
"text": "Scala List exists() method with example"
}
] |
Reinforcement learning (RL) 101 with Python | by Gerard Martínez | Towards Data Science
|
In this post we will introduce a few basic concepts of classical RL applied to a very simple task called gridworld, in order to solve the so-called state-value function: a function that tells us how good it is to be in a certain state t, based on the future rewards that can be achieved from that state. To do so we will use three different approaches: (1) dynamic programming, (2) Monte Carlo simulations and (3) Temporal-Difference (TD).
Reinforcement learning is a discipline that tries to develop and understand algorithms to model and train agents that can interact with their environment to maximize a specific goal. The idea is quite straightforward: the agent is aware of its own State t, takes an Action At, which leads it to State t+1, and receives a Reward Rt. The following scheme summarizes this iterative process of St →At →Rt →St+1 →At+1 →Rt+1 →St+2...:
An example of this process would be a robot with the task of collecting empty cans from the ground. For instance, the robot could be given 1 point every time the robot picks a can and 0 the rest of the time. You can imagine that the actions of the robot could be several, e.g. move front/back/left/right, extend the arm up/down, etc. If the robot was fancy enough, the representation of the environment (perceived as states) could be a simple picture of the street in front of the robot. The robot would be set free to wander around and learn to pick the cans, for which we would give a positive reward of +1 per can. We could then set a termination state, for instance picking 10 cans (reaching reward = 10). The robot would loop in the agent-environment cycle until the terminal state would be achieved, which would mean the end of the task or episode, as it is known.
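The St →At →Rt loop described above can be sketched in a few lines of Python. Everything in this toy version (the `step` function, the action set, the 30% pick-success rate) is a made-up illustration, not part of the original example:

```python
import random

random.seed(1)

# Toy agent-environment loop for the can-collecting robot:
# reward +1 per can picked, terminal state at 10 cans collected.
def step(cans, action):
    """Hypothetical environment dynamics: 'pick' succeeds 30% of the time."""
    picked = action == "pick" and random.random() < 0.3
    reward = 1 if picked else 0
    return cans + reward, reward

cans, total_reward = 0, 0
while cans < 10:                       # episode ends at the terminal state
    action = random.choice(["move", "pick"])   # a (very) naive policy
    cans, reward = step(cans, action)
    total_reward += reward

print(total_reward)  # 10: one reward point per can collected
```

The loop always terminates with total reward 10, because the episode is defined to end exactly when the tenth can is picked.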
The gridworld task is similar to the aforementioned example, just that in this case the robot must move through the grid to end up in a termination state (grey squares). Each grid square is a state. The actions that can be taken are up, down, left or right and we assume that these actions are deterministic, meaning every time that the robot picks the option to go up, the robot will go up. There’s an exception, which is when the robot hits the wall. In this case, the final state is the same as the initial state (cannot break the wall). Finally, for every move or attempt against the wall, a reward of -1 will be given except if the initial state is a terminal state, in which case the reward will be 0 and no further action will needed to be taken because the robot would have ended the game.
Now, there are different ways the robot could pick an action. The rules based on which the robot picks an action are what is called the policy. In the simplest of cases, imagine the robot moves in every direction with the same probability, i.e. there is a 25% probability it moves up, 25% left, 25% down and 25% right. Let’s call this the random policy. Following this random policy, the question is: what is the value, or how good is it, for the robot to be in each of the gridworld states/squares?
If the objective is to end up in a grey square, it is evident that the squares next to a grey one are better, because there is a higher chance of ending up in a terminal state when following the random policy. But how can we quantify how good each of these squares/states is? Or, equivalently, how can we calculate a function V(St) (known as the state-value function) that gives us the real value of each state St?
Let’s first talk about the concept of value. Value could be calculated as the sum of all future rewards that can be achieved from a state t. The intuitive difference between value and reward is like happiness to pleasure. While immediate pleasure can be satisfying, it does not ensure a long lasting happiness because it is not taking into consideration all the future rewards, it only takes care of the immediate next one. In RL, the value of a state is the same: the total value is not only the immediate reward but the sum of all future rewards that can be achieved.
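In symbols, the value accumulated from step t is Gt = Rt+1 + γRt+2 + γ²Rt+3 + ..., where γ is the discount factor. A minimal sketch of this computation (the reward sequences below are purely illustrative):

```python
# Discounted return: G = r_1 + gamma*r_2 + gamma^2*r_3 + ...
# Computed backwards so each step reuses the return of its successor.
def discounted_return(rewards, gamma):
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G

# With gamma = 1 the return is just the plain sum of rewards.
print(discounted_return([-1, -1, -1], gamma=1.0))  # -3.0
# With gamma = 0.5 later rewards count less: -1 + 0.5*(-1) + 0.25*(-1)
print(discounted_return([-1, -1, -1], gamma=0.5))  # -1.75
```

This is exactly the quantity the state-value function averages over: V(s) is the expected discounted return when starting from state s.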
A way to solve the aforementioned state-value function is to use policy iteration, an algorithm included in a field of mathematics called dynamic programming. The algorithm is shown in the following box:
The key of the algorithm is the assignment to V(s), which you can find commented here:
The idea is that we start with a value function that is an array of 4x4 dimensions (as big as the grid) with zeroes. Now we iterate for each state and we calculate its new value as the weighted sum of the reward (-1) plus the value of each neighbor states (s’). Notice two things: the V(s’) is the expected value of the final/neighbor state s’ (at the beginning the expected value is 0 as we initialize the value function with zeroes). Finally, the V(s’) is multiplied by a gamma, which is the discounting factor. In our case we use gamma=1 but the idea of the discounting factor is that immediate rewards (the r in our equation) are more important than the future rewards (reflected by the value of s’) and we can adjust the gamma to reflect this fact.
Finally, notice that we can repeat this process over and over, in which we “sweep” and update the state-value function for all the states. These values are updated iteratively until reaching convergence. In fact, in the iterative policy evaluation algorithm, you can see we calculate deltas that reflect how much the value of a state changes with respect to its previous value. These deltas decay over the iterations and approach 0 in the limit.
Here’s an example of how the value function is updated:
Notice in the right column that as we update the values of the states we can now generate more and more efficient policies until we reach the optimal “rules” a robot must follow to end up in the termination states as fast as possible.
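Those arrows can also be recovered programmatically: once V(s) has converged, act greedily in each state towards the highest-valued neighbour. Here is a sketch against the converged values of the 4x4 grid; the `greedy_actions` helper is my own addition, not part of the original code:

```python
import numpy as np

# Converged state values for the 4x4 gridworld (gamma = 1), as printed
# by the iterative policy evaluation at iteration 1000.
V = np.array([[  0., -14., -20., -22.],
              [-14., -18., -20., -20.],
              [-20., -20., -18., -14.],
              [-22., -20., -14.,   0.]])

actions = {"up": (-1, 0), "down": (1, 0), "right": (0, 1), "left": (0, -1)}

def greedy_actions(state, V):
    """Return the action(s) leading to the highest-valued successor state."""
    best, best_val = [], -np.inf
    for name, (di, dj) in actions.items():
        i, j = state[0] + di, state[1] + dj
        if not (0 <= i < 4 and 0 <= j < 4):  # hitting the wall: stay put
            i, j = state
        if V[i, j] > best_val:
            best, best_val = [name], V[i, j]
        elif V[i, j] == best_val:
            best.append(name)
    return best

print(greedy_actions((0, 1), V))  # ['left']: straight to the terminal corner
```

This is the policy-improvement step: evaluating the random policy and then acting greedily on the resulting values already yields an efficient policy for this grid.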
Finally, here’s a Python implementation of the iterative policy evaluation and update. Observe in the end how the deltas for each state decay to 0 as we reach convergence.
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
%pylab inline
import random
gamma = 1 # discounting rate
rewardSize = -1
gridSize = 4
terminationStates = [[0,0], [gridSize-1, gridSize-1]]
actions = [[-1, 0], [1, 0], [0, 1], [0, -1]]
numIterations = 1000
def actionRewardFunction(initialPosition, action):
if initialPosition in terminationStates:
return initialPosition, 0
reward = rewardSize
finalPosition = np.array(initialPosition) + np.array(action)
if -1 in finalPosition or 4 in finalPosition:
finalPosition = initialPosition
return finalPosition, reward
valueMap = np.zeros((gridSize, gridSize))
states = [[i, j] for i in range(gridSize) for j in range(gridSize)]
# values of the value function at step 0
valueMap
array([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
deltas = []
for it in range(numIterations):
copyValueMap = np.copy(valueMap)
deltaState = []
for state in states:
weightedRewards = 0
for action in actions:
finalPosition, reward = actionRewardFunction(state, action)
weightedRewards += (1/len(actions))*(reward+(gamma*valueMap[finalPosition[0], finalPosition[1]]))
deltaState.append(np.abs(copyValueMap[state[0], state[1]]-weightedRewards))
copyValueMap[state[0], state[1]] = weightedRewards
deltas.append(deltaState)
valueMap = copyValueMap
if it in [0,1,2,9, 99, numIterations-1]:
print("Iteration {}".format(it+1))
print(valueMap)
print("")
Iteration 1
[[ 0. -1. -1. -1.]
[-1. -1. -1. -1.]
[-1. -1. -1. -1.]
[-1. -1. -1. 0.]]
Iteration 2
[[ 0. -1.75 -2. -2. ]
[-1.75 -2. -2. -2. ]
[-2. -2. -2. -1.75]
[-2. -2. -1.75 0. ]]
Iteration 3
[[ 0. -2.4375 -2.9375 -3. ]
[-2.4375 -2.875 -3. -2.9375]
[-2.9375 -3. -2.875 -2.4375]
[-3. -2.9375 -2.4375 0. ]]
Iteration 10
[[ 0. -6.13796997 -8.35235596 -8.96731567]
[-6.13796997 -7.73739624 -8.42782593 -8.35235596]
[-8.35235596 -8.42782593 -7.73739624 -6.13796997]
[-8.96731567 -8.35235596 -6.13796997 0. ]]
Iteration 100
[[ 0. -13.94260509 -19.91495107 -21.90482522]
[-13.94260509 -17.92507693 -19.91551999 -19.91495107]
[-19.91495107 -19.91551999 -17.92507693 -13.94260509]
[-21.90482522 -19.91495107 -13.94260509 0. ]]
Iteration 1000
[[ 0. -14. -20. -22.]
[-14. -18. -20. -20.]
[-20. -20. -18. -14.]
[-22. -20. -14. 0.]]
plt.figure(figsize=(20, 10))
plt.plot(deltas)
While the previous approach assumes we have a complete knowledge of the environment, many times this is not the case. Monte Carlo (MC) methods are able to learn directly from experience or episodes rather than relying on the prior knowledge of the environment dynamics.
The term “Monte Carlo” is often used broadly for any estimation method whose operation involves a significant random component.
Interestingly, in many cases it is possible to generate experiences sampled according to the desired probability distributions, but infeasible to obtain the distributions in explicit form.
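As a toy illustration of that point: the expectation of max(X, Y) for two independent Uniform(0, 1) variables is trivial to estimate by sampling, with no need to write down the distribution of the maximum. (This example is my own; the closed form happens to be 2/3, which lets us sanity-check the estimate.)

```python
import random

random.seed(0)

# Monte Carlo estimate of E[max(X, Y)] with X, Y ~ Uniform(0, 1).
n = 100_000
estimate = sum(max(random.random(), random.random()) for _ in range(n)) / n
print(round(estimate, 2))  # close to 2/3
```

MC value estimation works the same way: averaging sampled returns approximates an expectation we never compute in closed form.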
Here’s the algorithm to estimate the value function following MC:
The Monte Carlo approach to solve the gridworld task is somewhat naive but effective. Basically we can produce n simulations starting from random points of the grid, and let the robot move randomly to the four directions until a termination state is achieved. For each simulation we save the 4 values: (1) the initial state, (2) the action taken, (3) the reward received and (4) the final state. In the end, a simulation is just an array containing x arrays of these values, x being the number of steps the robot had to take until reaching a terminal state.
Now, from these simulations, we iterate backwards from the end of the “experience” array and compute G as the return accumulated so far in the same episode (weighted by gamma, the discount factor) plus the reward received at that step. We then store G in an array of Returns(St). Finally, for each state we compute the average of the Returns(St) and we set this as the state value at a particular iteration.
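On a toy three-step episode, that backward pass looks like this (γ = 0.6 and the rewards are illustrative):

```python
gamma = 0.6
episode_rewards = [-1, -1, -1]  # reward received at each of the 3 steps

G = 0
returns_backward = []           # G computed at each step, walking backwards
for r in reversed(episode_rewards):
    G = gamma * G + r
    returns_backward.append(G)

# The earliest step accumulates the most (discounted) future penalty.
print([round(g, 2) for g in returns_backward])  # [-1.0, -1.6, -1.96]
```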
Here you can find a Python implementation of this approach applied to the same previous task: the gridworld.
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
%pylab inline
import random
Populating the interactive namespace from numpy and matplotlib
# parameters
gamma = 0.6 # discounting rate
rewardSize = -1
gridSize = 4
terminationStates = [[0,0], [gridSize-1, gridSize-1]]
actions = [[-1, 0], [1, 0], [0, 1], [0, -1]]
numIterations = 10000
# initialization
V = np.zeros((gridSize, gridSize))
returns = {(i, j):list() for i in range(gridSize) for j in range(gridSize)}
deltas = {(i, j):list() for i in range(gridSize) for j in range(gridSize)}
states = [[i, j] for i in range(gridSize) for j in range(gridSize)]
# utils
def generateEpisode():
initState = random.choice(states[1:-1])
episode = []
while True:
if list(initState) in terminationStates:
return episode
action = random.choice(actions)
finalState = np.array(initState)+np.array(action)
if -1 in list(finalState) or gridSize in list(finalState):
finalState = initState
episode.append([list(initState), action, rewardSize, list(finalState)])
initState = finalState
for it in tqdm(range(numIterations)):
episode = generateEpisode()
G = 0
#print(episode)
for i, step in enumerate(episode[::-1]):
G = gamma*G + step[2]
if step[0] not in [x[0] for x in episode[::-1][len(episode)-i:]]:
idx = (step[0][0], step[0][1])
returns[idx].append(G)
newValue = np.average(returns[idx])
deltas[idx[0], idx[1]].append(np.abs(V[idx[0], idx[1]]-newValue))
V[idx[0], idx[1]] = newValue
100%|██████████| 10000/10000 [00:09<00:00, 1092.69it/s]
V
array([[ 0. , -4.62939535, -8.06323419, -10.33670955],
[ -5.01729474, -6.44643267, -7.91912568, -8.35701949],
[ -8.1403594 , -7.82764654, -6.51265382, -4.69684525],
[-10.6207086 , -8.28126595, -4.70611477, 0. ]])
# using gamma = 1
plt.figure(figsize=(20,10))
all_series = [list(x)[:50] for x in deltas.values()]
for series in all_series:
plt.plot(series)
# using gamma = 0.6
plt.figure(figsize=(20,10))
all_series = [list(x)[:50] for x in deltas.values()]
for series in all_series:
plt.plot(series)
Note that varying the gamma can decrease the convergence time, as we can see in the last two plots using gamma=1 and gamma=0.6. The strengths of this approach are:
Technically, we don’t have to compute all the state-values for all the states if we don’t want. We could just focus on a particular grid point and start all the simulations from that initial state to sample episodes that include that state, ignoring all others. This can radically decrease the computational expense.
As we said before, this approach does not require a full understanding of the environment dynamics and we can learn directly from experience or simulation.
Finally, the last method we will explore is temporal-difference (TD). This third method is said to merge the best of dynamic programming and the best of Monte Carlo approaches. Here we enumerate some of its strong points:
As in the dynamic programming method, during the optimization of the value function for an initial state, we use the expected value of the next state to enrich the prediction. This process is called bootstrapping.
As in Monte Carlo, we don’t have to have a model of the environment dynamics and can learn directly from experience.
Furthermore, unlike MC, we don’t have to wait until the end of the episode to start learning. In fact, in the case of TD(0) or one-step TD, we learn at each and every step we take. This is particularly powerful because, on one hand, the nature of learning is truly “online” and, on the other hand, we can deal with tasks which do not have a clear terminal state, learning and approximating value functions ad infinitum (suitable for non-deterministic, non-episodic or time-varying value functions).
Here’s the algorithm to calculate the value function using temporal-difference:
And here’s the Jupyter notebook with the Python implementation.
Notice that adjusting alpha and gamma parameters is critical in this case to reach convergence.
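The linked notebook is not reproduced here, but the core of TD(0) is the update V(St) ← V(St) + α[Rt+1 + γV(St+1) − V(St)]. A minimal sketch for the same gridworld under the random policy could look like this (the learning rate alpha = 0.1 is an assumption, not a value from the article):

```python
import random
import numpy as np

# TD(0) sketch for the 4x4 gridworld under the random policy.
# alpha (learning rate) is a hypothetical choice, not from the article.
gamma = 0.6
alpha = 0.1
rewardSize = -1
gridSize = 4
terminationStates = [(0, 0), (gridSize - 1, gridSize - 1)]
actions = [(-1, 0), (1, 0), (0, 1), (0, -1)]
numIterations = 10000

V = np.zeros((gridSize, gridSize))
states = [(i, j) for i in range(gridSize) for j in range(gridSize)]

for _ in range(numIterations):
    state = random.choice(states[1:-1])  # random non-terminal start
    while state not in terminationStates:
        di, dj = random.choice(actions)
        nxt = (state[0] + di, state[1] + dj)
        if not (0 <= nxt[0] < gridSize and 0 <= nxt[1] < gridSize):
            nxt = state  # hitting the wall leaves the robot in place
        # TD(0) update: bootstrap on the current estimate of the next state
        V[state] += alpha * (rewardSize + gamma * V[nxt] - V[state])
        state = nxt
```

Unlike the Monte Carlo loop above, the update happens at every single step, inside the episode, which is exactly the "online" property discussed in the bullet points.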
Finally, I’d like to mention that most of the work here is inspired by or drawn from the latest edition of Richard S. Sutton and Andrew G. Barto’s book Reinforcement Learning: An Introduction, an amazing work that the authors have made publicly accessible here.
|
[
{
"code": null,
"e": 601,
"s": 172,
"text": "In this post we will introduce few basic concepts of classical RL applied to a very simple task called gridworld in order to solve the so-called state-value function, a function that tells us how good is to be in a certain state t based on future rewards that can be achieved from that state. To do so we will use three different approaches: (1) dynamic programming, (2) Monte Carlo simulations and (3) Temporal-Difference (TD)."
},
{
"code": null,
"e": 1028,
"s": 601,
"text": "Reinforcement learning is a discipline that tries to develop and understand algorithms to model and train agents that can interact with its environment to maximize a specific goal. The idea is quite straightforward: the agent is aware of its own State t, takes an Action At, which leads him to State t+1 and receives a reward Rt. The following scheme summarizes this iterative process of St →At →Rt →St+1 →At+1 →Rt+1 →St+2...:"
},
{
"code": null,
"e": 1899,
"s": 1028,
"text": "An example of this process would be a robot with the task of collecting empty cans from the ground. For instance, the robot could be given 1 point every time the robot picks a can and 0 the rest of the time. You can imagine that the actions of the robot could be several, e.g. move front/back/left/right, extend the arm up/down, etc. If the robot was fancy enough, the representation of the environment (perceived as states) could be a simple picture of the street in front of the robot. The robot would be set free to wander around and learn to pick the cans, for which we would give a positive reward of +1 per can. We could then set a termination state, for instance picking 10 cans (reaching reward = 10). The robot would loop in the agent-environment cycle until the terminal state would be achieved, which would mean the end of the task or episode, as it is known."
},
{
"code": null,
"e": 2697,
"s": 1899,
"text": "The gridworld task is similar to the aforementioned example, just that in this case the robot must move through the grid to end up in a termination state (grey squares). Each grid square is a state. The actions that can be taken are up, down, left or right and we assume that these actions are deterministic, meaning every time that the robot picks the option to go up, the robot will go up. There’s an exception, which is when the robot hits the wall. In this case, the final state is the same as the initial state (cannot break the wall). Finally, for every move or attempt against the wall, a reward of -1 will be given except if the initial state is a terminal state, in which case the reward will be 0 and no further action will needed to be taken because the robot would have ended the game."
},
{
"code": null,
"e": 3213,
"s": 2697,
"text": "Now, there are different ways the robot could pick an action. These rules based on which the robot picks an action is what is called the policy. In the simplest of cases, imagine the robot would move to every direction with the same probability, i.e. there is 25% probability it moves to top, 25% to left, 25% to bottom and 25% to right. Let’s call this the random policy. Following this random policy, the question is: what’s the value or how good it is for the robot to be in each of the gridworld states/squares?"
},
{
"code": null,
"e": 3618,
"s": 3213,
"text": "If the objective is to end up in a grey square, it is evident that the squares next to a grey one are better because there’s higher chance to end up in a terminal state following the random policy. But how can we quantify how good are each of these squares/states? Or, what is the same, how can we calculate a function V(St) (known as state-value function) that for each state St gives us its real value?"
},
{
"code": null,
"e": 4188,
"s": 3618,
"text": "Let’s first talk about the concept of value. Value could be calculated as the sum of all future rewards that can be achieved from a state t. The intuitive difference between value and reward is like happiness to pleasure. While immediate pleasure can be satisfying, it does not ensure a long lasting happiness because it is not taking into consideration all the future rewards, it only takes care of the immediate next one. In RL, the value of a state is the same: the total value is not only the immediate reward but the sum of all future rewards that can be achieved."
},
{
"code": null,
"e": 4392,
"s": 4188,
"text": "A way to solve the aforementioned state-value function is to use policy iteration, an algorithm included in a field of mathematics called dynamic programming. The algorithm is shown in the following box:"
},
{
"code": null,
"e": 4479,
"s": 4392,
"text": "The key of the algorithm is the assignment to V(s), which you can find commented here:"
},
{
"code": null,
"e": 5233,
"s": 4479,
"text": "The idea is that we start with a value function that is an array of 4x4 dimensions (as big as the grid) with zeroes. Now we iterate for each state and we calculate its new value as the weighted sum of the reward (-1) plus the value of each neighbor states (s’). Notice two things: the V(s’) is the expected value of the final/neighbor state s’ (at the beginning the expected value is 0 as we initialize the value function with zeroes). Finally, the V(s’) is multiplied by a gamma, which is the discounting factor. In our case we use gamma=1 but the idea of the discounting factor is that immediate rewards (the r in our equation) are more important than the future rewards (reflected by the value of s’) and we can adjust the gamma to reflect this fact."
},
{
"code": null,
"e": 5693,
"s": 5233,
"text": "Finally, notice that we can repeat this process over and over in which we “sweep” and update the state-value function for all the states. These values can get iteratively updated until reaching convergence. In fact in the iterative policy evaluation algorithm, you can see we calculate some delta that reflect how much the value of a state changes respect the previous value. These deltas decay over the iterations and are supposed to reach 0 at the infinity."
},
{
"code": null,
"e": 5749,
"s": 5693,
"text": "Here’s an example of how the value function is updated:"
},
{
"code": null,
"e": 5984,
"s": 5749,
"text": "Notice in the right column that as we update the values of the states we can now generate more and more efficient policies until we reach the optimal “rules” a robot must follow to end up in the termination states as fast as possible."
},
{
"code": null,
"e": 6156,
"s": 5984,
"text": "Finally, here’s a Python implementation of the iterative policy evaluation and update. Observe in the end how the deltas for each state decay to 0 as we reach convergence."
},
{
"code": null,
"e": 6306,
"s": 6156,
"text": "import numpy as np\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"darkgrid\")\n%pylab inline\nimport random\n"
},
{
"code": null,
"e": 6370,
"s": 6306,
"text": "Populating the interactive namespace from numpy and matplotlib\n"
},
{
"code": null,
"e": 6652,
"s": 6370,
"text": "/home/gerard/miniconda3/lib/python3.5/site-packages/IPython/core/magics/pylab.py:161: UserWarning: pylab import has clobbered these variables: ['random', 'gamma']\n`%matplotlib` prevents importing * from pylab and numpy\n \"\\n`%matplotlib` prevents importing * from pylab and numpy\"\n"
},
{
"code": null,
"e": 6831,
"s": 6652,
"text": "gamma = 1 # discounting rate\nrewardSize = -1\ngridSize = 4\nterminationStates = [[0,0], [gridSize-1, gridSize-1]]\nactions = [[-1, 0], [1, 0], [0, 1], [0, -1]]\nnumIterations = 1000\n"
},
{
"code": null,
"e": 7194,
"s": 6831,
"text": "def actionRewardFunction(initialPosition, action):\n \n if initialPosition in terminationStates:\n return initialPosition, 0\n \n reward = rewardSize\n finalPosition = np.array(initialPosition) + np.array(action)\n if -1 in finalPosition or 4 in finalPosition: \n finalPosition = initialPosition\n \n return finalPosition, reward\n"
},
{
"code": null,
"e": 7305,
"s": 7194,
"text": "valueMap = np.zeros((gridSize, gridSize))\nstates = [[i, j] for i in range(gridSize) for j in range(gridSize)]\n"
},
{
"code": null,
"e": 7356,
"s": 7305,
"text": "# values of the value function at step 0\nvalueMap\n"
},
{
"code": null,
"e": 7457,
"s": 7356,
"text": "array([[0., 0., 0., 0.],\n [0., 0., 0., 0.],\n [0., 0., 0., 0.],\n [0., 0., 0., 0.]])"
},
{
"code": null,
"e": 8165,
"s": 7457,
"text": "deltas = []\nfor it in range(numIterations):\n copyValueMap = np.copy(valueMap)\n deltaState = []\n for state in states:\n weightedRewards = 0\n for action in actions:\n finalPosition, reward = actionRewardFunction(state, action)\n weightedRewards += (1/len(actions))*(reward+(gamma*valueMap[finalPosition[0], finalPosition[1]]))\n deltaState.append(np.abs(copyValueMap[state[0], state[1]]-weightedRewards))\n copyValueMap[state[0], state[1]] = weightedRewards\n deltas.append(deltaState)\n valueMap = copyValueMap\n if it in [0,1,2,9, 99, numIterations-1]:\n print(\"Iteration {}\".format(it+1))\n print(valueMap)\n print(\"\")\n \n"
},
{
"code": null,
"e": 9096,
"s": 8165,
"text": "Iteration 1\n[[ 0. -1. -1. -1.]\n [-1. -1. -1. -1.]\n [-1. -1. -1. -1.]\n [-1. -1. -1. 0.]]\n\nIteration 2\n[[ 0. -1.75 -2. -2. ]\n [-1.75 -2. -2. -2. ]\n [-2. -2. -2. -1.75]\n [-2. -2. -1.75 0. ]]\n\nIteration 3\n[[ 0. -2.4375 -2.9375 -3. ]\n [-2.4375 -2.875 -3. -2.9375]\n [-2.9375 -3. -2.875 -2.4375]\n [-3. -2.9375 -2.4375 0. ]]\n\nIteration 10\n[[ 0. -6.13796997 -8.35235596 -8.96731567]\n [-6.13796997 -7.73739624 -8.42782593 -8.35235596]\n [-8.35235596 -8.42782593 -7.73739624 -6.13796997]\n [-8.96731567 -8.35235596 -6.13796997 0. ]]\n\nIteration 100\n[[ 0. -13.94260509 -19.91495107 -21.90482522]\n [-13.94260509 -17.92507693 -19.91551999 -19.91495107]\n [-19.91495107 -19.91551999 -17.92507693 -13.94260509]\n [-21.90482522 -19.91495107 -13.94260509 0. ]]\n\nIteration 1000\n[[ 0. -14. -20. -22.]\n [-14. -18. -20. -20.]\n [-20. -20. -18. -14.]\n [-22. -20. -14. 0.]]\n\n"
},
{
"code": null,
"e": 9143,
"s": 9096,
"text": "plt.figure(figsize=(20, 10))\nplt.plot(deltas)\n"
},
{
"code": null,
"e": 9879,
"s": 9143,
"text": "[<matplotlib.lines.Line2D at 0x7f50bc077780>,\n <matplotlib.lines.Line2D at 0x7f50bc077a90>,\n <matplotlib.lines.Line2D at 0x7f50bc077cc0>,\n <matplotlib.lines.Line2D at 0x7f50bc077ef0>,\n <matplotlib.lines.Line2D at 0x7f50af3ac160>,\n <matplotlib.lines.Line2D at 0x7f50af3ac390>,\n <matplotlib.lines.Line2D at 0x7f50af3ac5c0>,\n <matplotlib.lines.Line2D at 0x7f50af3ac7f0>,\n <matplotlib.lines.Line2D at 0x7f50af3aca20>,\n <matplotlib.lines.Line2D at 0x7f50af3acc50>,\n <matplotlib.lines.Line2D at 0x7f50af3ace80>,\n <matplotlib.lines.Line2D at 0x7f50af3af0f0>,\n <matplotlib.lines.Line2D at 0x7f50af3af320>,\n <matplotlib.lines.Line2D at 0x7f50af3af550>,\n <matplotlib.lines.Line2D at 0x7f50af3af780>,\n <matplotlib.lines.Line2D at 0x7f50af3af9b0>]"
},
{
"code": null,
"e": 10149,
"s": 9879,
"text": "While the previous approach assumes we have a complete knowledge of the environment, many times this is not the case. Monte Carlo (MC) methods are able to learn directly from experience or episodes rather than relying on the prior knowledge of the environment dynamics."
},
{
"code": null,
"e": 10277,
"s": 10149,
"text": "The term “Monte Carlo” is often used broadly for any estimation method whose operation involves a significant random component."
},
{
"code": null,
"e": 10462,
"s": 10277,
"text": "Interestingly, in many cases is possible to generate experiences sampled according to the desired probability distributions but infeasible to obtain the distributions in explicit form."
},
{
"code": null,
"e": 10528,
"s": 10462,
"text": "Here’s the algorithm to estimate the value function following MC:"
},
{
"code": null,
"e": 11086,
"s": 10528,
"text": "The Monte Carlo approach to solve the gridworld task is somewhat naive but effective. Basically we can produce n simulations starting from random points of the grid, and let the robot move randomly to the four directions until a termination state is achieved. For each simulation we save the 4 values: (1) the initial state, (2) the action taken, (3) the reward received and (4) the final state. In the end, a simulation is just an array containing x arrays of these values, x being the number of steps the robot had to take until reaching a terminal state."
},
{
"code": null,
"e": 11483,
"s": 11086,
"text": "Now, from these simulations, we iterate from the end of the “experience” array, and compute G as the previous state value in the same experience (weighed by gamma, the discount factor) plus the received reward in that state. We then store G in an array of Returns(St). Finally, for each state we compute the average of the Returns(St) and we set this as the state value at a particular iteration."
},
{
"code": null,
"e": 11592,
"s": 11483,
"text": "Here you can find a Python implementation of this approach applied to the same previous task: the worldgrid."
},
{
"code": null,
"e": 11742,
"s": 11592,
"text": "import numpy as np\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"darkgrid\")\n%pylab inline\nimport random\n"
},
{
"code": null,
"e": 11806,
"s": 11742,
"text": "Populating the interactive namespace from numpy and matplotlib\n"
},
{
"code": null,
"e": 12001,
"s": 11806,
"text": "# parameters\ngamma = 0.6 # discounting rate\nrewardSize = -1\ngridSize = 4\nterminationStates = [[0,0], [gridSize-1, gridSize-1]]\nactions = [[-1, 0], [1, 0], [0, 1], [0, -1]]\nnumIterations = 10000\n"
},
{
"code": null,
"e": 12273,
"s": 12001,
"text": "# initialization\nV = np.zeros((gridSize, gridSize))\nreturns = {(i, j):list() for i in range(gridSize) for j in range(gridSize)}\ndeltas = {(i, j):list() for i in range(gridSize) for j in range(gridSize)}\nstates = [[i, j] for i in range(gridSize) for j in range(gridSize)]\n"
},
{
"code": null,
"e": 12769,
"s": 12273,
"text": "# utils\ndef generateEpisode():\n initState = random.choice(states[1:-1])\n episode = []\n while True:\n if list(initState) in terminationStates:\n return episode\n action = random.choice(actions)\n finalState = np.array(initState)+np.array(action)\n if -1 in list(finalState) or gridSize in list(finalState):\n finalState = initState\n episode.append([list(initState), action, rewardSize, list(finalState)])\n initState = finalState\n"
},
{
"code": null,
"e": 13264,
"s": 12769,
"text": "for it in tqdm(range(numIterations)):\n episode = generateEpisode()\n G = 0\n #print(episode)\n for i, step in enumerate(episode[::-1]):\n G = gamma*G + step[2]\n if step[0] not in [x[0] for x in episode[::-1][len(episode)-i:]]:\n idx = (step[0][0], step[0][1])\n returns[idx].append(G)\n newValue = np.average(returns[idx])\n deltas[idx[0], idx[1]].append(np.abs(V[idx[0], idx[1]]-newValue))\n V[idx[0], idx[1]] = newValue\n"
},
{
"code": null,
"e": 13321,
"s": 13264,
"text": "100%|██████████| 10000/10000 [00:09<00:00, 1092.69it/s]\n"
},
{
"code": null,
"e": 13324,
"s": 13321,
"text": "V\n"
},
{
"code": null,
"e": 13585,
"s": 13324,
"text": "array([[ 0. , -4.62939535, -8.06323419, -10.33670955],\n [ -5.01729474, -6.44643267, -7.91912568, -8.35701949],\n [ -8.1403594 , -7.82764654, -6.51265382, -4.69684525],\n [-10.6207086 , -8.28126595, -4.70611477, 0. ]])"
},
{
"code": null,
"e": 13732,
"s": 13585,
"text": "# using gamma = 1\nplt.figure(figsize=(20,10))\nall_series = [list(x)[:50] for x in deltas.values()]\nfor series in all_series:\n plt.plot(series)\n"
},
{
"code": null,
"e": 13881,
"s": 13732,
"text": "# using gamma = 0.6\nplt.figure(figsize=(20,10))\nall_series = [list(x)[:50] for x in deltas.values()]\nfor series in all_series:\n plt.plot(series)\n"
},
{
"code": null,
"e": 14051,
"s": 13884,
"text": "Note that varying the gamma can decrease the convergence time as we can see in the last two plots using gamma=1 and gamma=0.6. The good side of this approach is that:"
},
{
"code": null,
"e": 14523,
"s": 14051,
"text": "Technically, we don’t have to compute all the state-values for all the states if we don’t want. We could just focus on a particular grid point and start all the simulations from that initial state to sample episodes that include that state, ignoring all others. This can radically decrease the computational expense.As we said before, this approach does not require a full understanding of the environment dynamics and we can learn directly from experience or simulation."
},
{
"code": null,
"e": 14840,
"s": 14523,
"text": "Technically, we don’t have to compute all the state-values for all the states if we don’t want. We could just focus on a particular grid point and start all the simulations from that initial state to sample episodes that include that state, ignoring all others. This can radically decrease the computational expense."
},
{
"code": null,
"e": 14996,
"s": 14840,
"text": "As we said before, this approach does not require a full understanding of the environment dynamics and we can learn directly from experience or simulation."
},
{
"code": null,
"e": 15218,
"s": 14996,
"text": "Finally, the last method we will explore is temporal-difference (TD). This third method is said to merge the best of dynamic programming and the best of Monte Carlo approaches. Here we enumerate some of its strong points:"
},
{
"code": null,
"e": 16034,
"s": 15218,
"text": "As the dynamic programming method, during the optimization of the value function for an initial state, we use the expected values of next state to enrich the prediction. This process is called bootstrapping.As in Monte Carlo, we don’t have to have a model of the environment dynamics and can learn directly from experience.Furthermore, unlike MC, we don’t have to wait until the end of the episode to start learning. In fact, in the case of TD(0) or one-step TD, we learn at each and every step we take. This particularly powerful because: on one hand, the nature of learning is truly “online” and on the other hand we can deal with tasks which do not have a clear terminal state, learning and approximating value functions ad infinitum (suitable for non-deterministic non-episodic or time-varying value functions)."
},
{
"code": null,
"e": 16242,
"s": 16034,
"text": "As the dynamic programming method, during the optimization of the value function for an initial state, we use the expected values of next state to enrich the prediction. This process is called bootstrapping."
},
{
"code": null,
"e": 16359,
"s": 16242,
"text": "As in Monte Carlo, we don’t have to have a model of the environment dynamics and can learn directly from experience."
},
{
"code": null,
"e": 16852,
"s": 16359,
"text": "Furthermore, unlike MC, we don’t have to wait until the end of the episode to start learning. In fact, in the case of TD(0) or one-step TD, we learn at each and every step we take. This particularly powerful because: on one hand, the nature of learning is truly “online” and on the other hand we can deal with tasks which do not have a clear terminal state, learning and approximating value functions ad infinitum (suitable for non-deterministic non-episodic or time-varying value functions)."
},
{
"code": null,
"e": 16932,
"s": 16852,
"text": "Here’s the algorithm to calculate the value function using temporal-difference:"
},
{
"code": null,
"e": 16995,
"s": 16932,
"text": "And here’s the jupyter notebook with the Python implementation"
},
{
"code": null,
"e": 17091,
"s": 16995,
"text": "Notice that adjusting alpha and gamma parameters is critical in this case to reach convergence."
}
] |
Tkinter scrollbar for frame
|
Let’s suppose you want to organize a set of widgets inside an application window; then you can use Frames. Tkinter Frames are generally used to organize and group many widgets. For a particular application, we can also add a scrollbar to the frame. In order to add a scrollbar, we generally use the Scrollbar(...options) constructor.
#Import the required library
from tkinter import *
#Create an instance of tkinter frame or window
win = Tk()
#Define the geometry
win.geometry("750x400")
#Create a Frame
frame= Frame(win)
def close():
win.destroy()
#Create a Label widget in the frame
text= Label(frame, text= "Register", font= ('Helvetica bold', 14))
text.pack(pady=20)
#ADDING A SCROLLBAR
myscrollbar=Scrollbar(frame,orient="vertical")
myscrollbar.pack(side="right",fill="y")
#Add Entry Widgets
Label(frame, text= "Username").pack()
username= Entry(frame, width= 20)
username.pack()
Label(frame, text= "password").pack()
password= Entry(frame, show="*", width= 15)
password.pack()
Label(frame, text= "Email Id").pack()
email= Entry(frame, width= 15)
email.pack()
#Create widget in the frame
button= Button(frame, text= "Close",font= ('Helvetica bold',14),
command= close)
button.pack(pady=20)
frame.pack()
win.mainloop()
Running the above code will display a window with a frame containing Entry widgets. All the widgets in the frame are aligned vertically with a scrollbar.
|
[
{
"code": null,
"e": 1397,
"s": 1062,
"text": "Let’s suppose you want to organize a set of widgets inside an application window, then you can use Frames. Tkinter Frames are generally used to organize and group many widgets. For a particular application, we can also add a scrollbar in the frames. In order to add a scrollbar, we generally use to the Scrollbar(...options) function."
},
{
"code": null,
"e": 2289,
"s": 1397,
"text": "#Import the required library\nfrom tkinter import *\n#Create an instance of tkinter frame or window\nwin = Tk()\n#Define the geometry\nwin.geometry(\"750x400\")\n#Create a Frame\nframe= Frame(win)\ndef close():\n win.destroy()\n#Create a Label widget in the frame\ntext= Label(frame, text= \"Register\", font= ('Helvetica bold', 14))\ntext.pack(pady=20)\n#ADDING A SCROLLBAR\nmyscrollbar=Scrollbar(frame,orient=\"vertical\")\nmyscrollbar.pack(side=\"right\",fill=\"y\")\n#Add Entry Widgets\nLabel(frame, text= \"Username\").pack()\nusername= Entry(frame, width= 20)\nusername.pack()\nLabel(frame, text= \"password\").pack()\npassword= Entry(frame, show=\"*\", width= 15)\npassword.pack()\nLabel(frame, text= \"Email Id\").pack()\nemail= Entry(frame, width= 15)\nemail.pack()\n#Create widget in the frame\nbutton= Button(frame, text= \"Close\",font= ('Helvetica bold',14),\ncommand= close)\nbutton.pack(pady=20)\nframe.pack()\nwin.mainloop()"
},
{
"code": null,
"e": 2443,
"s": 2289,
"text": "Running the above code will display a window with a frame containing Entry widgets. All the widgets in the frame are aligned vertically with a scrollbar."
}
] |
How can I manually set proxy settings in Python Selenium?
|
We can manually set proxy settings using Selenium webdriver in Python. It is done with the Proxy and DesiredCapabilities classes. We create a Proxy object, configure its type and addresses, and then call its add_to_capabilities method, passing the browser's DesiredCapabilities dictionary as a parameter.
Code Implementation
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
#add proxy’s ip and port
p = '<proxy ip, port>'
pxy = Proxy()
#set proxy type
pxy.proxy_type = ProxyType.MANUAL
#http proxy
pxy.http_proxy = p
#ssl proxy
pxy.ssl_proxy = p
#object of DesiredCapabilities
c = webdriver.DesiredCapabilities.CHROME
#set proxy browser capabilties
pxy.add_to_capabilities(c)
#set chromedriver.exe path
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe", desired_capabilities = c)
#launch URL
driver.get("https://www.tutorialspoint.com/index.htm")
#quit browser
driver.quit()
|
[
{
"code": null,
"e": 1333,
"s": 1062,
"text": "We can manually set proxy settings using Selenium webdriver in Python. It is done using the DesiredCapabilities class. We would create an object of this class and apply the add_to_capabilities method to it. Then pass the proxy capabilities as a parameter to this method."
},
{
"code": null,
"e": 1353,
"s": 1333,
"text": "Code Implementation"
},
{
"code": null,
"e": 1957,
"s": 1353,
"text": "from selenium import webdriver\nfrom selenium.webdriver.common.proxy import ProxoxyType\n\n#add proxy’s ip and port\np = '<proxy ip, port>'\npxy = Proxy()\n\n#set proxy type\npxy.p_type = ProxyType.MANUAL\n\n#http proxy\npxy.http_pxy = p\n\n#ssl proxy\npxy.ssl_pxy = p\n\n#object of DesiredCapabilities\nc = webdriver.DesiredCapabilities.CHROME\n\n#set proxy browser capabilties\npxy.add_to_capabilities(c)\n\n#set chromedriver.exe path\ndriver = webdriver.Chrome(executable_path=\"C:\\\\chromedriver.exe\", desired_capabilities = c)\n\n#launch URL\ndriver.get(\"https://www.tutorialspoint.com/index.htm\")\n\n#quit browser\ndriver.quit()"
}
] |
Snake Case | Practice | GeeksforGeeks
|
Given a sentence S of length N containing only English alphabet characters, your task is to write a program that converts the given sentence to a snake case sentence. Snake case is the practice of writing compound words or phrases in which the elements are separated with one underscore character (_) and no spaces, with the first letter of each word written in lowercase. For ease, keep all the characters in lowercase.
Note: The given sentence will not start with a Whitespace.
Example 1:
Input:
N = 14
S = "Geeks ForGeeks"
Output: "geeks_forgeeks"
Explanation: All upper case characters are
converted to lower case and the whitespace
characters are replaced with underscore '_'.
Example 2:
Input:
N = 21
S = "Here comes the garden"
Output: "here_comes_the_garden"
Explanation: All upper case characters are
converted to lower case and the whitespace
characters are replaced with underscore '_'.
Your Task:
You don't need to read input or print anything. Your task is to complete the function snakeCase() which takes N and a String S as input parameters and returns the converted string.
Expected Time Complexity: O(N)
Expected Auxiliary Space: O(1)
Constraints:
1 ≤ N ≤ 10^5
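The transformation itself is a single pass over the string: lowercase each character and map spaces to underscores. An illustrative sketch in Python (not one of the judge's submission languages):

```python
def snake_case(s: str) -> str:
    # Lowercase everything and replace each space with an underscore.
    return s.lower().replace(" ", "_")

print(snake_case("Geeks ForGeeks"))         # geeks_forgeeks
print(snake_case("Here comes the garden"))  # here_comes_the_garden
```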
0
vishal8579941 week ago
// Very Easy Solution in Java
// User function Template for Java
class Solution {
static String snakeCase(String S , int n) {
// code here
StringBuilder sb=new StringBuilder();
for(int i=0;i<n;i++){
char ch=S.charAt(i);
if(ch>='A' && ch<='Z'){
char c=(char)('a'+(ch-'A'));
sb.append(c);
}else if(ch==' '){
sb.append('_');
}else{
sb.append(ch);
}
}
return sb.toString();
}
};
0
manjeetschan12341 week ago
//c++
string snakeCase(string S, int n) {
    // code here
    string a = "";
    for (int i = 0; i < S.size(); i++) {
        if (S[i] >= 'A' && S[i] <= 'Z') {
            a += S[i] + 32;
        }
        if (S[i] >= 'a' && S[i] <= 'z') {
            a += S[i];
        }
        if (S[i] == ' ') {
            a += '_';
        }
    }
    return a;
}
arpitsharma202 weeks ago
JAVA
class Solution { static String snakeCase(String S , int n) { // code here return S.toLowerCase().replaceAll(" ", "_"); }};
mrdiaz56563 weeks ago
class Solution {
    static String snakeCase(String S , int n) {
        String res = S.toLowerCase();
        StringBuilder str = new StringBuilder(res);
        for(int i = 0; i < S.length(); i++){
            if(res.charAt(i) == ' ') {
                str.setCharAt(i, '_');
            }
        }
        return str.toString();
    }
};
codewithshoaib191 month ago
S=S.toLowerCase(); S= S.replaceAll(" ","_"); return S;
ravuripraneeth25001 month ago
class Solution {
public:
string snakeCase(string s , int n) {
transform(s.begin(), s.end(), s.begin(), ::tolower);
string b;
for(int i=0; i<s.size(); i++){
if(s[i] == ' '){
b.push_back('_');
}
else{
b.push_back(s[i]);
}
}
return b;
}
};
imohdalam1 month ago
Java | O(n)
class Solution {
static String snakeCase(String S , int n) {
StringBuilder ans = new StringBuilder();
for(int i=0; i<n; i++){
char ch = S.charAt(i);
if(ch >= 65 && ch <= 90){
ch = (char)(ch + 32);
}else if( ch == ' '){
ch = '_';
}
ans.append(ch);
}
return ans.toString();
}
};
saikirannagarjuna0072 months ago
class Solution { static String snakeCase(String S , int n) { // code here S=S.toLowerCase(); String b=S.replaceAll(" ","_"); return b; }};
imabheek2 months ago
// Java O(n) 4 liner solution
char [] arr = S.toLowerCase().toCharArray(); for(int i = 0; i < n; i++) if(arr[i]==' ') arr[i] = '_'; return new String(arr);
imabheek2 months ago
//Java One liner O(1)
return S.replace(' ','_').toLowerCase();
|
[
{
"code": null,
"e": 714,
"s": 238,
"text": "Given a Sentence S of length N containing only english alphabet characters, your task is to write a program that converts the given sentence to Snake Case sentence. Snake case is the practice of writing compound words or phrases in which the elements are separated with one underscore character (_) and no spaces, and the first letter of each word written in lowercase. For ease keep all the characters in lowercase.\nNote: The given sentence will not start with a Whitespace."
},
{
"code": null,
"e": 726,
"s": 714,
"text": "Example 1: "
},
{
"code": null,
"e": 918,
"s": 726,
"text": "Input: \nN = 14\nS = \"Geeks ForGeeks\"\nOutput: \"geeks_forgeeks\"\nExplanation: All upper case characters are\nconverted to lower case and the whitespace\ncharacters are replaced with underscore '_'."
},
{
"code": null,
"e": 930,
"s": 918,
"text": "Example 2: "
},
{
"code": null,
"e": 1137,
"s": 930,
"text": "Input: \nN = 21\nS = \"Here comes the garden\"\nOutput: \"here_comes_the_garden\"\nExplanation: All upper case characters are\nconverted to lower case and the whitespace \ncharacters are replaced with underscore '_'."
},
{
"code": null,
"e": 1330,
"s": 1137,
"text": "Your Task:\nYou don't need to read input or print anything. Your task is to complete the function snakeCase() which takes N and a String S as input parameters and returns the converted string ."
},
{
"code": null,
"e": 1392,
"s": 1330,
"text": "Expected Time Complexity: O(N)\nExpected Auxiliary Space: O(1)"
},
{
"code": null,
"e": 1417,
"s": 1392,
"text": "Constraints:\n1 ≤ N ≤ 105"
},
{
"code": null,
"e": 1419,
"s": 1417,
"text": "0"
},
{
"code": null,
"e": 1442,
"s": 1419,
"text": "vishal8579941 week ago"
},
{
"code": null,
"e": 1507,
"s": 1442,
"text": " // Very Easy Solutions of Java//User function Template for Java"
},
{
"code": null,
"e": 1524,
"s": 1507,
"text": "class Solution {"
},
{
"code": null,
"e": 1571,
"s": 1524,
"text": " static String snakeCase(String S , int n) {"
},
{
"code": null,
"e": 1591,
"s": 1571,
"text": " // code here"
},
{
"code": null,
"e": 1636,
"s": 1591,
"text": " StringBuilder sb=new StringBuilder();"
},
{
"code": null,
"e": 1665,
"s": 1636,
"text": " for(int i=0;i<n;i++){"
},
{
"code": null,
"e": 1697,
"s": 1665,
"text": " char ch=S.charAt(i);"
},
{
"code": null,
"e": 1732,
"s": 1697,
"text": " if(ch>='A' && ch<='Z'){"
},
{
"code": null,
"e": 1776,
"s": 1732,
"text": " char c=(char)('a'+(ch-'A'));"
},
{
"code": null,
"e": 1805,
"s": 1776,
"text": " sb.append(c);"
},
{
"code": null,
"e": 1835,
"s": 1805,
"text": " }else if(ch==' '){"
},
{
"code": null,
"e": 1866,
"s": 1835,
"text": " sb.append('_');"
},
{
"code": null,
"e": 1884,
"s": 1866,
"text": " }else{"
},
{
"code": null,
"e": 1914,
"s": 1884,
"text": " sb.append(ch);"
},
{
"code": null,
"e": 1927,
"s": 1914,
"text": " }"
},
{
"code": null,
"e": 1936,
"s": 1927,
"text": " }"
},
{
"code": null,
"e": 1965,
"s": 1936,
"text": " return sb.toString();"
},
{
"code": null,
"e": 1970,
"s": 1965,
"text": " }"
},
{
"code": null,
"e": 1973,
"s": 1970,
"text": "};"
},
{
"code": null,
"e": 1975,
"s": 1973,
"text": "0"
},
{
"code": null,
"e": 2002,
"s": 1975,
"text": "manjeetschan12341 week ago"
},
{
"code": null,
"e": 2008,
"s": 2002,
"text": "//c++"
},
{
"code": null,
"e": 2420,
"s": 2008,
"text": " string snakeCase(string S , int n) { // code here string a=\"\"; for(int i=0;i<S.size();i++){ if(S[i]>='A'&&S[i]<='Z'){ a+=S[i]+32; } if(S[i]>='a'&&S[i]<='z'){ a+=S[i]; } if(S[i]==' '){ a+='_'; } } return a; }"
},
{
"code": null,
"e": 2422,
"s": 2420,
"text": "0"
},
{
"code": null,
"e": 2447,
"s": 2422,
"text": "arpitsharma202 weeks ago"
},
{
"code": null,
"e": 2452,
"s": 2447,
"text": "JAVA"
},
{
"code": null,
"e": 2591,
"s": 2452,
"text": "class Solution { static String snakeCase(String S , int n) { // code here return S.toLowerCase().replaceAll(\" \", \"_\"); }};"
},
{
"code": null,
"e": 2593,
"s": 2591,
"text": "0"
},
{
"code": null,
"e": 2615,
"s": 2593,
"text": "mrdiaz56563 weeks ago"
},
{
"code": null,
"e": 2929,
"s": 2615,
"text": "class Solution { static String snakeCase(String S , int n) { String res=S.toLowerCase(); StringBuilder str= new StringBuilder(res); for(int i=0;i<S.length();i++){ if(res.charAt(i)==' ') { str.setCharAt(i,'_'); } } return str.toString(); }};"
},
{
"code": null,
"e": 2931,
"s": 2929,
"text": "0"
},
{
"code": null,
"e": 2959,
"s": 2931,
"text": "codewithshoaib191 month ago"
},
{
"code": null,
"e": 3025,
"s": 2959,
"text": "S=S.toLowerCase(); S= S.replaceAll(\" \",\"_\"); return S;"
},
{
"code": null,
"e": 3027,
"s": 3025,
"text": "0"
},
{
"code": null,
"e": 3057,
"s": 3027,
"text": "ravuripraneeth25001 month ago"
},
{
"code": null,
"e": 3427,
"s": 3057,
"text": "class Solution {\n public:\n string snakeCase(string s , int n) {\n transform(s.begin(), s.end(), s.begin(), ::tolower);\n string b;\n \n for(int i=0; i<s.size(); i++){\n if(s[i] == ' '){\n b.push_back('_');\n }\n else{\n b.push_back(s[i]);\n }\n }\n \n return b;\n }\n};\n"
},
{
"code": null,
"e": 3430,
"s": 3427,
"text": "+1"
},
{
"code": null,
"e": 3451,
"s": 3430,
"text": "imohdalam1 month ago"
},
{
"code": null,
"e": 3463,
"s": 3451,
"text": "Java | O(n)"
},
{
"code": null,
"e": 3890,
"s": 3463,
"text": "class Solution {\n static String snakeCase(String S , int n) {\n \n StringBuilder ans = new StringBuilder();\n \n for(int i=0; i<n; i++){\n char ch = S.charAt(i);\n if(ch >= 65 && ch <= 90){\n ch = (char)(ch + 32);\n }else if( ch == ' '){\n ch = '_';\n }\n ans.append(ch);\n }\n return ans.toString();\n }\n};"
},
{
"code": null,
"e": 3892,
"s": 3890,
"text": "0"
},
{
"code": null,
"e": 3925,
"s": 3892,
"text": "saikirannagarjuna0072 months ago"
},
{
"code": null,
"e": 4092,
"s": 3925,
"text": "class Solution { static String snakeCase(String S , int n) { // code here S=S.toLowerCase(); String b=S.replaceAll(\" \",\"_\"); return b; }};"
},
{
"code": null,
"e": 4094,
"s": 4092,
"text": "0"
},
{
"code": null,
"e": 4115,
"s": 4094,
"text": "imabheek2 months ago"
},
{
"code": null,
"e": 4146,
"s": 4115,
"text": "// Java O(n) 4 liner solution "
},
{
"code": null,
"e": 4296,
"s": 4146,
"text": "char [] arr = S.toLowerCase().toCharArray(); for(int i = 0; i < n; i++) if(arr[i]==' ') arr[i] = '_'; return new String(arr);"
},
{
"code": null,
"e": 4298,
"s": 4296,
"text": "0"
},
{
"code": null,
"e": 4319,
"s": 4298,
"text": "imabheek2 months ago"
},
{
"code": null,
"e": 4341,
"s": 4319,
"text": "//Java One liner O(1)"
},
{
"code": null,
"e": 4382,
"s": 4341,
"text": "return S.replace(' ','_').toLowerCase();"
},
{
"code": null,
"e": 4528,
"s": 4382,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 4564,
"s": 4528,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 4574,
"s": 4564,
"text": "\nProblem\n"
},
{
"code": null,
"e": 4584,
"s": 4574,
"text": "\nContest\n"
},
{
"code": null,
"e": 4647,
"s": 4584,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 4795,
"s": 4647,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 5003,
"s": 4795,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 5109,
"s": 5003,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
A Guide to Creating and Using Your Own Matplotlib Style | by Naveen Venkatesan | Towards Data Science
|
Although the default appearance of plots made using matplotlib can appear somewhat drab, I really enjoy the customizability that allows you to tweak every single tiny element of your plot.
If you find yourself consistently changing some of the basic settings in matplotlib every time you create a new figure, it may be fruitful to generate a style file. By importing this style, you can ensure consistency, while still maintaining the ability to override settings as you wish within the individual scripts. This is great if you are, for example, generating figures for a publication and want them all to look the same without having to copy/paste settings each time.
You can find a template for a style file at the matplotlib GitHub repository. As you can see, there is an almost endless number of settings that you can customize as you wish! We will use this as a guide to create our own style file.
Before we begin, we should create an empty plot using the default matplotlib parameters as a basis for comparison:
# Import packages
import matplotlib.pyplot as plt

# Create figure
fig = plt.figure()

# Add subplot to figure
ax = fig.add_subplot(111)

# Show empty plot
plt.show()
First, we must create a file called your_style.mplstyle, which we can then edit with the text editor of your choice. I am going to build upon the scientific theme of my first article, so we will create a style called scientific.mplstyle. To use this style, we must place it in our matplotlib configuration directory. An easy way to check where this is, is to run the following code:
import matplotlib as mpl
mpl.get_configdir()
My configuration directory, which is likely similar for most, is found as a subfolder of my home directory: ~/.matplotlib/
We place our .mplstyle file in a subdirectory called stylelib. If this folder doesn’t yet exist for you, you can create it and put your scientific.mplstyle file into this folder. Now, when you want to use your style, use the following line in your Python script:
plt.style.use('scientific')
First, we can set the figure size, which is normally 6.4 x 4.8 inches. We will use a 4 x 4 inch figure as our default. Now, every time we call plt.figure(), our figure will have dimensions of 4 x 4:
# Figure Properties
figure.figsize: 4, 4
Now we can set our font and the default font size. I will use the font ‘Avenir’ and a font size of 16.
# Font Properties
font.family: Avenir
font.size: 16
Now that we have changed the global figure properties, let’s edit properties of the axes/subplot objects. First, we want to change the thickness of the axes, which we do using axes.linewidth:
# Axes properties
axes.linewidth: 2
Next, I often find that the default padding (whitespace) between the tick labels and the axis labels are too small. We can insert our own custom value of padding with the following:
axes.labelpad: 10
Finally, we will change the default colors that are used when plotting, which can be done by editing the color cycler. We do this by creating a cycler() object and passing it to the property axes.prop_cycle. I quite like the colors used in plots on FiveThirtyEight, so I will use these as our color cycler. I will be inserting the colors using their HEX codes:
axes.prop_cycle: cycler(color=['008fd5', 'fc4f30', 'e5ae38', '6d904f', '8b8b8b', '810f7c'])
Let’s inspect our progress so far! We use the following code:
# Import packages
import matplotlib.pyplot as plt

# Use our custom style
plt.style.use('scientific')

# Create figure
fig = plt.figure()

# Add subplot to figure
ax = fig.add_subplot(111)

# Show empty plot
plt.show()
Already looking much better! We notice that the tick marks do not match some of the axes changes we have made, so this will be our next step.
We notice from the default plot that the tick marks are pointed outwards, and there are no ticks on the top or right axes. For our scientific theme, we want the ticks to all point inwards, and we want ticks on all of the axes, which we can do with the following:
# Tick properties
# x-axis
xtick.top: True
xtick.direction: in
# y-axis
ytick.right: True
ytick.direction: in
Now, we want to change the two properties size and width. The value of width corresponds to the linewidth, so we will set this equal to the value we gave for axes.linewidth (2). The value of size is the length of the ticks — we will set major ticks to have a size of 7 and minor ticks to have a size of 5:
# x-axis
xtick.major.size: 7
xtick.major.width: 2
xtick.minor.size: 5
xtick.minor.width: 2
# y-axis
ytick.major.size: 7
ytick.major.width: 2
ytick.minor.size: 5
ytick.minor.width: 2
We now have the following figure — the ticks have now been fixed!
If we want to edit any of the default parameters that are used when we make a call to plt.plot(), we can do so by editing the properties for lines. This can be extended to other plotting functions — for example, you can edit properties for scatter to affect the default parameters when creating a scatter plot. In our case, I am only going to change the default width of the lines to 2, from their default value of 1.5:
# Lines properties
lines.linewidth: 2
The default legend in matplotlib is semi-transparent and has a frame with curved corners called a FancyBox. To see this in action we can run the following code, with all the default parameters:
# Import packages
import matplotlib.pyplot as plt
import numpy as np

# Create figure
fig = plt.figure()

# Add subplot to figure
ax = fig.add_subplot(111)

# Create some data
x = np.linspace(0, 4*np.pi, 200)
y1 = np.sin(x)
y2 = 1.5*np.sin(x)
y3 = 2*np.sin(x)

# Plot data
ax.plot(x, y1, label='A = 1')
ax.plot(x, y2, label='A = 1.5')
ax.plot(x, y3, label='A = 2')

# Add legend
ax.legend()

# Show plot
plt.show()
Let’s make the legend fully opaque and remove the frame altogether:
# Legend properties
legend.framealpha: 1
legend.frameon: False
Now, let’s see how the previous plot looks with our new matplotlib style! We use the following code:
# Import packages
import matplotlib.pyplot as plt
import numpy as np

# Use our custom style
plt.style.use('scientific')

# Create figure
fig = plt.figure()

# Add subplot to figure
ax = fig.add_subplot(111)

# Create some data
x = np.linspace(0, 4*np.pi, 200)
y1 = np.sin(x)
y2 = 1.5*np.sin(x)
y3 = 2*np.sin(x)

# Plot data
ax.plot(x, y1, label='A = 1')
ax.plot(x, y2, label='A = 1.5')
ax.plot(x, y3, label='A = 2')

# Set axis labels
ax.set_xlabel('x')
ax.set_ylabel('y')

# Add legend - loc is a tuple specifying the bottom left corner
ax.legend(loc=(1.02, 0.65))

# Show plot
plt.show()
And cue reaction in 3...2...1...
There are still some minor tweaks that we could make, like adjusting the scale of the x and y-ticks, but we have made our life so much easier by making our style settings do most of the work for us!
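For convenience, here is the full scientific.mplstyle file assembled from all of the settings introduced in the sections above (no new values — just the article's settings collected in one place):

```
# Figure Properties
figure.figsize: 4, 4

# Font Properties
font.family: Avenir
font.size: 16

# Axes properties
axes.linewidth: 2
axes.labelpad: 10
axes.prop_cycle: cycler(color=['008fd5', 'fc4f30', 'e5ae38', '6d904f', '8b8b8b', '810f7c'])

# Tick properties
# x-axis
xtick.top: True
xtick.direction: in
xtick.major.size: 7
xtick.major.width: 2
xtick.minor.size: 5
xtick.minor.width: 2
# y-axis
ytick.right: True
ytick.direction: in
ytick.major.size: 7
ytick.major.width: 2
ytick.minor.size: 5
ytick.minor.width: 2

# Lines properties
lines.linewidth: 2

# Legend properties
legend.framealpha: 1
legend.frameon: False
```

Drop this file into the stylelib subfolder of your matplotlib configuration directory and activate it with plt.style.use('scientific').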
There are endless possibilities for how you can adjust properties to define a style for your matplotlib plots. The scientific.mplstyle file from this article will be available at this Github repository.
Thank you for reading! I appreciate any feedback, and you can find me on Twitter and connect with me on LinkedIn for more updates and articles.
|
[
{
"code": null,
"e": 361,
"s": 172,
"text": "Although the default appearance of plots made using matplotlib can appear somewhat drab, I really enjoy the customizability that allows you to tweak every single tiny element of your plot."
},
{
"code": null,
"e": 839,
"s": 361,
"text": "If you find yourself consistently changing some of the basic settings in matplotlib every time you create a new figure, it may be fruitful to generate a style file. By importing this style, you can ensure consistency, while still maintaining the ability to override settings as you wish within the individual scripts. This is great if you are, for example, generating figures for a publication and want them all to look the same without having to copy/paste settings each time."
},
{
"code": null,
"e": 1071,
"s": 839,
"text": "You can find a template for a style file at the matplotlib Github respository. As you see, there are an almost endless number of settings that you can customize as you wish! We will use this as a guide to create our own style file."
},
{
"code": null,
"e": 1186,
"s": 1071,
"text": "Before we begin, we should create an empty plot using the default matplotlib parameters as a basis for comparison:"
},
{
"code": null,
"e": 1343,
"s": 1186,
"text": "# Import packagesimport matplotlib.pyplot as plt# Create figurefig = plt.figure()# Add subplot to figureax = fig.add_subplot(111)# Show empty plotplt.show()"
},
{
"code": null,
"e": 1725,
"s": 1343,
"text": "First, we must create a file called your_style.mplstyle which we can then edit with the text editor of your choice. I am going to build upon the scientific theme of my first article, so we will create a style called scientific.mplstyle. To run this style, we must place it in our matplotlib configuration directory. An easy way to check for where this is to run the following code:"
},
{
"code": null,
"e": 1769,
"s": 1725,
"text": "import matplotlib as mplmpl.get_configdir()"
},
{
"code": null,
"e": 1891,
"s": 1769,
"text": "My configuration directory, which is likely similar for most is found as a subfolder of my home directory: ~/.matplotlib/"
},
{
"code": null,
"e": 2150,
"s": 1891,
"text": "We place our .mplstyle file in a subdirectory called stylelib. If this folder doesn’t yet exist for you, can create it and put your scientific.mplstyle file into this folder. Now, when you want to use your style, use the following line in your Python script:"
},
{
"code": null,
"e": 2178,
"s": 2150,
"text": "plt.style.use('scientific')"
},
{
"code": null,
"e": 2377,
"s": 2178,
"text": "First, we can set the figure size, which is normally 6.4 x 4.8 inches. We will use a 4 x 4 inch figure as our default. Now, every time we call plt.figure(), our figure will have dimensions of 4 x 4:"
},
{
"code": null,
"e": 2417,
"s": 2377,
"text": "# Figure Propertiesfigure.figsize: 4, 4"
},
{
"code": null,
"e": 2520,
"s": 2417,
"text": "Now we can set our font and the default font size. I will use the font ‘Avenir’ and a font size of 16."
},
{
"code": null,
"e": 2570,
"s": 2520,
"text": "# Font Propertiesfont.family: Avenirfont.size: 16"
},
{
"code": null,
"e": 2762,
"s": 2570,
"text": "Now that we have changed the global figure properties, let’s edit properties of the axes/subplot objects. First, we want to change the thickness of the axes, which we do using axes.linewidth:"
},
{
"code": null,
"e": 2797,
"s": 2762,
"text": "# Axes propertiesaxes.linewidth: 2"
},
{
"code": null,
"e": 2979,
"s": 2797,
"text": "Next, I often find that the default padding (whitespace) between the tick labels and the axis labels are too small. We can insert our own custom value of padding with the following:"
},
{
"code": null,
"e": 2997,
"s": 2979,
"text": "axes.labelpad: 10"
},
{
"code": null,
"e": 3352,
"s": 2997,
"text": "Finally, we will change the default colors that are used when plotting, which can done by editing the color cycler. We do this by creating a cycler() object and passing it to the property axes.prop_cycle. I quite like the colors used in plots on FiveThirtyEight, so will use this as our color cycler. I will be inserting the colors using their HEX codes:"
},
{
"code": null,
"e": 3596,
"s": 3352,
"text": "axes.prop_cycle: cycler(color=['008fd5', 'fc4f30', 'e5ae38', '6d904f', '8b8b8b', '810f7c'])"
},
{
"code": null,
"e": 3658,
"s": 3596,
"text": "Let’s inspect our progress so far! We use the following code:"
},
{
"code": null,
"e": 3864,
"s": 3658,
"text": "# Import packagesimport matplotlib.pyplot as plt# Use our custom styleplt.style.use('scientific')# Create figurefig = plt.figure()# Add subplot to figureax = fig.add_subplot(111)# Show empty plotplt.show()"
},
{
"code": null,
"e": 4006,
"s": 3864,
"text": "Already looking much better! We notice that the tick marks do not match some of the axes changes we have made, so this will be our next step."
},
{
"code": null,
"e": 4269,
"s": 4006,
"text": "We notice from the default plot that the tick marks are pointed outwards, and there are no ticks on the top or right axes. For our scientific theme, we want the ticks to all point inwards, and we want ticks on all of the axes, which we can do with the following:"
},
{
"code": null,
"e": 4373,
"s": 4269,
"text": "# Tick properties# x-axisxtick.top: Truextick.direction: in# y-axisytick.right: Trueytick.direction: in"
},
{
"code": null,
"e": 4679,
"s": 4373,
"text": "Now, we want to change the two properties size and width. The value of width corresponds to the linewidth, so we will set this equal to the value we gave for axes.linewidth (2). The value of size is the length of the ticks — we will set major ticks to have a size of 7 and minor ticks to have a size of 5:"
},
{
"code": null,
"e": 4852,
"s": 4679,
"text": "# x-axisxtick.major.size: 7xtick.major.width: 2xtick.minor.size: 5xtick.minor.width: 2# y-axisytick.major.size: 7ytick.major.width: 2ytick.minor.size: 5ytick.minor.width: 2"
},
{
"code": null,
"e": 4918,
"s": 4852,
"text": "We now have the following figure — the ticks have now been fixed!"
},
{
"code": null,
"e": 5341,
"s": 4918,
"text": "If we want to edit any of the default parameters that are used when we make a call to plt.plot() we can do so by editing the properties for lines. This can be extended to other plotting functions — for example, you can edit properties for scatter to affect the default parameters when creating a scatter plot. In our case, I am only going to the change the default width of the lines to 2, from their default value of 1.5:"
},
{
"code": null,
"e": 5378,
"s": 5341,
"text": "# Lines propertieslines.linewidth: 2"
},
{
"code": null,
"e": 5572,
"s": 5378,
"text": "The default legend in matplotlib is semi-transparent and has a frame with curved corners called a FancyBox. To see this in action we can run the following code, with all the default parameters:"
},
{
"code": null,
"e": 5962,
"s": 5572,
"text": "# Import packagesimport matplotlib.pyplot as pltimport numpy as np# Create figurefig = plt.figure()# Add subplot to figureax = fig.add_subplot(111)# Create some datax = np.linspace(0, 4*np.pi, 200)y1 = np.sin(x)y2 = 1.5*np.sin(x)y3 = 2*np.sin(x)# Plot dataax.plot(x, y1, label='A = 1')ax.plot(x, y2, label='A = 1.5')ax.plot(x, y3, label='A = 2')# Add legendax.legend()# Show plotplt.show()"
},
{
"code": null,
"e": 6030,
"s": 5962,
"text": "Let’s make the legend fully opaque and remove the frame altogether:"
},
{
"code": null,
"e": 6091,
"s": 6030,
"text": "# Legend propertieslegend.framealpha: 1legend.frameon: False"
},
{
"code": null,
"e": 6192,
"s": 6091,
"text": "Now, let’s see how the previous plot looks with our new matplotlib style! We use the following code:"
},
{
"code": null,
"e": 6751,
"s": 6192,
"text": "# Import packagesimport matplotlib.pyplot as pltimport numpy as np# Use our custom styleplt.style.use('scientific')# Create figurefig = plt.figure()# Add subplot to figureax = fig.add_subplot(111)# Create some datax = np.linspace(0, 4*np.pi, 200)y1 = np.sin(x)y2 = 1.5*np.sin(x)y3 = 2*np.sin(x)# Plot dataax.plot(x, y1, label='A = 1')ax.plot(x, y2, label='A = 1.5')ax.plot(x, y3, label='A = 2')# Set axis labelsax.set_xlabel('x')ax.set_ylabel('y')# Add legend - loc is a tuple specifying the bottom left cornerax.legend(loc=(1.02, 0.65))# Save plotplt.show()"
},
{
"code": null,
"e": 6784,
"s": 6751,
"text": "And cue reaction in 3...2...1..."
},
{
"code": null,
"e": 6980,
"s": 6784,
"text": "There are still some minor tweaks that we could make, like adjust the scale of the x and y-ticks, but we have made our life so much easier by making our style settings do most of the work for us!"
},
{
"code": null,
"e": 7183,
"s": 6980,
"text": "There are endless possibilities for how you can adjust properties to define a style for your matplotlib plots. The scientific.mplstyle file from this article will be available at this Github repository."
}
] |
How to set global event_scheduler=ON even if MySQL is restarted?
|
There is a simple way to keep the global event_scheduler=ON even if MySQL is restarted. You need to set the global system variable to ON, and then you can verify that it is still ON after MySQL restarts.
For this, I am checking the system variable @@event_scheduler using a SELECT statement. The query is as follows:
mysql> select @@event_scheduler;
The following is the output:
+-------------------+
| @@event_scheduler |
+-------------------+
| ON |
+-------------------+
1 row in set (0.00 sec)
Now, restart MySQL. The query is as follows:
mysql> restart;
Query OK, 0 rows affected (0.00 sec)
After restarting the server the connection is lost for some time. If you use any query you will get the following error message:
mysql> select @@event_scheduler;
ERROR 2013 (HY000): Lost connection to MySQL server during query
After some time, if you use the system variable @@event_scheduler again in a SELECT statement, you will get the same output, i.e. ON. The query is as follows:
mysql> select @@event_scheduler;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 8
Current database: *** NONE ***
+-------------------+
| @@event_scheduler |
+-------------------+
| ON |
+-------------------+
1 row in set (0.04 sec)
Or you can set the event_scheduler ON in my.cnf file or my.ini file. The statement is as follows:
[mysqld]
event_scheduler = ON
Now your event_scheduler is ON. Whether your server restarts or not, it will always be ON.
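As an aside not covered in the tutorial above, the setting can also be managed entirely from the client. Note that a plain SET GLOBAL takes effect immediately but does not, by itself, survive a restart; on MySQL 8.0 and later, SET PERSIST additionally writes the value to mysqld-auto.cnf, which has the same restart-surviving effect as editing my.cnf:

```sql
-- takes effect immediately (requires SUPER or SYSTEM_VARIABLES_ADMIN)
SET GLOBAL event_scheduler = ON;

-- MySQL 8.0+: also persists the value to mysqld-auto.cnf
SET PERSIST event_scheduler = ON;
```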
|
[
{
"code": null,
"e": 1263,
"s": 1062,
"text": "There is a single way by which you can set a global event_scheduler=ON even if MySQL is restarted. You need to set global system variable ON and need to use this system variable even if MySQL restart."
},
{
"code": null,
"e": 1367,
"s": 1263,
"text": "For this, I am using system variable @@event_scheduler using select statement. The query is as follows:"
},
{
"code": null,
"e": 1400,
"s": 1367,
"text": "mysql> select @@event_scheduler;"
},
{
"code": null,
"e": 1429,
"s": 1400,
"text": "The following is the output:"
},
{
"code": null,
"e": 1563,
"s": 1429,
"text": "+-------------------+\n| @@event_scheduler |\n+-------------------+\n| ON |\n+-------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 1608,
"s": 1563,
"text": "Now, restart MySQL. The query is as follows:"
},
{
"code": null,
"e": 1661,
"s": 1608,
"text": "mysql> restart;\nQuery OK, 0 rows affected (0.00 sec)"
},
{
"code": null,
"e": 1790,
"s": 1661,
"text": "After restarting the server the connection is lost for some time. If you use any query you will get the following error message:"
},
{
"code": null,
"e": 1888,
"s": 1790,
"text": "mysql> select @@event_scheduler;\nERROR 2013 (HY000): Lost connection to MySQL server during query"
},
{
"code": null,
"e": 2060,
"s": 1888,
"text": "After some time if you will use the system variable @@event_scheduler again using select statement, then the output you will get the same i.e. ON. The query is as follows:"
},
{
"code": null,
"e": 2360,
"s": 2060,
"text": "mysql> select @@event_scheduler;\nERROR 2006 (HY000): MySQL server has gone away\nNo connection. Trying to reconnect...\nConnection id: 8\nCurrent database: *** NONE ***\n+-------------------+\n| @@event_scheduler |\n+-------------------+\n| ON |\n+-------------------+\n1 row in set (0.04 sec)"
},
{
"code": null,
"e": 2458,
"s": 2360,
"text": "Or you can set the event_scheduler ON in my.cnf file or my.ini file. The statement is as follows:"
},
{
"code": null,
"e": 2489,
"s": 2458,
"text": "[mysqld]\nevent_scheduler = ON;"
},
{
"code": null,
"e": 2580,
"s": 2489,
"text": "Now your event_scheduler is ON. Whether your server restarts or not, it will always be ON."
}
] |
HTML Scripts
|
A script is a small piece of program that can add interactivity to your website. For example, a script could generate a pop-up alert box message, or provide a dropdown menu. This script could be Javascript or VBScript.
You can write your event handlers using any of these scripting languages and then trigger those functions using HTML attributes.
There are two ways of using a script in an HTML document:
If you have to use a single script functionality among many HTML pages then it is a good idea to keep that function in a single script file and then include this file in all the HTML pages. You can include a script file into an HTML document using the <script> element. Below is an example:
<head>
<script src="yourfile.js" type="text/javascript" />
</head>
You can write your script code directly into your HTML document. Usually we keep script code in the header of the document using the <script> tag; otherwise, there is no restriction and you can put your source code anywhere in the document. You can specify whether to make a script run automatically (as soon as the page loads), or after the user has done something (like click on a link). Below is an example; this would write a Hello Javascript! message as soon as the page loads:
<head>
<title>Internal Script</title>
</head>
<body>
<script type="text/javascript">
document.write("Hello Javascript!")
</script>
</body>
This will produce the following result:
Hello Javascript!
It is very easy to write an event handler. The following example explains how. Let's write a simple function myAlert in the header of the document. We will call this function when a user brings the mouse over a paragraph written in the example.
<head>
<title>Event Handler Example</title>
<script type="text/javascript">
function myAlert()
{
alert("I am an event handler....");
return;
}
</script>
</head>
<body>
<span onmouseover="myAlert();">
Bring your mouse here to see an alert
</span>
</body>
Now this will produce the following result. Bring your mouse over this line and see the result:
Although most (if not all) browsers these days support scripts, some older browsers don't. If a browser doesn't support JavaScript, instead of running your script, it would display the code to the user. To prevent this from happening, you can simply place HTML comments around the script. Older browsers will ignore the script, while newer browsers will run it.
JavaScript Example:
<script type="text/javascript">
<!--
document.write("Hello Javascript!");
//-->
</script>
VBScript Example:
<script type="text/vbscript">
<!--
document.write("Hello VBScript!")
'-->
</script>
You can also provide alternative info for users whose browsers don't support scripts and for users who have disabled scripts. You do this using the <noscript> tag.
JavaScript Example:
<script type="text/javascript">
<!--
document.write("Hello Javascript!");
//-->
</script>
<noscript>Your browser does not support Javascript!</noscript>
VBScript Example:
<script type="text/vbscript">
<!--
document.write("Hello VBScript!")
'-->
</script>
<noscript>Your browser does not support VBScript!</noscript>
You can specify a default scripting language for all your script tags to use. This saves you from having to specify the language every time you use a script tag within the page. Below is an example:
<meta http-equiv="Content-Script-Type" content="text/JavaScript" />
Note that you can still override the default by specifying a language within the script tag.
|
[
{
"code": null,
"e": 2593,
"s": 2374,
"text": "A script is a small piece of program that can add interactivity to your website. For example, a script could generate a pop-up alert box message, or provide a dropdown menu. This script could be Javascript or VBScript."
},
{
"code": null,
"e": 2727,
"s": 2593,
"text": "You can write your Event Handlers using any of the scripting language and then you can trigger those functions using HTML attributes."
},
{
"code": null,
"e": 2790,
"s": 2727,
"text": "There are two ways of using a style sheet in an HTML document:"
},
{
"code": null,
"e": 3080,
"s": 2790,
"text": "If you have to use a single script functionality among many HTML pages then it is a good idea to keep that function in a single script file and then include this file in all the HTML pages. You can incluse a style sheet file into HTML document using <script> element. Below is an example:"
},
{
"code": null,
"e": 3148,
"s": 3080,
"text": "<head>\n<script src=\"yourfile.js\" type=\"text/javascript\" />\n</head>\n"
},
{
"code": null,
"e": 3622,
"s": 3148,
"text": "You can write your script code directly into your HTML document. Usually we keep script code in header of the document using <script> tag, otherwise there is no restriction and you can put your source code anywhere in the document. You can specify whether to make a script run automatically (as soon as the page loads), or after the user has done something (like click on a link). Below is an example this would write a Hello Javascript! message as soon as the page loads.:"
},
{
"code": null,
"e": 3765,
"s": 3622,
"text": "<head>\n<title>Internal Script</title>\n</head>\n<body>\n<script type=\"text/javascript\">\n document.write(\"Hello Javascript!\")\n</script>\n</body>\n"
},
{
"code": null,
"e": 3801,
"s": 3765,
"text": "This will produce following result:"
},
{
"code": null,
"e": 3820,
"s": 3801,
"text": "Hello Javascript!\n"
},
{
"code": null,
"e": 3869,
"s": 3820,
"text": "To become more comfortable - Do Online Practice"
},
{
"code": null,
"e": 4140,
"s": 3869,
"text": "It is very easy to write an event handler. Following example explains how to write an event handler. Let's write one simple function myAlert in the header of the document. We will call this function when any user will bring mouse over a paragraph written in the example."
},
{
"code": null,
"e": 4407,
"s": 4140,
"text": "<head>\n<title>Event Handler Example t</title>\n<script type=\"text/javascript\">\nfunction myAlert()\n{\n alert(\"I am an event handler....\");\n\treturn;\n}\n</script>\n</head>\n<body>\n\n<span onmouseover=\"myAlert();\">\n Bring your mouse here to see an alert\n</span>\n\n</body>\n"
},
{
"code": null,
"e": 4499,
"s": 4407,
"text": "Now this will produce following result. Bring your mouse over this line and see the result:"
},
{
"code": null,
"e": 4548,
"s": 4499,
"text": "To become more comfortable - Do Online Practice"
},
{
"code": null,
"e": 4909,
"s": 4548,
"text": "Athough most (if not all) browsers these days support scripts, some older browsers don't. If a browser doesn't support JavaScript, instead of running your script, it would display the code to the user. To prevent this from happening, you can simply place HTML comments around the script. Older browsers will ignore the script, while newer browsers will run it."
},
{
"code": null,
"e": 5133,
"s": 4909,
"text": "JavaScript Example:\n\t<script type=\"text/javascript\">\n\t<!--\n\tdocument.write(\"Hello Javascript!\");\n\t//-->\n\t</script>\n\nVBScript Example:\n\t<script type=\"text/vbscript\">\n\t<!--\n\tdocument.write(\"Hello VBScript!\")\n\t'-->\n\t</script>\n"
},
{
"code": null,
"e": 5297,
"s": 5133,
"text": "You can also provide alternative info for users whose browsers don't support scripts and for users who have disabled scripts. You do this using the <noscript> tag."
},
{
"code": null,
"e": 5652,
"s": 5297,
"text": "JavaScript Example:\n\t<script type=\"text/javascript\">\n\t<!--\n\tdocument.write(\"Hello Javascript!\");\n\t//-->\n\t</script>\n <noscript>Your browser does not support Javascript!</noscript>\nVBScript Example:\n\t<script type=\"text/vbscript\">\n\t<!--\n\tdocument.write(\"Hello VBScript!\")\n\t'-->\n\t</script>\n <noscript>Your browser does not support VBScript!</noscript>\n"
},
{
"code": null,
"e": 5852,
"s": 5652,
"text": "You can specify a default scripting language for all your script tags to use. This saves you from having to specify the language everytime you use a script tag within the page. Below is the example: "
},
{
"code": null,
"e": 5921,
"s": 5852,
"text": "<meta http-equiv=\"Content-Script-Type\" content=\"text/JavaScript\" />\n"
},
{
"code": null,
"e": 6014,
"s": 5921,
"text": "Note that you can still override the default by specifying a language within the script tag."
},
{
"code": null,
"e": 6031,
"s": 6014,
"text": "\nAdvertisements\n"
},
{
"code": null,
"e": 6064,
"s": 6031,
"text": "\n 19 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 6078,
"s": 6064,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 6113,
"s": 6078,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6127,
"s": 6113,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 6162,
"s": 6127,
"text": "\n 18 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6179,
"s": 6162,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 6214,
"s": 6179,
"text": "\n 57 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 6245,
"s": 6214,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 6278,
"s": 6245,
"text": "\n 54 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 6309,
"s": 6278,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 6344,
"s": 6309,
"text": "\n 45 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 6375,
"s": 6344,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 6382,
"s": 6375,
"text": " Print"
},
{
"code": null,
"e": 6393,
"s": 6382,
"text": " Add Notes"
}
] |
list-unstyled class in Bootstrap
|
For an unstyled list in Bootstrap, use the list-unstyled class.
You can try to run the following code to implement the list-unstyled class −
<!DOCTYPE html>
<html>
<head>
<title>Bootstrap lists</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js"></script>
</head>
<body>
<h1>Lists</h1>
<h2>Definition List</h2>
<h4>Unstyled List</h4>
<ul class = "list-unstyled">
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
<li>Item 4</li>
</ul>
<h2>Vegetables (UnOrdered List)</h2>
<ul>
<li>Tomato</li>
<li>Brinjal</li>
<li>Broccoli</li>
</ul>
</body>
</html>
|
[
{
"code": null,
"e": 1123,
"s": 1062,
"text": "For unstyled list in Bootstrap, use the list-unstyled class."
},
{
"code": null,
"e": 1200,
"s": 1123,
"text": "You can try to run the following code to implement the list-unstyled class −"
},
{
"code": null,
"e": 1211,
"s": 1200,
"text": " Live Demo"
},
{
"code": null,
"e": 2074,
"s": 1211,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap lists</title>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <link rel=\"stylesheet\" href=\"https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css\">\n <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n <script src=\"https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <h1>Lists</h1>\n <h2>Definition List</h2>\n <h4>Unstyled List</h4>\n <ul class = \"list-unstyled\">\n <li>Item 1</li>\n <li>Item 2</li>\n <li>Item 3</li>\n <li>Item 4</li>\n </ul>\n <h2>Vegetables (UnOrdered List)</h2>\n <ul>\n <li>Tomato</li>\n <li>Brinjal</li>\n <li>Broccoli</li>\n </ul>\n </body>\n</html>"
}
] |
Tcl - Ternary Operator
|
Try the following example to understand the ternary operator available in the Tcl language −
#!/usr/bin/tclsh
set a 10;
set b [expr $a == 1 ? 20: 30]
puts "Value of b is $b\n"
set b [expr $a == 10 ? 20: 30]
puts "Value of b is $b\n"
When you run the above program, it produces the following result −
Value of b is 30
Value of b is 20
|
[
{
"code": null,
"e": 2286,
"s": 2201,
"text": "Try the following example to understand ternary operator available in Tcl language −"
},
{
"code": null,
"e": 2428,
"s": 2286,
"text": "#!/usr/bin/tclsh\n\nset a 10;\nset b [expr $a == 1 ? 20: 30]\nputs \"Value of b is $b\\n\"\nset b [expr $a == 10 ? 20: 30]\nputs \"Value of b is $b\\n\" "
},
{
"code": null,
"e": 2510,
"s": 2428,
"text": "When you compile and execute the above program it produces the following result −"
},
{
"code": null,
"e": 2546,
"s": 2510,
"text": "Value of b is 30\n\nValue of b is 20\n"
},
{
"code": null,
"e": 2553,
"s": 2546,
"text": " Print"
},
{
"code": null,
"e": 2564,
"s": 2553,
"text": " Add Notes"
}
] |
Statistical - COVARIANCE.P Function
|
The COVARIANCE.P function returns population covariance, the average of the products of deviations for each data point pair in two data sets. Use covariance to determine the relationship between two data sets.
COVARIANCE.P (array1, array2)
Covariance is given by −
$$Cov\left ( X,Y \right )=\frac{\sum \left ( x-\bar{x} \right )\left ( y-\bar{y} \right )}{n}$$
Where n is the sample size and $\bar{x}$ and $\bar{y}$ are the sample means AVERAGE (array1) and AVERAGE (array2).
The arguments must either be numbers or be names, arrays, or references that contain numbers.
If an array or reference argument contains text, logical values, or empty cells, those values are ignored. However, cells with the value zero are included.
If array1 and array2 have different numbers of data points, COVARIANCE.P returns the #N/A error value.
If either array1 or array2 is empty, COVARIANCE.P returns the #DIV/0! error value.
Excel 2010, Excel 2013, Excel 2016
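Outside Excel, the same population covariance can be checked by translating the formula directly. The numbers below are made-up sample data, and numpy's cov with bias=True divides by n in the same way as COVARIANCE.P:

```python
import numpy as np

# Made-up data standing in for array1 and array2.
x = np.array([3.0, 2.0, 4.0, 5.0, 6.0])
y = np.array([9.0, 7.0, 12.0, 15.0, 17.0])

# Direct translation of the formula: sum((x - mean_x) * (y - mean_y)) / n
cov_manual = np.sum((x - x.mean()) * (y - y.mean())) / len(x)

# numpy computes the same population covariance when bias=True (divide by n).
cov_numpy = np.cov(x, y, bias=True)[0, 1]

print(cov_manual)   # 5.2
```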
|
[
{
"code": null,
"e": 2064,
"s": 1854,
"text": "The COVARIANCE.P function returns population covariance, the average of the products of deviations for each data point pair in two data sets. Use covariance to determine the relationship between two data sets."
},
{
"code": null,
"e": 2095,
"s": 2064,
"text": "COVARIANCE.P (array1, array2)\n"
},
{
"code": null,
"e": 2332,
"s": 2095,
"text": "Covariance is given by −\n$$Cov\\left ( X,Y \\right )=\\frac{\\sum \\left ( x-\\bar{x} \\right )\\left ( y-\\bar{y} \\right )}{n}$$\nWhere n is the sample size and $\\bar{x}$ and $\\bar{y}$ are the sample means AVERAGE (array1) and AVERAGE (array2).\n"
},
{
"code": null,
"e": 2357,
"s": 2332,
"text": "Covariance is given by −"
},
{
"code": null,
"e": 2453,
"s": 2357,
"text": "$$Cov\\left ( X,Y \\right )=\\frac{\\sum \\left ( x-\\bar{x} \\right )\\left ( y-\\bar{y} \\right )}{n}$$"
},
{
"code": null,
"e": 2568,
"s": 2453,
"text": "Where n is the sample size and $\\bar{x}$ and $\\bar{y}$ are the sample means AVERAGE (array1) and AVERAGE (array2)."
},
{
"code": null,
"e": 2662,
"s": 2568,
"text": "The arguments must either be numbers or be names, arrays, or references that contain numbers."
},
{
"code": null,
"e": 2756,
"s": 2662,
"text": "The arguments must either be numbers or be names, arrays, or references that contain numbers."
},
{
"code": null,
"e": 2912,
"s": 2756,
"text": "If an array or reference argument contains text, logical values, or empty cells, those values are ignored. However, cells with the value zero are included."
},
{
"code": null,
"e": 3068,
"s": 2912,
"text": "If an array or reference argument contains text, logical values, or empty cells, those values are ignored. However, cells with the value zero are included."
},
{
"code": null,
"e": 3171,
"s": 3068,
"text": "If array1 and array2 have different numbers of data points, COVARIANCE.P returns the #N/A error value."
},
{
"code": null,
"e": 3274,
"s": 3171,
"text": "If array1 and array2 have different numbers of data points, COVARIANCE.P returns the #N/A error value."
},
{
"code": null,
"e": 3357,
"s": 3274,
"text": "If either array1 or array2 is empty, COVARIANCE.P returns the #DIV/0! error value."
},
{
"code": null,
"e": 3440,
"s": 3357,
"text": "If either array1 or array2 is empty, COVARIANCE.P returns the #DIV/0! error value."
},
{
"code": null,
"e": 3475,
"s": 3440,
"text": "Excel 2010, Excel 2013, Excel 2016"
},
{
"code": null,
"e": 3511,
"s": 3475,
"text": "\n 296 Lectures \n 146 hours \n"
},
{
"code": null,
"e": 3525,
"s": 3511,
"text": " Arun Motoori"
},
{
"code": null,
"e": 3560,
"s": 3525,
"text": "\n 56 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3575,
"s": 3560,
"text": " Pavan Lalwani"
},
{
"code": null,
"e": 3611,
"s": 3575,
"text": "\n 120 Lectures \n 6.5 hours \n"
},
{
"code": null,
"e": 3620,
"s": 3611,
"text": " Inf Sid"
},
{
"code": null,
"e": 3656,
"s": 3620,
"text": "\n 134 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3671,
"s": 3656,
"text": " Yoda Learning"
},
{
"code": null,
"e": 3706,
"s": 3671,
"text": "\n 46 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 3721,
"s": 3706,
"text": " William Fiset"
},
{
"code": null,
"e": 3756,
"s": 3721,
"text": "\n 25 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 3770,
"s": 3756,
"text": " Sasha Miller"
},
{
"code": null,
"e": 3777,
"s": 3770,
"text": " Print"
},
{
"code": null,
"e": 3788,
"s": 3777,
"text": " Add Notes"
}
] |
Machine Learning with Python - Preparing Data
|
Machine Learning algorithms are completely dependent on data because it is the most crucial aspect that makes model training possible. On the other hand, if we cannot make sense of that data before feeding it to ML algorithms, a machine will be useless. In simple words, we always need to feed the right data, i.e. data in the correct scale and format and containing meaningful features, for the problem we want the machine to solve.
This makes data preparation the most important step in the ML process. Data preparation may be defined as the procedure that makes our dataset more appropriate for the ML process.
After selecting the raw data for ML training, the most important task is data pre-processing. In a broad sense, data preprocessing converts the selected data into a form we can work with or can feed to ML algorithms. We always need to preprocess our data so that it meets the expectations of the machine learning algorithm.
We have the following data preprocessing techniques that can be applied on a dataset to produce data for ML algorithms −
Most probably, our dataset comprises attributes with varying scales, but we cannot provide such data to an ML algorithm; hence it requires rescaling. Data rescaling makes sure that attributes are at the same scale. Generally, attributes are rescaled into the range of 0 and 1. ML algorithms like gradient descent and k-Nearest Neighbors require scaled data. We can rescale the data with the help of the MinMaxScaler class of the scikit-learn Python library.
In this example we will rescale the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded (as done in the previous chapters) and then with the help of MinMaxScaler class, it will be rescaled in the range of 0 and 1.
The first few lines of the following script are the same as those we have written in previous chapters while loading CSV data.
from pandas import read_csv
from numpy import set_printoptions
from sklearn import preprocessing
path = r'C:\pima-indians-diabetes.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values
Now, we can use MinMaxScaler class to rescale the data in the range of 0 and 1.
data_scaler = preprocessing.MinMaxScaler(feature_range=(0,1))
data_rescaled = data_scaler.fit_transform(array)
We can also summarize the data for output as per our choice. Here, we are setting the precision to 1 and showing the first 10 rows in the output.
set_printoptions(precision=1)
print ("\nScaled data:\n", data_rescaled[0:10])
Scaled data:
[
[0.4 0.7 0.6 0.4 0. 0.5 0.2 0.5 1. ]
[0.1 0.4 0.5 0.3 0. 0.4 0.1 0.2 0. ]
[0.5 0.9 0.5 0. 0. 0.3 0.3 0.2 1. ]
[0.1 0.4 0.5 0.2 0.1 0.4 0. 0. 0. ]
[0. 0.7 0.3 0.4 0.2 0.6 0.9 0.2 1. ]
[0.3 0.6 0.6 0. 0. 0.4 0.1 0.2 0. ]
[0.2 0.4 0.4 0.3 0.1 0.5 0.1 0.1 1. ]
[0.6 0.6 0. 0. 0. 0.5 0. 0.1 0. ]
[0.1 1. 0.6 0.5 0.6 0.5 0. 0.5 1. ]
[0.5 0.6 0.8 0. 0. 0. 0.1 0.6 1. ]
]
From the above output, all the data got rescaled into the range of 0 and 1.
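The Pima CSV file may not be at hand, but the same rescaling can be verified on a small made-up array (the values here are hypothetical):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical two-feature data with very different scales.
data = np.array([[1.0, 200.0],
                 [2.0, 400.0],
                 [3.0, 600.0]])

# Each column is mapped independently so its minimum becomes 0
# and its maximum becomes 1.
rescaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(data)
print(rescaled)
```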
Another useful data preprocessing technique is Normalization. This is used to rescale each row of data to have a length of 1. It is mainly useful in sparse datasets where we have lots of zeros. We can rescale the data with the help of the Normalizer class of the scikit-learn Python library.
In machine learning, there are two types of normalization preprocessing techniques as follows −
It may be defined as the normalization technique that modifies the dataset values in a way that in each row the sum of the absolute values is always 1. It is also called Least Absolute Deviations.
Example
In this example, we use the L1 normalization technique to normalize the data of the Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then, with the help of the Normalizer class, it will be normalized.
The first few lines of the following script are the same as those we have written in previous chapters while loading CSV data.
from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import Normalizer
path = r'C:\pima-indians-diabetes.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv (path, names=names)
array = dataframe.values
Now, we can use Normalizer class with L1 to normalize the data.
Data_normalizer = Normalizer(norm='l1').fit(array)
Data_normalized = Data_normalizer.transform(array)
We can also summarize the data for output as per our choice. Here, we are setting the precision to 2 and showing the first 3 rows in the output.
set_printoptions(precision=2)
print ("\nNormalized data:\n", Data_normalized [0:3])
Output
Normalized data:
[
[0.02 0.43 0.21 0.1 0. 0.1 0. 0.14 0. ]
[0. 0.36 0.28 0.12 0. 0.11 0. 0.13 0. ]
[0.03 0.59 0.21 0. 0. 0.07 0. 0.1 0. ]
]
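The defining property of L1 normalization, that the absolute values in each row sum to 1, can be checked on a small made-up array without the CSV file:

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Hypothetical rows, including a negative value.
data = np.array([[1.0, -1.0, 2.0],
                 [4.0, 0.0, 4.0]])

l1_rows = Normalizer(norm='l1').fit_transform(data)

# After L1 normalization, the absolute values in each row sum to 1.
row_sums = np.abs(l1_rows).sum(axis=1)
print(row_sums)
```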
It may be defined as the normalization technique that modifies the dataset values in a way that in each row the sum of the squares is always 1. It is also called Least Squares.
Example
In this example, we use L2 Normalization technique to normalize the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded (as done in previous chapters) and then with the help of Normalizer class it will be normalized.
The first few lines of the following script are the same as those we have written in previous chapters while loading CSV data.
from pandas import read_csv
from numpy import set_printoptions
from sklearn.preprocessing import Normalizer
path = r'C:\pima-indians-diabetes.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv (path, names=names)
array = dataframe.values
Now, we can use the Normalizer class with L2 to normalize the data.
Data_normalizer = Normalizer(norm='l2').fit(array)
Data_normalized = Data_normalizer.transform(array)
We can also summarize the data for output as per our choice. Here, we are setting the precision to 2 and showing the first 3 rows in the output.
set_printoptions(precision=2)
print ("\nNormalized data:\n", Data_normalized [0:3])
Output
Normalized data:
[
[0.03 0.83 0.4 0.2 0. 0.19 0. 0.28 0.01]
[0.01 0.72 0.56 0.24 0. 0.22 0. 0.26 0. ]
[0.04 0.92 0.32 0. 0. 0.12 0. 0.16 0.01]
]
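Likewise, the unit-length property of L2 normalization can be checked on a small made-up array: every row becomes a unit vector under the Euclidean norm.

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Hypothetical rows; both lie along the same direction [3, 4].
data = np.array([[3.0, 4.0],
                 [6.0, 8.0]])

l2_rows = Normalizer(norm='l2').fit_transform(data)

# Each row now has unit Euclidean length: the sum of squares per row is 1.
print(l2_rows)
```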
As the name suggests, this is the technique with the help of which we can make our data binary. We can use a binary threshold for making our data binary. The values above that threshold value will be converted to 1 and below that threshold will be converted to 0. For example, if we choose threshold value = 0.5, then the dataset value above it will become 1 and below this will become 0. That is why we can call it binarizing the data or thresholding the data. This technique is useful when we have probabilities in our dataset and want to convert them into crisp values.
We can binarize the data with the help of Binarizer class of scikit-learn Python library.
In this example, we will rescale the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then with the help of Binarizer class it will be converted into binary values i.e. 0 and 1 depending upon the threshold value. We are taking 0.5 as threshold value.
The first few lines of the following script are the same as those we have written in previous chapters while loading CSV data.
from pandas import read_csv
from sklearn.preprocessing import Binarizer
path = r'C:\pima-indians-diabetes.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values
Now, we can use the Binarizer class to convert the data into binary values.
binarizer = Binarizer(threshold=0.5).fit(array)
Data_binarized = binarizer.transform(array)
Here, we are showing the first 5 rows in the output.
print ("\nBinary data:\n", Data_binarized [0:5])
Binary data:
[
[1. 1. 1. 1. 0. 1. 1. 1. 1.]
[1. 1. 1. 1. 0. 1. 0. 1. 0.]
[1. 1. 1. 0. 0. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 0. 1. 0.]
[0. 1. 1. 1. 1. 1. 1. 1. 1.]
]
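The thresholding rule can be seen directly on a small made-up row of probabilities; note that Binarizer maps values strictly greater than the threshold to 1, so the threshold value itself becomes 0:

```python
import numpy as np
from sklearn.preprocessing import Binarizer

# Hypothetical probabilities to convert into crisp 0/1 values.
probs = np.array([[0.1, 0.7, 0.5, 0.9]])

# Values strictly above 0.5 become 1, the rest (including 0.5) become 0.
binary = Binarizer(threshold=0.5).fit_transform(probs)
print(binary)
```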
Standardization is another useful data preprocessing technique, used to transform data attributes to a Gaussian distribution. It shifts the mean and SD (Standard Deviation) to a standard Gaussian distribution with a mean of 0 and an SD of 1. This technique is useful in ML algorithms like linear regression and logistic regression that assume a Gaussian distribution in the input dataset and produce better results with rescaled data. We can standardize the data (mean = 0 and SD = 1) with the help of the StandardScaler class of the scikit-learn Python library.
In this example, we will rescale the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then with the help of StandardScaler class it will be converted into Gaussian Distribution with mean = 0 and SD = 1.
The first few lines of the following script are the same as those we have written in previous chapters while loading CSV data.
from sklearn.preprocessing import StandardScaler
from pandas import read_csv
from numpy import set_printoptions
path = r'C:\pima-indians-diabetes.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(path, names=names)
array = dataframe.values
Now, we can use StandardScaler class to rescale the data.
data_scaler = StandardScaler().fit(array)
data_rescaled = data_scaler.transform(array)
We can also summarize the data for output as per our choice. Here, we are setting the precision to 2 and showing the first 5 rows in the output.
set_printoptions(precision=2)
print ("\nRescaled data:\n", data_rescaled [0:5])
Rescaled data:
[
[ 0.64 0.85 0.15 0.91 -0.69 0.2 0.47 1.43 1.37]
[-0.84 -1.12 -0.16 0.53 -0.69 -0.68 -0.37 -0.19 -0.73]
[ 1.23 1.94 -0.26 -1.29 -0.69 -1.1 0.6 -0.11 1.37]
[-0.84 -1. -0.16 0.15 0.12 -0.49 -0.92 -1.04 -0.73]
[-1.14 0.5 -1.5 0.91 0.77 1.41 5.48 -0.02 1.37]
]
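The mean-0 / SD-1 guarantee can be verified on a small made-up column without the CSV file (StandardScaler uses the population standard deviation):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical single-column data.
data = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

scaled = StandardScaler().fit_transform(data)

# The column now has mean 0 and (population) standard deviation 1.
print(scaled.mean(), scaled.std())
```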
We discussed the importance of good data for ML algorithms as well as some techniques to pre-process the data before sending it to ML algorithms. One more aspect in this regard is data labeling. It is also very important to send data with proper labeling to ML algorithms. For example, in the case of classification problems, many labels in the form of words, numbers etc. are present on the data.
Most of the sklearn functions expect data with number labels rather than word labels. Hence, we need to convert such labels into number labels. This process is called label encoding. We can perform label encoding of data with the help of the LabelEncoder() function of the scikit-learn Python library.
In the following example, Python script will perform the label encoding.
First, import the required Python libraries as follows −
import numpy as np
from sklearn import preprocessing
Now, we need to provide the input labels as follows −
input_labels = ['red','black','red','green','black','yellow','white']
The next line of code will create the label encoder and train it.
encoder = preprocessing.LabelEncoder()
encoder.fit(input_labels)
The next lines of script will check the performance by encoding the random ordered list −
test_labels = ['green','red','black']
encoded_values = encoder.transform(test_labels)
print("\nLabels =", test_labels)
print("Encoded values =", list(encoded_values))
Now, decode a random set of encoded values −
encoded_values = [3, 0, 4, 1]
decoded_list = encoder.inverse_transform(encoded_values)
We can get the list of encoded values with the help of following python script −
print("\nEncoded values =", encoded_values)
print("\nDecoded labels =", list(decoded_list))
Labels = ['green', 'red', 'black']
Encoded values = [1, 2, 0]
Encoded values = [3, 0, 4, 1]
Decoded labels = ['white', 'black', 'yellow', 'green']
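The label-encoding snippets above can be put together into one self-contained script. The mapping is deterministic because LabelEncoder sorts the classes alphabetically:

```python
from sklearn import preprocessing

input_labels = ['red', 'black', 'red', 'green', 'black', 'yellow', 'white']

encoder = preprocessing.LabelEncoder()
encoder.fit(input_labels)
# Sorted classes: black=0, green=1, red=2, white=3, yellow=4

encoded = [int(v) for v in encoder.transform(['green', 'red', 'black'])]
decoded = [str(v) for v in encoder.inverse_transform([3, 0, 4, 1])]

print(encoded)   # [1, 2, 0]
print(decoded)   # ['white', 'black', 'yellow', 'green']
```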
|
[
{
"code": null,
"e": 2741,
"s": 2304,
"text": "Machine Learning algorithms are completely dependent on data because it is the most crucial aspect that makes model training possible. On the other hand, if we won’t be able to make sense out of that data, before feeding it to ML algorithms, a machine will be useless. In simple words, we always need to feed right data i.e. the data in correct scale, format and containing meaningful features, for the problem we want machine to solve."
},
{
"code": null,
"e": 2913,
"s": 2741,
"text": "This makes data preparation the most important step in ML process. Data preparation may be defined as the procedure that makes our dataset more appropriate for ML process."
},
{
"code": null,
"e": 3242,
"s": 2913,
"text": "After selecting the raw data for ML training, the most important task is data pre-processing. In broad sense, data preprocessing will convert the selected data into a form we can work with or can feed to ML algorithms. We always need to preprocess our data so that it can be as per the expectation of machine learning algorithm."
},
{
"code": null,
"e": 3362,
"s": 3242,
"text": "We have the following data preprocessing techniques that can be applied on data set to produce data for ML algorithms −"
},
{
"code": null,
"e": 3810,
"s": 3362,
"text": "Most probably our dataset comprises of the attributes with varying scale, but we cannot provide such data to ML algorithm hence it requires rescaling. Data rescaling makes sure that attributes are at same scale. Generally, attributes are rescaled into the range of 0 and 1. ML algorithms like gradient descent and k-Nearest Neighbors requires scaled data. We can rescale the data with the help of MinMaxScaler class of scikit-learn Python library."
},
{
"code": null,
"e": 4068,
"s": 3810,
"text": "In this example we will rescale the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded (as done in the previous chapters) and then with the help of MinMaxScaler class, it will be rescaled in the range of 0 and 1."
},
{
"code": null,
"e": 4185,
"s": 4068,
"text": "The first few lines of the following script are same as we have written in previous chapters while loading CSV data."
},
{
"code": null,
"e": 4467,
"s": 4185,
"text": "from pandas import read_csv\nfrom numpy import set_printoptions\nfrom sklearn import preprocessing\npath = r'C:\\pima-indians-diabetes.csv'\nnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']\ndataframe = read_csv(path, names=names)\narray = dataframe.values"
},
{
"code": null,
"e": 4547,
"s": 4467,
"text": "Now, we can use MinMaxScaler class to rescale the data in the range of 0 and 1."
},
{
"code": null,
"e": 4659,
"s": 4547,
"text": "data_scaler = preprocessing.MinMaxScaler(feature_range=(0,1))\ndata_rescaled = data_scaler.fit_transform(array)\n"
},
{
"code": null,
"e": 4805,
"s": 4659,
"text": "We can also summarize the data for output as per our choice. Here, we are setting the precision to 1 and showing the first 10 rows in the output."
},
{
"code": null,
"e": 4884,
"s": 4805,
"text": "set_printoptions(precision=1)\nprint (\"\\nScaled data:\\n\", data_rescaled[0:10])\n"
},
{
"code": null,
"e": 5312,
"s": 4884,
"text": "Scaled data:\n[\n [0.4 0.7 0.6 0.4 0. 0.5 0.2 0.5 1. ]\n [0.1 0.4 0.5 0.3 0. 0.4 0.1 0.2 0. ]\n [0.5 0.9 0.5 0. 0. 0.3 0.3 0.2 1. ]\n [0.1 0.4 0.5 0.2 0.1 0.4 0. 0. 0. ]\n [0. 0.7 0.3 0.4 0.2 0.6 0.9 0.2 1. ]\n [0.3 0.6 0.6 0. 0. 0.4 0.1 0.2 0. ]\n [0.2 0.4 0.4 0.3 0.1 0.5 0.1 0.1 1. ]\n [0.6 0.6 0. 0. 0. 0.5 0. 0.1 0. ]\n [0.1 1. 0.6 0.5 0.6 0.5 0. 0.5 1. ]\n [0.5 0.6 0.8 0. 0. 0. 0.1 0.6 1. ]\n]\n"
},
{
"code": null,
"e": 5388,
"s": 5312,
"text": "From the above output, all the data got rescaled into the range of 0 and 1."
},
{
"code": null,
"e": 5671,
"s": 5388,
"text": "Another useful data preprocessing technique is Normalization. This is used to rescale each row of data to have a length of 1. It is mainly useful in Sparse dataset where we have lots of zeros. We can rescale the data with the help of Normalizer class of scikit-learn Python library."
},
{
"code": null,
"e": 5767,
"s": 5671,
"text": "In machine learning, there are two types of normalization preprocessing techniques as follows −"
},
{
"code": null,
"e": 5975,
"s": 5767,
"text": "It may be defined as the normalization technique that modifies the dataset values in a way that in each row the sum of the absolute values will always be up to 1. It is also called Least Absolute Deviations."
},
{
"code": null,
"e": 5983,
"s": 5975,
"text": "Example"
},
{
"code": null,
"e": 6209,
"s": 5983,
"text": "In this example, we use L1 Normalize technique to normalize the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then with the help of Normalizer class it will be normalized."
},
{
"code": null,
"e": 6322,
"s": 6209,
"text": "The first few lines of following script are same as we have written in previous chapters while loading CSV data."
},
{
"code": null,
"e": 6616,
"s": 6322,
"text": "from pandas import read_csv\nfrom numpy import set_printoptions\nfrom sklearn.preprocessing import Normalizer\npath = r'C:\\pima-indians-diabetes.csv'\nnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']\ndataframe = read_csv (path, names=names)\narray = dataframe.values"
},
{
"code": null,
"e": 6680,
"s": 6616,
"text": "Now, we can use Normalizer class with L1 to normalize the data."
},
{
"code": null,
"e": 6783,
"s": 6680,
"text": "Data_normalizer = Normalizer(norm='l1').fit(array)\nData_normalized = Data_normalizer.transform(array)\n"
},
{
"code": null,
"e": 6928,
"s": 6783,
"text": "We can also summarize the data for output as per our choice. Here, we are setting the precision to 2 and showing the first 3 rows in the output."
},
{
"code": null,
"e": 7013,
"s": 6928,
"text": "set_printoptions(precision=2)\nprint (\"\\nNormalized data:\\n\", Data_normalized [0:3])\n"
},
{
"code": null,
"e": 7020,
"s": 7013,
"text": "Output"
},
{
"code": null,
"e": 7177,
"s": 7020,
"text": "Normalized data:\n[\n [0.02 0.43 0.21 0.1 0. 0.1 0. 0.14 0. ]\n [0. 0.36 0.28 0.12 0. 0.11 0. 0.13 0. ]\n [0.03 0.59 0.21 0. 0. 0.07 0. 0.1 0. ]\n]\n"
},
{
"code": null,
"e": 7365,
"s": 7177,
"text": "It may be defined as the normalization technique that modifies the dataset values in a way that in each row the sum of the squares will always be up to 1. It is also called least squares."
},
{
"code": null,
"e": 7373,
"s": 7365,
"text": "Example"
},
{
"code": null,
"e": 7634,
"s": 7373,
"text": "In this example, we use L2 Normalization technique to normalize the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded (as done in previous chapters) and then with the help of Normalizer class it will be normalized."
},
{
"code": null,
"e": 7747,
"s": 7634,
"text": "The first few lines of following script are same as we have written in previous chapters while loading CSV data."
},
{
"code": null,
"e": 8041,
"s": 7747,
"text": "from pandas import read_csv\nfrom numpy import set_printoptions\nfrom sklearn.preprocessing import Normalizer\npath = r'C:\\pima-indians-diabetes.csv'\nnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']\ndataframe = read_csv (path, names=names)\narray = dataframe.values"
},
{
"code": null,
"e": 8105,
"s": 8041,
"text": "Now, we can use Normalizer class with L1 to normalize the data."
},
{
"code": null,
"e": 8208,
"s": 8105,
"text": "Data_normalizer = Normalizer(norm='l2').fit(array)\nData_normalized = Data_normalizer.transform(array)\n"
},
{
"code": null,
"e": 8353,
"s": 8208,
"text": "We can also summarize the data for output as per our choice. Here, we are setting the precision to 2 and showing the first 3 rows in the output."
},
{
"code": null,
"e": 8438,
"s": 8353,
"text": "set_printoptions(precision=2)\nprint (\"\\nNormalized data:\\n\", Data_normalized [0:3])\n"
},
{
"code": null,
"e": 8445,
"s": 8438,
"text": "Output"
},
{
"code": null,
"e": 8605,
"s": 8445,
"text": "Normalized data:\n[\n [0.03 0.83 0.4 0.2 0. 0.19 0. 0.28 0.01]\n [0.01 0.72 0.56 0.24 0. 0.22 0. 0.26 0. ]\n [0.04 0.92 0.32 0. 0. 0.12 0. 0.16 0.01]\n]\n"
},
{
"code": null,
"e": 9178,
"s": 8605,
"text": "As the name suggests, this is the technique with the help of which we can make our data binary. We can use a binary threshold for making our data binary. The values above that threshold value will be converted to 1 and below that threshold will be converted to 0. For example, if we choose threshold value = 0.5, then the dataset value above it will become 1 and below this will become 0. That is why we can call it binarizing the data or thresholding the data. This technique is useful when we have probabilities in our dataset and want to convert them into crisp values."
},
{
"code": null,
"e": 9268,
"s": 9178,
"text": "We can binarize the data with the help of Binarizer class of scikit-learn Python library."
},
{
"code": null,
"e": 9570,
"s": 9268,
"text": "In this example, we will rescale the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then with the help of Binarizer class it will be converted into binary values i.e. 0 and 1 depending upon the threshold value. We are taking 0.5 as threshold value."
},
{
"code": null,
"e": 9683,
"s": 9570,
"text": "The first few lines of following script are same as we have written in previous chapters while loading CSV data."
},
{
"code": null,
"e": 9940,
"s": 9683,
"text": "from pandas import read_csv\nfrom sklearn.preprocessing import Binarizer\npath = r'C:\\pima-indians-diabetes.csv'\nnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']\ndataframe = read_csv(path, names=names)\narray = dataframe.values"
},
{
"code": null,
"e": 10011,
"s": 9940,
"text": "Now, we can use Binarize class to convert the data into binary values."
},
{
"code": null,
"e": 10104,
"s": 10011,
"text": "binarizer = Binarizer(threshold=0.5).fit(array)\nData_binarized = binarizer.transform(array)\n"
},
{
"code": null,
"e": 10157,
"s": 10104,
"text": "Here, we are showing the first 5 rows in the output."
},
{
"code": null,
"e": 10207,
"s": 10157,
"text": "print (\"\\nBinary data:\\n\", Data_binarized [0:5])\n"
},
{
"code": null,
"e": 10385,
"s": 10207,
"text": "Binary data:\n[\n [1. 1. 1. 1. 0. 1. 1. 1. 1.]\n [1. 1. 1. 1. 0. 1. 0. 1. 0.]\n [1. 1. 1. 0. 0. 1. 1. 1. 1.]\n [1. 1. 1. 1. 1. 1. 0. 1. 0.]\n [0. 1. 1. 1. 1. 1. 1. 1. 1.]\n]\n"
},
{
"code": null,
"e": 10938,
"s": 10385,
"text": "Another useful data preprocessing technique which is basically used to transform the data attributes with a Gaussian distribution. It differs the mean and SD (Standard Deviation) to a standard Gaussian distribution with a mean of 0 and a SD of 1. This technique is useful in ML algorithms like linear regression, logistic regression that assumes a Gaussian distribution in input dataset and produce better results with rescaled data. We can standardize the data (mean = 0 and SD =1) with the help of StandardScaler class of scikit-learn Python library."
},
{
"code": null,
"e": 11192,
"s": 10938,
"text": "In this example, we will rescale the data of Pima Indians Diabetes dataset which we used earlier. First, the CSV data will be loaded and then with the help of StandardScaler class it will be converted into Gaussian Distribution with mean = 0 and SD = 1."
},
{
"code": null,
"e": 11305,
"s": 11192,
"text": "The first few lines of following script are same as we have written in previous chapters while loading CSV data."
},
{
"code": null,
"e": 11602,
"s": 11305,
"text": "from sklearn.preprocessing import StandardScaler\nfrom pandas import read_csv\nfrom numpy import set_printoptions\npath = r'C:\\pima-indians-diabetes.csv'\nnames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']\ndataframe = read_csv(path, names=names)\narray = dataframe.values"
},
{
"code": null,
"e": 11660,
"s": 11602,
"text": "Now, we can use StandardScaler class to rescale the data."
},
{
"code": null,
"e": 11748,
"s": 11660,
"text": "data_scaler = StandardScaler().fit(array)\ndata_rescaled = data_scaler.transform(array)\n"
},
{
"code": null,
"e": 11893,
"s": 11748,
"text": "We can also summarize the data for output as per our choice. Here, we are setting the precision to 2 and showing the first 5 rows in the output."
},
{
"code": null,
"e": 11974,
"s": 11893,
"text": "set_printoptions(precision=2)\nprint (\"\\nRescaled data:\\n\", data_rescaled [0:5])\n"
},
{
"code": null,
"e": 12289,
"s": 11974,
"text": "Rescaled data:\n[\n [ 0.64 0.85 0.15 0.91 -0.69 0.2 0.47 1.43 1.37]\n [-0.84 -1.12 -0.16 0.53 -0.69 -0.68 -0.37 -0.19 -0.73]\n [ 1.23 1.94 -0.26 -1.29 -0.69 -1.1 0.6 -0.11 1.37]\n [-0.84 -1. -0.16 0.15 0.12 -0.49 -0.92 -1.04 -0.73]\n [-1.14 0.5 -1.5 0.91 0.77 1.41 5.48 -0.02 1.37]\n]\n"
},
{
"code": null,
"e": 12689,
"s": 12289,
"text": "We discussed the importance of good fata for ML algorithms as well as some techniques to pre-process the data before sending it to ML algorithms. One more aspect in this regard is data labeling. It is also very important to send the data to ML algorithms having proper labeling. For example, in case of classification problems, lot of labels in the form of words, numbers etc. are there on the data."
},
{
"code": null,
"e": 12992,
"s": 12689,
"text": "Most of the sklearn functions expect that the data with number labels rather than word labels. Hence, we need to convert such labels into number labels. This process is called label encoding. We can perform label encoding of data with the help of LabelEncoder() function of scikit-learn Python library."
},
{
"code": null,
"e": 13065,
"s": 12992,
"text": "In the following example, Python script will perform the label encoding."
},
{
"code": null,
"e": 13122,
"s": 13065,
"text": "First, import the required Python libraries as follows −"
},
{
"code": null,
"e": 13176,
"s": 13122,
"text": "import numpy as np\nfrom sklearn import preprocessing\n"
},
{
"code": null,
"e": 13230,
"s": 13176,
"text": "Now, we need to provide the input labels as follows −"
},
{
"code": null,
"e": 13301,
"s": 13230,
"text": "input_labels = ['red','black','red','green','black','yellow','white']\n"
},
{
"code": null,
"e": 13367,
"s": 13301,
"text": "The next line of code will create the label encoder and train it."
},
{
"code": null,
"e": 13433,
"s": 13367,
"text": "encoder = preprocessing.LabelEncoder()\nencoder.fit(input_labels)\n"
},
{
"code": null,
"e": 13523,
"s": 13433,
"text": "The next lines of script will check the performance by encoding the random ordered list −"
},
{
"code": null,
"e": 13774,
"s": 13523,
"text": "test_labels = ['green','red','black']\nencoded_values = encoder.transform(test_labels)\nprint(\"\\nLabels =\", test_labels)\nprint(\"Encoded values =\", list(encoded_values))\nencoded_values = [3,0,4,1]\ndecoded_list = encoder.inverse_transform(encoded_values)"
},
{
"code": null,
"e": 13855,
"s": 13774,
"text": "We can get the list of encoded values with the help of following python script −"
},
{
"code": null,
"e": 13948,
"s": 13855,
"text": "print(\"\\nEncoded values =\", encoded_values)\nprint(\"\\nDecoded labels =\", list(decoded_list))\n"
},
{
"code": null,
"e": 14096,
"s": 13948,
"text": "Labels = ['green', 'red', 'black']\nEncoded values = [1, 2, 0]\nEncoded values = [3, 0, 4, 1]\nDecoded labels = ['white', 'black', 'yellow', 'green']\n"
}
] |
How to use a saved model in Tensorflow 2.x | by Rahul Bhadani | Towards Data Science
|
In my previous article, I wrote about model validation, regularization, and callbacks using TensorFlow 2.x. In the machine-learning pipeline, creating a trained model is not enough. What do we do with the trained model once we have finished training, validating, and testing it on a portion of data set aside for that purpose? In practice, we want to import such a trained model so that it can be used in practical applications. For example, say I trained a model on camera images to recognize pedestrians. Ultimately, I want to use the trained model to make real-time predictions, detecting pedestrians with a camera mounted on a self-driving car. Additionally, training a model also involves saving it as checkpoints, especially when you are training on a really large dataset or training time is on the order of hours. Saving the model is also useful in case your training gets interrupted, for reasons such as a flaw in your programming logic, a dead laptop battery, or an I/O error.
There are a couple of decisions to make when saving a model. Do we want to save the model weights and training parameters every iteration (epoch), periodically, or only once training has finished? We can use built-in callbacks, just as we saw in my previous article, to automatically save the model weights during the training process. Alternatively, we can save the model weights and other necessary information once training has finished.
There are two main formats for saved models: the native TensorFlow checkpoint format, and the HDF5 format, since we are using TensorFlow through the Keras API.
An example of saving the model during the training procedure:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential([
    Dense(128, activation='sigmoid', input_shape=(10, )),
    Dense(1)
])
model.compile(optimizer='sgd', loss=BinaryCrossentropy(from_logits=True))
checkpoint = ModelCheckpoint('saved_modelname', save_weights_only=True)
model.fit(X_train, y_train, epochs=10, callbacks=[checkpoint])
You can see that I used the ModelCheckpoint class to create an object checkpoint that takes an argument which will be used as the filename to save the model. Since save_weights_only=True is used, only the weights are saved; the network architecture is not. Finally, we pass callbacks=[checkpoint] to the fit function.
If instead of 'saved_modelname' we supply 'saved_modelname.h5', then the model will be saved in HDF5 format.
To load the weights of a previously saved model, we call the load_weights function.
model = Sequential([
    Dense(128, activation='sigmoid', input_shape=(10, )),
    Dense(1)
])
model.load_weights('saved_modelname')
We can also save the model weights manually by calling save_weights at the end of training:
model = Sequential([
    Dense(128, activation='sigmoid', input_shape=(10, )),
    Dense(1)
])
model.compile(optimizer='sgd', loss=BinaryCrossentropy(from_logits=True))
model.fit(X_train, y_train, epochs=10)
model.save_weights("saved_modelname")
You can also analyze the directory where the model is saved:
total 184K
-rw-r--r-- 1 ivory ivory   61 Jan 12 01:08 saved_modelname
-rw-r--r-- 1 ivory ivory 174K Jan 12 01:08 saved_modelname.data-00000-of-00001
-rw-r--r-- 1 ivory ivory 2.0K Jan 12 01:08 saved_modelname.index
Here, you can see that the actual model saved is saved_modelname.data-00000-of-00001 and the rest of the file contains metadata.
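As a side note, the name passed to save_weights acts as a file prefix rather than a single file name. A small standard-library sketch (using empty dummy files, not a real TensorFlow checkpoint) shows how the files in a listing like the one above group under that prefix:

```python
import pathlib
import tempfile

# Recreate a directory shaped like the listing above
# (empty placeholder files, not a real TensorFlow checkpoint).
tmp = pathlib.Path(tempfile.mkdtemp())
for name in ("saved_modelname",
             "saved_modelname.data-00000-of-00001",
             "saved_modelname.index"):
    (tmp / name).touch()

# All files written for save_weights('saved_modelname') share the prefix:
# the .data-* shard holds the weight values, and .index maps variable
# names to locations inside the shards.
parts = sorted(p.name for p in tmp.glob("saved_modelname*"))
print(parts)
```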
Until now, we saw that we were only saving the model weights. However, saving the entire model is very easy. Just pass save_weights_only=False while instantiating ModelCheckpoint class.
checkpoint_dir = 'saved_model'
checkpoint = ModelCheckpoint(filepath=checkpoint_dir,
                             save_freq="epoch",  # save a checkpoint every epoch
                             save_weights_only=False,
                             verbose=True)
model.fit(X_train, y_train, callbacks=[checkpoint])
In this case, a new directory is created with the following content:
total 128
drwxr-xr-x 2 ivory ivory   4096 Jan 12 01:14 assets
-rw-r--r-- 1 ivory ivory 122124 Jan 12 01:14 saved_model.pb
drwxr-xr-x 2 ivory ivory   4096 Jan 12 01:14 variables
In this case, the model graph is saved in the file saved_model.pb, the weights are stored under the variables directory, and the other files are metadata.
Finally, we can use the saved model as follows:
from tensorflow.keras.models import load_model
model = load_model(checkpoint_dir)
If we want to save the model once the training procedure is finished, we can call the save function as follows:
model.save("mysavedmodel")
If you use model.save("mysavedmodel.h5"), then the model will be saved as a single file, mysavedmodel.h5.
The saved model can be used to make predictions using a brand new data set.
model.predict(X_test)
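The key property of a weight round-trip is that a reloaded model reproduces the original model's predictions exactly. The sketch below illustrates that idea without requiring TensorFlow: it uses a toy hand-rolled linear "model" and the standard library's json module in place of save_weights/load_weights. This is an analogy for the round-trip, not the Keras API, and all values are made up:

```python
import json
import tempfile

def predict(weights, x):
    """Toy linear model: y = w0*x0 + w1*x1 + bias."""
    *w, bias = weights
    return sum(wi * xi for wi, xi in zip(w, x)) + bias

weights = [0.5, -1.25, 2.0]   # two weights + a bias (illustrative values)
x = [4.0, 2.0]

# "save_weights": persist the parameters to disk.
path = tempfile.mktemp(suffix=".json")
with open(path, "w") as f:
    json.dump(weights, f)

# "load_weights": rebuild the parameters from the file.
with open(path) as f:
    restored = json.load(f)

# A round-trip must reproduce the original predictions exactly.
assert restored == weights
print(predict(restored, x))  # 1.5
```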
A more descriptive example is given in my GitHub repo at https://github.com/rahulbhadani/medium.com/blob/master/01_12_2021/Saving_Model_TF2.ipynb.
The article is motivated by the author’s learning from the TensorFlow2 Coursera course https://www.coursera.org/learn/getting-started-with-tensor-flow2/. Readers may find similarities in the presented example with examples from the Coursera course. Permission to use the code example (in its original form or with some modification) from the Coursera course was obtained from the instructor.
|
[
{
"code": null,
"e": 1213,
"s": 172,
"text": "In my previous article, I wrote about model validation, regularization, and callbacks using TensorFlow 2.x. In the machine-learning pipeline, creating a trained model is not enough. What are we going to do with the trained model once we have finished the training, validated, and tested it with a portion of data set aside? In practice, we would like to import such a trained model so that it can be of use in some practical applications. For example, let's say I trained a model on camera images to recognize pedestrians. Ultimately, I want to use the trained model to make a real-time prediction of detecting pedestrians with a camera mounted on a self-driving car. Additionally, training a model also needs saving the model as checkpoints, especially when you are training a model on a really large dataset or training time is in the order of hours. Model saving is also useful in case your training gets interrupted for some reasons such as a flaw in your programming logic, the battery of your laptop died, there was an I/O error, etc."
},
{
"code": null,
"e": 1224,
"s": 1213,
"text": "medium.com"
},
{
"code": null,
"e": 1369,
"s": 1224,
"text": "Ultimately, I want to use the trained model to make a real-time prediction of detecting pedestrians with a camera mounted on a self-driving car."
},
{
"code": null,
"e": 1823,
"s": 1369,
"text": "There are a couple of things we can do while saving a model. Do we want to save the model weights and training parameters in every iteration (epochs), every once in a while, or once training has finished? We can use built-in callbacks just like we saw in my previous article to automatically save the model weights during the training process. Alternatively, we can also save the model weights and other necessary information once training has finished."
},
{
"code": null,
"e": 1972,
"s": 1823,
"text": "There are two main formats for saved models: One in native TensorFlow, and the other in HDF5 format since we are using TensorFlow through Keras API."
},
{
"code": null,
"e": 2034,
"s": 1972,
"text": "An example of saving the model during the training procedure:"
},
{
"code": null,
"e": 2533,
"s": 2034,
"text": "from tensorflow.keras.models import Sequentialfrom tensorflow.keras.layers import Densefrom tensorflow.keras.losses import BinaryCrossentropyfrom tensorflow.keras.callbacks import ModelCheckpointmodel = Sequential( [Dense(128, activation='sigmoid', input_shape = (10, )),Dense(1)])model.compile(optimizer = 'sgd', loss = BinaryCrossentropy(from_logits = True))checkpoint = ModelCheckpoint('saved_modelname', save_weights_only=True)model.fit(X_train, y_train, epochs = 10, callbacks = [checkpoint])"
},
{
"code": null,
"e": 2858,
"s": 2533,
"text": "You can see that, I used ModelCheckpoint class to create an object checkpoint that takes an argument which will be used as a filename to save the model. Since save_weights_only=True is used, only weights will be saved, and network architecture will not be saved. Finally, we pass callback = [checkpoint] to the fit function."
},
{
"code": null,
"e": 2970,
"s": 2858,
"text": "If instead of ‘saved_modelname’ if we supply ‘saved_modelname.h5 , then the model will be saved in HDF5 format."
},
{
"code": null,
"e": 3042,
"s": 2970,
"text": "To load the weights of previously saved, we call load_weights function."
},
{
"code": null,
"e": 3166,
"s": 3042,
"text": "model = Sequential( [Dense(128, activation='sigmoid', input_shape = (10, )),Dense(1)])model.load_weights('saved_modelname')"
},
{
"code": null,
"e": 3254,
"s": 3166,
"text": "We can also save the models manually by calling save_weights at the end of the training"
},
{
"code": null,
"e": 3498,
"s": 3254,
"text": "model = Sequential( [Dense(128, activation='sigmoid', input_shape = (10, )),Dense(1)])model.compile(optimizer = 'sgd', loss = BinaryCrossentropy(from_logits = True))model.fit(X_train, y_train, epochs = 10)model.save_weights(\"saved_modelname\")"
},
{
"code": null,
"e": 3559,
"s": 3498,
"text": "You can also analyze the directory where the model is saved:"
},
{
"code": null,
"e": 3770,
"s": 3559,
"text": "total 184K-rw-r--r-- 1 ivory ivory 61 Jan 12 01:08 saved_modelname-rw-r--r-- 1 ivory ivory 174K Jan 12 01:08 saved_modelname.data-00000-of-00001-rw-r--r-- 1 ivory ivory 2.0K Jan 12 01:08 saved_modelname.index"
},
{
"code": null,
"e": 3899,
"s": 3770,
"text": "Here, you can see that the actual model saved is saved_modelname.data-00000-of-00001 and the rest of the file contains metadata."
},
{
"code": null,
"e": 4085,
"s": 3899,
"text": "Until now, we saw that we were only saving the model weights. However, saving the entire model is very easy. Just pass save_weights_only=False while instantiating ModelCheckpoint class."
},
{
"code": null,
"e": 4366,
"s": 4085,
"text": "checkpoint_dir = 'saved_model'checkpoint = ModelCheckpoint(filepath=checkpoint_dir, frequency = \"epoch\", save_weights_only = False, verbose= True)model.fit(X_train, y_train, callbacks=[checkpoint])"
},
{
"code": null,
"e": 4435,
"s": 4366,
"text": "In this case, a new directory is created with the following content:"
},
{
"code": null,
"e": 4609,
"s": 4435,
"text": "total 128drwxr-xr-x 2 ivory ivory 4096 Jan 12 01:14 assets-rw-r--r-- 1 ivory ivory 122124 Jan 12 01:14 saved_model.pbdrwxr-xr-x 2 ivory ivory 4096 Jan 12 01:14 variables"
},
{
"code": null,
"e": 4704,
"s": 4609,
"text": "In this case, the main model is saved in the file saved_model.pb and other files are metadata."
},
{
"code": null,
"e": 4752,
"s": 4704,
"text": "Finally, we can use the saved model as follows:"
},
{
"code": null,
"e": 4833,
"s": 4752,
"text": "from tensorflow.keras.models import load_modelmodel = load_model(checkpoint_dir)"
},
{
"code": null,
"e": 4941,
"s": 4833,
"text": "If we want to save the model once the training procedure is finished, we can call save function as follows:"
},
{
"code": null,
"e": 4968,
"s": 4941,
"text": "model.save(\"mysavedmodel\")"
},
{
"code": null,
"e": 5074,
"s": 4968,
"text": "If you use model.save(“mysavedmodel.h5”), then the model will be saved as a single file mysavedmodel.h5 ."
},
{
"code": null,
"e": 5150,
"s": 5074,
"text": "The saved model can be used to make predictions using a brand new data set."
},
{
"code": null,
"e": 5172,
"s": 5150,
"text": "model.predict(X_test)"
},
{
"code": null,
"e": 5319,
"s": 5172,
"text": "A more descriptive example is given in my GitHub repo at https://github.com/rahulbhadani/medium.com/blob/master/01_12_2021/Saving_Model_TF2.ipynb."
}
] |
C# - Copying the Contents From One File to Another File - GeeksforGeeks
|
16 Nov, 2021
Given a file, our task is to copy its data to another file using C#. To do this we use the Copy() method of the File class from the System.IO namespace. This method copies the contents of an existing file to a new file. It has two overloaded forms:
1. Copy(String, String): This function is used to copy content from one file to a new file. It does not support overwriting of a file with the same name.
Syntax:
File.Copy(file1, file2);
Where file1 is the first file and file2 is the second file.
Exceptions: This method will throw the following exceptions:
UnauthorizedAccessException: This exception will occur when the caller does not have the required permission.
ArgumentException: This exception will occur when file1 or file2 specifies a directory.
ArgumentNullException: This exception will occur when file1 or file2 is null.
PathTooLongException: This exception will occur when the specified path, file name, or both exceed the system-defined maximum length.
DirectoryNotFoundException: This exception will occur when the path specified in file1 or file2 is invalid.
FileNotFoundException: This exception will occur when file1 was not found.
IOException: This exception will occur when file2 exists.
NotSupportedException: This exception will occur when file1 or file2 is in an invalid format.
2. Copy(String, String, Boolean): This function is used to copy content from one file to a new file. Unlike the first overload, it can overwrite an existing file with the same name when the boolean argument is true.
Syntax:
File.Copy(file1, file2, owrite);
Where file1 is the first file, file2 is the second file, and owrite is a boolean: true if the destination file may be overwritten, otherwise false.
Exceptions: This method will throw the following exceptions:
UnauthorizedAccessException: This exception will occur when the caller does not have the required permission, when file2 is read-only, or when owrite is true and file2 is hidden but file1 is not.
ArgumentException: This exception will occur when file1 or file2 specifies a directory.
ArgumentNullException: This exception will occur when file1 or file2 is null.
PathTooLongException: This exception will occur when the specified path, file name, or both exceed the system-defined maximum length.
DirectoryNotFoundException: This exception will occur when the path specified in file1 or file2 is invalid.
FileNotFoundException: This exception will occur when file1 was not found.
IOException: This exception will occur when file2 exists and owrite is false.
NotSupportedException: This exception will occur when file1 or file2 is in an invalid format.
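Because Copy can throw several of the exceptions above at runtime, callers typically wrap it in a try/catch. The following is a minimal sketch (the file names are placeholders, and only two of the possible exception types are handled for brevity):

// C# sketch: copying with overwrite enabled and basic error handling
using System;
using System.IO;

class CopyDemo
{
    static void Main()
    {
        try
        {
            // Third argument true: overwrite file2.txt if it already exists.
            File.Copy("file1.txt", "file2.txt", true);
        }
        catch (FileNotFoundException)
        {
            Console.WriteLine("file1.txt was not found.");
        }
        catch (IOException e)
        {
            Console.WriteLine("Copy failed: " + e.Message);
        }
    }
}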
Example:
Let us consider two files named file1 and file2. Now the file1.txt contains the following text:
Now the file2.txt contains the following text:
Approach:
Place two files in your csharp executable folder in your system.
In the main method use File.Copy() to copy contents from first file to second file.
Display the text in file2 using File.ReadAllText() method.
C#
// C# program to copy data from one file to another
using System;
using System.IO;

class GFG
{
    static void Main()
    {
        // Copy contents from file1 to file2; the third argument is true
        // so that the existing file2.txt is overwritten
        File.Copy("file1.txt", "file2.txt", true);

        // Display file2 contents
        Console.WriteLine(File.ReadAllText("file2.txt"));
    }
}
Output:
Now, file2.txt is:
|
[
{
"code": null,
"e": 24302,
"s": 24274,
"text": "\n16 Nov, 2021"
},
{
"code": null,
"e": 24594,
"s": 24302,
"text": "Given a file, now our task is to copy data from one file to another file using C#. So to do this task we use the Copy() method of the File class from the System.IO namespace. This function is used to copy content from one file to a new file. It has two different types of overloaded methods:"
},
{
"code": null,
"e": 24749,
"s": 24594,
"text": "1. Copy(String, String): This function is used to copy content from one file to a new file. It does not support overwriting of a file with the same name. "
},
{
"code": null,
"e": 24757,
"s": 24749,
"text": "Syntax:"
},
{
"code": null,
"e": 24782,
"s": 24757,
"text": "File.Copy(file1, file2);"
},
{
"code": null,
"e": 24843,
"s": 24782,
"text": "Where file1 is the first file and file2 is the second file. "
},
{
"code": null,
"e": 24904,
"s": 24843,
"text": "Exceptions: This method will throw the following exceptions:"
},
{
"code": null,
"e": 25014,
"s": 24904,
"text": "UnauthorizedAccessException: This exception will occur when the caller does not have the required permission."
},
{
"code": null,
"e": 25102,
"s": 25014,
"text": "ArgumentException: This exception will occur when file1 or file2 specifies a directory."
},
{
"code": null,
"e": 25180,
"s": 25102,
"text": "ArgumentNullException: This exception will occur when file1 or file2 is null."
},
{
"code": null,
"e": 25314,
"s": 25180,
"text": "PathTooLongException: This exception will occur when the specified path, file name, or both exceed the system-defined maximum length."
},
{
"code": null,
"e": 25422,
"s": 25314,
"text": "DirectoryNotFoundException: This exception will occur when the path specified in file1 or file2 is invalid."
},
{
"code": null,
"e": 25497,
"s": 25422,
"text": "FileNotFoundException: This exception will occur when file1 was not found."
},
{
"code": null,
"e": 25555,
"s": 25497,
"text": "IOException: This exception will occur when file2 exists."
},
{
"code": null,
"e": 25649,
"s": 25555,
"text": "NotSupportedException: This exception will occur when file1 or file2 is in an invalid format."
},
{
"code": null,
"e": 25813,
"s": 25649,
"text": "2. Copy(String, String, Boolean): This function is used to copy content from one file to a new file. It does not support overwriting of a file with the same name. "
},
{
"code": null,
"e": 25821,
"s": 25813,
"text": "Syntax:"
},
{
"code": null,
"e": 25854,
"s": 25821,
"text": "File.Copy(file1, file2, owrite);"
},
{
"code": null,
"e": 26027,
"s": 25854,
"text": "Where file1 is the first file, file2 is the second file, and write is a boolean variable if the destination file can be overwritten then it is set to true otherwise false. "
},
{
"code": null,
"e": 26088,
"s": 26027,
"text": "Exceptions: This method will throw the following exceptions:"
},
{
"code": null,
"e": 26294,
"s": 26088,
"text": "UnauthorizedAccessException: This exception will occur when the caller does not have the required permission. Or the file2 is readonly or write is set to true and file to is hidden but file1 is not hidden."
},
{
"code": null,
"e": 26382,
"s": 26294,
"text": "ArgumentException: This exception will occur when file1 or file2 specifies a directory."
},
{
"code": null,
"e": 26460,
"s": 26382,
"text": "ArgumentNullException: This exception will occur when file1 or file2 is null."
},
{
"code": null,
"e": 26594,
"s": 26460,
"text": "PathTooLongException: This exception will occur when the specified path, file name, or both exceed the system-defined maximum length."
},
{
"code": null,
"e": 26702,
"s": 26594,
"text": "DirectoryNotFoundException: This exception will occur when the path specified in file1 or file2 is invalid."
},
{
"code": null,
"e": 26777,
"s": 26702,
"text": "FileNotFoundException: This exception will occur when file1 was not found."
},
{
"code": null,
"e": 26855,
"s": 26777,
"text": "IOException: This exception will occur when file2 exists and owrite is false."
},
{
"code": null,
"e": 26949,
"s": 26855,
"text": "NotSupportedException: This exception will occur when file1 or file2 is in an invalid format."
},
{
"code": null,
"e": 26958,
"s": 26949,
"text": "Example:"
},
{
"code": null,
"e": 27054,
"s": 26958,
"text": "Let us consider two files named file1 and file2. Now the file1.txt contains the following text:"
},
{
"code": null,
"e": 27101,
"s": 27054,
"text": "Now the file2.txt contains the following text:"
},
{
"code": null,
"e": 27111,
"s": 27101,
"text": "Approach:"
},
{
"code": null,
"e": 27317,
"s": 27111,
"text": "Place two files in your csharp executable folder in your system.In the main method use File.Copy() to copy contents from first file to second file.Display the text in file2 using File.ReadAllText() method."
},
{
"code": null,
"e": 27382,
"s": 27317,
"text": "Place two files in your csharp executable folder in your system."
},
{
"code": null,
"e": 27466,
"s": 27382,
"text": "In the main method use File.Copy() to copy contents from first file to second file."
},
{
"code": null,
"e": 27525,
"s": 27466,
"text": "Display the text in file2 using File.ReadAllText() method."
},
{
"code": null,
"e": 27528,
"s": 27525,
"text": "C#"
},
{
"code": "// C# program to copy data from one file to anotherusing System;using System.IO; class GFG{ static void Main(){ // Copy contents from file1 to file2 File.Copy(\"file1.txt\", \"file2.txt\"); // Display file2 contents Console.WriteLine(File.ReadAllText(\"file2.txt\"));}}",
"e": 27820,
"s": 27528,
"text": null
},
{
"code": null,
"e": 27828,
"s": 27820,
"text": "Output:"
},
{
"code": null,
"e": 27847,
"s": 27828,
"text": "Now, file2.txt is:"
},
{
"code": null,
"e": 27854,
"s": 27847,
"text": "Picked"
},
{
"code": null,
"e": 27857,
"s": 27854,
"text": "C#"
},
{
"code": null,
"e": 27869,
"s": 27857,
"text": "C# Programs"
},
{
"code": null,
"e": 27967,
"s": 27869,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27985,
"s": 27967,
"text": "Destructors in C#"
},
{
"code": null,
"e": 28008,
"s": 27985,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 28036,
"s": 28008,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 28076,
"s": 28036,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 28119,
"s": 28076,
"text": "C# | How to insert an element in an Array?"
},
{
"code": null,
"e": 28159,
"s": 28119,
"text": "Convert String to Character Array in C#"
},
{
"code": null,
"e": 28184,
"s": 28159,
"text": "Socket Programming in C#"
},
{
"code": null,
"e": 28218,
"s": 28184,
"text": "Program to Print a New Line in C#"
},
{
"code": null,
"e": 28264,
"s": 28218,
"text": "Getting a Month Name Using Month Number in C#"
}
] |
A Guide to Scraping HTML Tables with Pandas and BeautifulSoup | by Otávio Simões Silveira | Towards Data Science
|
It’s very common to run into HTML tables while scraping a webpage, and without the right approach, it can be a little tricky to extract useful, consistent data from them.
In this article, you’ll see how to perform a quick, efficient scraping of these elements with two main different approaches: using only the Pandas library and using the traditional scraping library BeautifulSoup.
As an example, I scraped the Premier League classification table. This is good because it’s a common table that can be found on basically any sports website. Although it makes sense to mention this, the particular table being scraped won’t make much difference as you read, since I tried to make this article as general as possible.
If all you want is to get some tables from a page and nothing else, you don’t even need to set up a whole scraper to do it as Pandas can get this job done by itself. The pandas.read_html() function uses some scraping libraries such as BeautifulSoup and Urllib to return a list containing all the tables in a page as DataFrames. You just need to pass the URL of the page.
dfs = pd.read_html(url)
All you need to do now is to select the DataFrame you want from this list:
df = dfs[4]
If you’re not sure about the order of the frames in the list or if you don’t want your code to rely on this order (websites can change), you can always search the DataFrames to find the one you’re looking for by its length...
for df in dfs:
    if len(df) == 20:
        the_one = df
        break
... or by the name of its columns, for example.
for df in dfs:
    if list(df.columns) == ['#', 'Team', 'MP', 'W', 'D', 'L', 'Points']:
        the_one = df
        break
But Pandas isn’t done making our lives easier. This function accepts some helpful arguments to help you get the right table. You can use match to specify a string or regex that the table should match; header to get the table with the specific headers you pass; and the attrs parameter allows you to identify the table by its class or id, for example.
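To see those arguments in action without hitting a real website, here is a small sketch run against an inline HTML snippet; the table id "standings" and the team data are made up for illustration.

```python
import io
import pandas as pd

# Two tables: only the first has id="standings" and contains "Team".
html = """
<table id="standings">
  <tr><th>#</th><th>Team</th><th>Points</th></tr>
  <tr><td>1</td><td>Liverpool</td><td>99</td></tr>
  <tr><td>2</td><td>Man City</td><td>81</td></tr>
</table>
<table id="other">
  <tr><th>A</th></tr><tr><td>1</td></tr>
</table>
"""

# match keeps only tables whose text matches the string/regex;
# attrs identifies a table by its HTML attributes.
dfs = pd.read_html(io.StringIO(html), match="Team", attrs={"id": "standings"})
print(dfs[0])
```

Only one DataFrame comes back, so there is no need to guess its position in the list.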
However, if you’re not scraping only the tables and are using, let’s say, Requests to get the page, you’re encouraged to pass page.text to the function instead of the URL:
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
dfs = pd.read_html(page.text)
The same goes if you’re using Selenium’s web driver to get the page:
dfs = pd.read_html(driver.page_source)
That’s because by doing this you’ll significantly reduce the time your code takes to run since the read_html() function does not need to get the page anymore. Check the average time elapsed for one hundred repetitions in each scenario:
Using the URL:
Average time elapsed: 0.2345 seconds
Using page.text:
Average time elapsed: 0.0774 seconds
Using the URL made the code about three times slower. So it only makes sense to use it if you’re not going to get the page first using other libraries.
Although Pandas is really great, it does not solve all of our problems. There will be times when you’ll need to scrape a table element-wise, maybe because you don’t want the entire table or because the table’s structure is not consistent or for whatever other reason.
To cover that, we first need to understand the standard structure of an HTML table:
<table>
  <tr>
    <th> <th> <th> <th> <th> <th> <th>
  </tr>
  <tr>
    <td> <td> <td> <td> <td> <td> <td>
  </tr>
  <tr>
    <td> <td> <td> <td> <td> <td> <td>
  </tr>
  ...
</table>
Where tr stands for “table row”, th stands for “table header” and td stands for “table data”, which is where the data is stored as text.
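To make that structure concrete before moving on, here is a minimal sketch that walks the same tr/th/td pattern using only Python's standard-library html.parser; the sample table is made up for illustration.

```python
from html.parser import HTMLParser

# Collects the text of every <th>/<td> cell, grouped by <tr> row.
class TableParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append([])      # start a new row
        elif tag in ("th", "td"):
            self._in_cell = True      # the next data belongs to a cell

    def handle_endtag(self, tag):
        if tag in ("th", "td"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self.rows[-1].append(data.strip())

html = """<table>
  <tr><th>Team</th><th>Points</th></tr>
  <tr><td>Liverpool</td><td>99</td></tr>
</table>"""

parser = TableParser()
parser.feed(html)
print(parser.rows)   # [['Team', 'Points'], ['Liverpool', '99']]
```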
The pattern is usually helpful, so all we have left to do is select the correct elements using BeautifulSoup.
The first thing to do is to find the table. The find_all() method returns a list of all elements that satisfy the requirements we pass to it. We then must select the table we need in that list:
table = soup.find_all('table')[4]
Depending on the website, it may be necessary to specify the table class or id, for instance.
The rest of the process is now almost intuitive, right? We just need to select all the tr tags and the text in the th and td tags inside them. We could just use find_all() again to find all the tr tags, yes, but we can also iterate over these tags in a more straightforward manner.
The children attribute returns an iterable object with all the tags right beneath the parent tag, which is table, therefore it returns all the tr tags. As it’s an iterable object, we need to use it as such.

After that, each child is a tr tag. We just need to extract the text of each td tag inside it. Here’s the code for all this:
for child in soup.find_all('table')[4].children:
    for td in child:
        print(td.text)
And the process is done! You then have the data you were looking for and you can manipulate it the way it best suits you.
Let’s say you’re not interested in the table’s header, for instance. Instead of using children, you could select the first tr tag, which contains the header data, and use the next_siblings attribute. This, just like the children attribute, will return an iterable, but with all the other tr tags, which are the siblings of the first one we selected. You’d then be skipping the header of the table.
for sibling in soup.find_all('table')[4].tr.next_siblings:
    for td in sibling:
        print(td.text)
Just like children and the next siblings, you can also look for the previous siblings, parents, descendants, and way more. The possibilities are endless, so make sure to check the BeautifulSoup documentation to find the best option for your scraper.
We’ve so far written some very straightforward code to extract HTML tables using Python. However, when doing this for real you’ll, of course, have some other issues to consider.
For instance, you need to know how you’re going to store your data. Will you directly write it in a text file? Or will you store it in a list or in a dictionary and then create the .csv file? Or will you create an empty DataFrame and fill it with the data? There certainly are lots of possibilities. My choice was to store everything in a big list of lists that will be later transformed into a DataFrame and exported as a .csv file.
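As a sketch of that list-of-lists approach, here using only the standard library's csv module (the rows are made up, and the in-memory buffer stands in for a real file):

```python
import csv
import io

# Each scraped row becomes one inner list; the header goes first.
rows = [
    ["#", "Team", "Points"],    # header
    ["1", "Liverpool", "99"],   # made-up data rows
    ["2", "Man City", "81"],
]

# Writing to an in-memory buffer for the demo; swap in
# open("table.csv", "w", newline="") to produce a real file.
buffer = io.StringIO()
csv.writer(buffer).writerows(rows)
print(buffer.getvalue())
```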
On another subject, you might want to use some try and except clauses in your code to make it prepared to handle exceptions it may find along the way. Of course, you’ll also want to insert some random pauses in order not to overload the server, and also take advantage of a proxy provider, such as Infatica, to make sure your code will keep running as long as there are tables left to scrape and that you and your connection are protected.
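A minimal sketch of such a defensive loop; scrape_table and the page names are hypothetical placeholders, and the pauses are shortened for the demo:

```python
import random
import time

def scrape_table(url):
    # Placeholder for the real request/parse logic.
    if "bad" in url:
        raise ValueError("no table found")
    return [url, "ok"]

results, failures = [], []
for url in ["page1", "bad-page", "page2"]:
    try:
        results.append(scrape_table(url))
    except ValueError as exc:
        failures.append((url, str(exc)))   # record the failure and keep going
    # Random pause so the server isn't hammered (tiny values for the demo).
    time.sleep(random.uniform(0.001, 0.005))

print(results, failures)
```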
In this example, I scraped the Premier League table after every round in the entire 2019/20 season using most of what I’ve covered in this article. This is the entire code for it:
Everything is there: gathering all the elements in the table using the children attribute, handling exceptions, transforming the data into a DataFrame, exporting a .csv file, and pausing the code for a random number of seconds. After all this, all the data gathered by this code produced this interesting chart:
You’re not going to find the data needed to plot a chart like that waiting for you on the internet. But that’s the beauty of scraping: you can go get the data yourself!
To wrap this up, I hope this was somehow useful and that you never have problems when scraping an HTML table again. If you have a question, a suggestion, or just want to be in touch, feel free to reach out through Twitter, GitHub, or LinkedIn.
Thanks for reading!
|
[
{
"code": null,
"e": 342,
"s": 171,
"text": "It’s very common to run into HTML tables while scraping a webpage, and without the right approach, it can be a little tricky to extract useful, consistent data from them."
},
{
"code": null,
"e": 555,
"s": 342,
"text": "In this article, you’ll see how to perform a quick, efficient scraping of these elements with two main different approaches: using only the Pandas library and using the traditional scraping library BeautifulSoup."
},
{
"code": null,
"e": 887,
"s": 555,
"text": "As an example, I scraped the Premier League classification table. This is good because it’s a common table that can be found on basically any sports website. Although it makes sense to inform you this, the table being is scraped won’t make much difference while you read as I tried to make this article as generalistic as possible."
},
{
"code": null,
"e": 1258,
"s": 887,
"text": "If all you want is to get some tables from a page and nothing else, you don’t even need to set up a whole scraper to do it as Pandas can get this job done by itself. The pandas.read_html() function uses some scraping libraries such as BeautifulSoup and Urllib to return a list containing all the tables in a page as DataFrames. You just need to pass the URL of the page."
},
{
"code": null,
"e": 1282,
"s": 1258,
"text": "dfs = pd.read_html(url)"
},
{
"code": null,
"e": 1357,
"s": 1282,
"text": "All you need to do now is to select the DataFrame you want from this list:"
},
{
"code": null,
"e": 1369,
"s": 1357,
"text": "df = dfs[4]"
},
{
"code": null,
"e": 1595,
"s": 1369,
"text": "If you’re not sure about the order of the frames in the list or if you don’t want your code to rely on this order (websites can change), you can always search the DataFrames to find the one you’re looking for by its length..."
},
{
"code": null,
"e": 1664,
"s": 1595,
"text": "for df in dfs: if len(df) == 20: the_one = df break"
},
{
"code": null,
"e": 1712,
"s": 1664,
"text": "... or by the name of its columns, for example."
},
{
"code": null,
"e": 1826,
"s": 1712,
"text": "for df in dfs: if df.columns == ['#', 'Team', 'MP', 'W', 'D', 'L', 'Points']: the_one = df break"
},
{
"code": null,
"e": 2172,
"s": 1826,
"text": "But Pandas isn’t done making our lives easier. This function accepts some helpful arguments to help you get the right table. You can use match to specify a string o regex that the table should match; header to get the table with the specific headers you pass; the attrs parameter allows you to identify the table by its class or id, for example."
},
{
"code": null,
"e": 2344,
"s": 2172,
"text": "However, if you’re not scraping only the tables and are using, let’s say, Requests to get the page, you’re encouraged to pass page.text to the function instead of the URL:"
},
{
"code": null,
"e": 2444,
"s": 2344,
"text": "page = requests.get(url)soup = BeautifulSoup(page.text, 'html.parser')dfs = pd.read_html(page.text)"
},
{
"code": null,
"e": 2513,
"s": 2444,
"text": "The same goes if you’re using Selenium’s web driver to get the page:"
},
{
"code": null,
"e": 2552,
"s": 2513,
"text": "dfs = pd.read_html(driver.page_source)"
},
{
"code": null,
"e": 2788,
"s": 2552,
"text": "That’s because by doing this you’ll significantly reduce the time your code takes to run since the read_html() function does not need to get the page anymore. Check the average time elapsed for one hundred repetitions in each scenario:"
},
{
"code": null,
"e": 2891,
"s": 2788,
"text": "Using the URL:Average time elapsed: 0.2345 secondsUsing page.text:Average time elapsed: 0.0774 seconds"
},
{
"code": null,
"e": 3043,
"s": 2891,
"text": "Using the URL made the code about three times slower. So it only makes sense to use it if you’re not going to get the page first using other libraries."
},
{
"code": null,
"e": 3311,
"s": 3043,
"text": "Although Pandas is really great, it does not solve all of our problems. There will be times when you’ll need to scrape a table element-wise, maybe because you don’t want the entire table or because the table’s structure is not consistent or for whatever other reason."
},
{
"code": null,
"e": 3395,
"s": 3311,
"text": "To cover that, we first need to understand the standard structure of an HTML table:"
},
{
"code": null,
"e": 3717,
"s": 3395,
"text": "<table> <tr> <th> <th> <th> <th> <th> <th> <th> </tr> <tr> <td> <td> <td> <td> <td> <td> <td> </tr> <tr> <td> <td> <td> <td> <td> <td> <td> </tr>...</table>"
},
{
"code": null,
"e": 3854,
"s": 3717,
"text": "Where tr stands for “table row”, th stands for “table header” and td stands for “table data”, which is where the data is stored as text."
},
{
"code": null,
"e": 3964,
"s": 3854,
"text": "The pattern is usually helpful, so all we have left to do is select the correct elements using BeautifulSoup."
},
{
"code": null,
"e": 4160,
"s": 3964,
"text": "The first thing to do is to find the table. The find_all() method returns a list of all elements that satisfied the requirements we pass to it. We then must select the table we need in that list:"
},
{
"code": null,
"e": 4194,
"s": 4160,
"text": "table = soup.find_all('table')[4]"
},
{
"code": null,
"e": 4289,
"s": 4194,
"text": "Depending on the website, it will be necessary to specify the table class or id, for instance."
},
{
"code": null,
"e": 4575,
"s": 4289,
"text": "The rest of the process is now almost intuitive, right? We just need to select all the tr tags and the text in the th and td tags inside them. We could just use find_all() again to find all the tr tags, yes, but we can also to iterate over these tags in a more straight forward manner."
},
{
"code": null,
"e": 4781,
"s": 4575,
"text": "Thechildren attribute returns an iterable object with all the tags right beneath the parent tag, which is table, therefore it returns all the tr tags. As it’s an iterable object, we need to use it as such."
},
{
"code": null,
"e": 4904,
"s": 4781,
"text": "After that, each child is tr tag. We just need to extract the text of each td tag inside it. Here’s the code for all this:"
},
{
"code": null,
"e": 4995,
"s": 4904,
"text": "for child in soup.find_all('table')[4].children: for td in child: print(td.text)"
},
{
"code": null,
"e": 5117,
"s": 4995,
"text": "And the process is done! You then have the data you were looking for and you can manipulate it the way it best suits you."
},
{
"code": null,
"e": 5512,
"s": 5117,
"text": "Let’s say you’re not interested in the table’s header, for instance. Instead of using children, you could select the first trtag, which contains the header data, and use the next_siblingsattribute. This, just like thechildren attribute, will return an iterable, but with all the other tr tags, which are the siblings of the first one we selected. You’d be then skipping the header of the table."
},
{
"code": null,
"e": 5615,
"s": 5512,
"text": "for sibling in soup.find_all('table')[4].tr.next_siblings: for td in sibling: print(td.text)"
},
{
"code": null,
"e": 5865,
"s": 5615,
"text": "Just like children and the next siblings, you can also look for the previous siblings, parents, descendants, and way more. The possibilities are endless, so make sure to check the BeautifulSoup documentation to find the best option for your scraper."
},
{
"code": null,
"e": 6044,
"s": 5865,
"text": "We’ve so far written some very straight forward code to extract HTML tables using Python. However, when doing this for real you’ll, of course, have some other issues to consider."
},
{
"code": null,
"e": 6480,
"s": 6044,
"text": "For instance, you need to know how you’re going to store your data. Will you directly write it in a text file? Or will you store it in a list or in a dictionary and then creating the .csv file? Or will you create an empty DataFrame and fill it with the data? There certainly are lots of possibilities. My choice was to store everything in a big list of lists that will be later transformed into a DataFrame and exported as a .csv file."
},
{
"code": null,
"e": 6925,
"s": 6480,
"text": "In another subject, you might want to use some try and except clauses in your code to make it prepared to handle some exceptions it may find along the way. Of course, you’ll also want to insert some random pauses in order not to overload the server, and also take advantage of a proxy provider, such as Infatica, to make sure your code will keep running as long as there are tables left to scrape and that you and your connection are protected."
},
{
"code": null,
"e": 7105,
"s": 6925,
"text": "In this example, I scraped the Premier League table after every round in the entire 2019/20 season using most of what I’ve covered in this article. This is the entire code for it:"
},
{
"code": null,
"e": 7417,
"s": 7105,
"text": "Everything is there: gathering all the elements in the table using the children attribute, handling exceptions, transforming the data into a DataFrame, exporting a .csv file, and pausing the code for a random number of seconds. After all this, all the data gathered by this code produced this interesting chart:"
},
{
"code": null,
"e": 7586,
"s": 7417,
"text": "You’re not going to find the data needed to plot a chart like that waiting for you on the internet. But that’s the beauty of scraping: you can go get the data yourself!"
},
{
"code": null,
"e": 7824,
"s": 7586,
"text": "As a wrap this up I hope was somehow useful and that you never have problems when scraping an HTML table again. If you have a question, a suggestion, or just want to be in touch, feel free to contact through Twitter, GitHub, or Linkedin."
}
] |
SQL Query to Print the Name and Salary of the Person Having Least Salary in the Department - GeeksforGeeks
|
29 Dec, 2021
In SQL, we often need to find department-wise information from a table containing information about employees. One such piece of information is the minimum salary of the employees of each department. We shall use the GROUP BY clause to achieve this, as illustrated below. For this article, we will be using Microsoft SQL Server as our database.
Step 1: Create a Database. For this use the below command to create a database named GeeksForGeeks.
Query:
CREATE DATABASE GeeksForGeeks
Output:
Step 2: Use the GeeksForGeeks database. For this use the below command.
Query:
USE GeeksForGeeks
Output:
Step 3: Create a table COMPANY inside the database GeeksForGeeks. This table has 4 columns namely EMPLOYEE_ID, EMPLOYEE_NAME, DEPARTMENT_NAME and SALARY containing the id, name, department and the salary of various employees.
Query:
CREATE TABLE COMPANY(
EMPLOYEE_ID INT PRIMARY KEY,
EMPLOYEE_NAME VARCHAR(10),
DEPARTMENT_NAME VARCHAR(10),
SALARY INT);
Output:
Step 4: Describe the structure of the table COMPANY.
Query:
EXEC SP_COLUMNS COMPANY;
Output:
Step 5: Insert 5 rows into the COMPANY table.
Query:
INSERT INTO COMPANY VALUES(1,'RAM','HR',10000);
INSERT INTO COMPANY VALUES(2,'AMRIT','MRKT',20000);
INSERT INTO COMPANY VALUES(3,'RAVI','HR',30000);
INSERT INTO COMPANY VALUES(4,'NITIN','MRKT',40000);
INSERT INTO COMPANY VALUES(5,'VARUN','IT',50000);
Output:
Step 6: Display all the rows of the COMPANY table.
Query:
SELECT * FROM COMPANY;
Output:
Step 7: Display the minimum salary obtained by the employees in each department along with their employee name and department. We will use the IN clause here to compare the salaries obtained from the outer query to minimum salaries obtained from the inner query. The inner query uses the GROUP BY clause to return only 1 salary from each department i.e. the least one. MIN aggregate function is used to find the least salary in a department.
Syntax:
SELECT INFORMATION FROM TABLE_NAME WHERE
COLUMN_1 IN (SELECT AGGREGATE_FUNCTION
(COLUMN_1) FROM TABLE_NAME GROUP BY COLUMN_2);
Query:
SELECT EMPLOYEE_NAME,DEPARTMENT_NAME,
SALARY FROM COMPANY WHERE
SALARY IN (SELECT MIN(SALARY) FROM
COMPANY GROUP BY DEPARTMENT_NAME);
Output:
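As a quick sanity check of the expected result, here is a sketch reproducing the same table and query with Python's built-in sqlite3 module (note this is SQLite, not SQL Server, so the column types are adapted; the data is exactly the five rows inserted above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Same schema as above, with SQLite-friendly types.
cur.execute("""CREATE TABLE COMPANY(
    EMPLOYEE_ID INTEGER PRIMARY KEY,
    EMPLOYEE_NAME TEXT,
    DEPARTMENT_NAME TEXT,
    SALARY INTEGER)""")
cur.executemany(
    "INSERT INTO COMPANY VALUES(?,?,?,?)",
    [(1, "RAM", "HR", 10000), (2, "AMRIT", "MRKT", 20000),
     (3, "RAVI", "HR", 30000), (4, "NITIN", "MRKT", 40000),
     (5, "VARUN", "IT", 50000)])

# Same query: the subquery yields the least salary per department.
cur.execute("""SELECT EMPLOYEE_NAME, DEPARTMENT_NAME, SALARY
               FROM COMPANY WHERE SALARY IN
               (SELECT MIN(SALARY) FROM COMPANY GROUP BY DEPARTMENT_NAME)""")
rows = cur.fetchall()
print(rows)   # RAM/HR/10000, AMRIT/MRKT/20000, VARUN/IT/50000
```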
|
[
{
"code": null,
"e": 23877,
"s": 23849,
"text": "\n29 Dec, 2021"
},
{
"code": null,
"e": 24225,
"s": 23877,
"text": "In SQL, we need to find out the department wise information from the given table containing information about employees. One such data is the minimum salary of the employees of each department. We shall use the GROUP BY clause to achieve this. This is illustrated below. For this article, we will be using the Microsoft SQL Server as our database."
},
{
"code": null,
"e": 24325,
"s": 24225,
"text": "Step 1: Create a Database. For this use the below command to create a database named GeeksForGeeks."
},
{
"code": null,
"e": 24332,
"s": 24325,
"text": "Query:"
},
{
"code": null,
"e": 24362,
"s": 24332,
"text": "CREATE DATABASE GeeksForGeeks"
},
{
"code": null,
"e": 24370,
"s": 24362,
"text": "Output:"
},
{
"code": null,
"e": 24442,
"s": 24370,
"text": "Step 2: Use the GeeksForGeeks database. For this use the below command."
},
{
"code": null,
"e": 24449,
"s": 24442,
"text": "Query:"
},
{
"code": null,
"e": 24467,
"s": 24449,
"text": "USE GeeksForGeeks"
},
{
"code": null,
"e": 24475,
"s": 24467,
"text": "Output:"
},
{
"code": null,
"e": 24701,
"s": 24475,
"text": "Step 3: Create a table COMPANY inside the database GeeksForGeeks. This table has 4 columns namely EMPLOYEE_ID, EMPLOYEE_NAME, DEPARTMENT_NAME and SALARY containing the id, name, department and the salary of various employees."
},
{
"code": null,
"e": 24708,
"s": 24701,
"text": "Query:"
},
{
"code": null,
"e": 24828,
"s": 24708,
"text": "CREATE TABLE COMPANY(\nEMPLOYEE_ID INT PRIMARY KEY,\nEMPLOYEE_NAME VARCHAR(10),\nDEPARTMENT_NAME VARCHAR(10),\nSALARY INT);"
},
{
"code": null,
"e": 24836,
"s": 24828,
"text": "Output:"
},
{
"code": null,
"e": 24889,
"s": 24836,
"text": "Step 4: Describe the structure of the table COMPANY."
},
{
"code": null,
"e": 24896,
"s": 24889,
"text": "Query:"
},
{
"code": null,
"e": 24921,
"s": 24896,
"text": "EXEC SP_COLUMNS COMPANY;"
},
{
"code": null,
"e": 24929,
"s": 24921,
"text": "Output:"
},
{
"code": null,
"e": 24975,
"s": 24929,
"text": "Step 5: Insert 5 rows into the COMPANY table."
},
{
"code": null,
"e": 24982,
"s": 24975,
"text": "Query:"
},
{
"code": null,
"e": 25233,
"s": 24982,
"text": "INSERT INTO COMPANY VALUES(1,'RAM','HR',10000);\nINSERT INTO COMPANY VALUES(2,'AMRIT','MRKT',20000);\nINSERT INTO COMPANY VALUES(3,'RAVI','HR',30000);\nINSERT INTO COMPANY VALUES(4,'NITIN','MRKT',40000);\nINSERT INTO COMPANY VALUES(5,'VARUN','IT',50000);"
},
{
"code": null,
"e": 25241,
"s": 25233,
"text": "Output:"
},
{
"code": null,
"e": 25292,
"s": 25241,
"text": "Step 6: Display all the rows of the COMPANY table."
},
{
"code": null,
"e": 25299,
"s": 25292,
"text": "Query:"
},
{
"code": null,
"e": 25322,
"s": 25299,
"text": "SELECT * FROM COMPANY;"
},
{
"code": null,
"e": 25330,
"s": 25322,
"text": "Output:"
},
{
"code": null,
"e": 25772,
"s": 25330,
"text": "Step 7: Display the minimum salary obtained by the employees in each department along with their employee name and department. We will use the IN clause here to compare the salaries obtained from the outer query to minimum salaries obtained from the inner query. The inner query uses the GROUP BY clause to return only 1 salary from each department i.e. the least one. MIN aggregate function is used to find the least salary in a department."
},
{
"code": null,
"e": 25780,
"s": 25772,
"text": "Syntax:"
},
{
"code": null,
"e": 25907,
"s": 25780,
"text": "SELECT INFORMATION FROM TABLE_NAME WHERE\nCOLUMN_1 IN (SELECT AGGREGATE_FUNCTION\n(COLUMN_1) FROM TABLE_NAME GROUP BY COLUMN_2);"
},
{
"code": null,
"e": 25914,
"s": 25907,
"text": "Query:"
},
{
"code": null,
"e": 26048,
"s": 25914,
"text": "SELECT EMPLOYEE_NAME,DEPARTMENT_NAME,\nSALARY FROM COMPANY WHERE\nSALARY IN (SELECT MIN(SALARY) FROM\nCOMPANY GROUP BY DEPARTMENT_NAME);"
},
{
"code": null,
"e": 26056,
"s": 26048,
"text": "Output:"
},
{
"code": null,
"e": 26063,
"s": 26056,
"text": "Picked"
},
{
"code": null,
"e": 26073,
"s": 26063,
"text": "SQL-Query"
},
{
"code": null,
"e": 26084,
"s": 26073,
"text": "SQL-Server"
},
{
"code": null,
"e": 26088,
"s": 26084,
"text": "SQL"
},
{
"code": null,
"e": 26092,
"s": 26088,
"text": "SQL"
},
{
"code": null,
"e": 26190,
"s": 26092,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26199,
"s": 26190,
"text": "Comments"
},
{
"code": null,
"e": 26212,
"s": 26199,
"text": "Old Comments"
},
{
"code": null,
"e": 26233,
"s": 26212,
"text": "SQL | DROP, TRUNCATE"
},
{
"code": null,
"e": 26295,
"s": 26233,
"text": "How to Select Data Between Two Dates and Times in SQL Server?"
},
{
"code": null,
"e": 26337,
"s": 26295,
"text": "SQL vs NoSQL: Which one is better to use?"
},
{
"code": null,
"e": 26370,
"s": 26337,
"text": "Advanced SQL Interview Questions"
},
{
"code": null,
"e": 26396,
"s": 26370,
"text": "SQL | OFFSET-FETCH Clause"
},
{
"code": null,
"e": 26479,
"s": 26396,
"text": "Insert multiple values into multiple tables using a single statement in SQL Server"
},
{
"code": null,
"e": 26545,
"s": 26479,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 26560,
"s": 26545,
"text": "SQL | Comments"
},
{
"code": null,
"e": 26573,
"s": 26560,
"text": "SQL | CREATE"
}
] |
Pattern matching in Python with Regex
|
In the real world, string parsing in most programming languages is handled by regular expressions. A regular expression in the Python programming language is a method used for matching text patterns.
The “re” module which comes with every python installation provides regular expression support.
In python, a regular expression search is typically written as:
match = re.search(pattern, string)
The re.search() method takes two arguments, a regular expression pattern and a string, and searches for that pattern within the string. If the pattern is found within the string, search() returns a match object, or None otherwise. So with a regular expression, given a string, you can determine whether that string matches a given pattern and, optionally, collect substrings that contain relevant information. A regular expression can be used to answer questions like −
Is this string a valid URL?
Which users in /etc/passwd are in a given group?
What is the date and time of all warning messages in a log file?
What username and document were requested by the URL a visitor typed?
Regular expressions are a complicated mini-language. They rely on special characters to match unknown strings, but let's start with literal characters, such as letters, numbers, and the space character, which always match themselves. Let's see a basic example:
#Need module 're' for regular expression
import re

search_string = "TutorialsPoint"
pattern = "Tutorials"
match = re.search(pattern, search_string)
#If-statement after search() tests if it succeeded
if match:
    print("regex matches: ", match.group())
else:
    print('pattern not found')
regex matches: Tutorials
The “re” module of Python has numerous methods, and to test whether a particular regular expression matches a specific string, you can use re.search(). The returned match object provides additional information, like which part of the string the match was found in.
matchObject = re.search(pattern, input_string, flags=0)
#Need module 're' for regular expression
import re
# Lets use a regular expression to match a date string.
regex = r"([a-zA-Z]+) (\d+)"
if re.search(regex, "Jan 2"):
    match = re.search(regex, "Jan 2")
    # This will print [0, 5), since it matches at the beginning and end of the
    # string
    print("Match at index %s, %s" % (match.start(), match.end()))
    # The groups contain the matched values. In particular:
    # match.group(0) always returns the fully matched string
    # match.group(1), match.group(2), ... will return the capture
    # groups in order from left to right in the input string
    # match.group() is equivalent to match.group(0)
    # So this will print "Jan 2"
    print("Full match: %s" % (match.group(0)))
    # So this will print "Jan"
    print("Month: %s" % (match.group(1)))
    # So this will print "2"
    print("Day: %s" % (match.group(2)))
else:
    # If re.search() does not match, then None is returned
    print("Pattern not Found! ")
Match at index 0, 5
Full match: Jan 2
Month: Jan
Day: 2
As the above method stops after the first match, it is better suited for testing a regular expression than for extracting data.
If the pattern includes two or more parentheses, then the end result will be a list of tuples instead of a list of strings, thanks to the parentheses () grouping mechanism and findall(). Each matched pattern is represented by a tuple, and each tuple contains the group(1), group(2), ... data.
import re

regex = r'([\w\.-]+)@([\w\.-]+)'
# avoid shadowing the built-in names 'str' and 'tuple'
text = 'hello john@hotmail.com, hello@Tutorialspoint.com, hello python@gmail.com'
matches = re.findall(regex, text)
print(matches)
for match in matches:
    print("Username: ", match[0])  # username
    print("Host: ", match[1])      # host
[('john', 'hotmail.com'), ('hello', 'Tutorialspoint.com'), ('python', 'gmail.com')]
Username: john
Host: hotmail.com
Username: hello
Host: Tutorialspoint.com
Username: python
Host: gmail.com
Another common task is to search for all instances of a pattern in a given string and replace them; re.sub(pattern, replacement, string) does exactly that. For example, to replace all instances of an old email domain:
# required library
import re

# given string (avoid shadowing the built-in name 'str')
text = 'hello john@hotmail.com, hello@Tutorialspoint.com, hello python@gmail.com, Hello World!'
# pattern to match
pattern = r'([\w\.-]+)@([\w\.-]+)'
# replacement for the matched pattern;
# \1 is group(1), \2 is group(2) in the replacement
replace = r'\1@XYZ.com'
# re.sub(pattern, replacement, string) returns a new string with all replacements
print(re.sub(pattern, replace, text))
hello john@XYZ.com, hello@XYZ.com, hello python@XYZ.com, Hello World!
In Python regular expressions like the ones above, we can use different options to modify the behavior of the pattern match. These extra, optional flag arguments are added to search(), findall(), etc., for example re.search(pattern, string, re.IGNORECASE).
IGNORECASE − As the name indicates, it makes the pattern case-insensitive (upper/lowercase); with this, strings containing ‘a’ and ‘A’ both match.

DOTALL − re.DOTALL allows the dot (.) metacharacter to match any character, including a newline (\n).

MULTILINE − re.MULTILINE allows matching the start (^) and end ($) of each line of a string. Otherwise, ^ and $ would just match the start and end of the whole string.
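A short sketch of these flags in action (the sample string is illustrative):

```python
import re

text = "First line\nsecond LINE"

# IGNORECASE: 'line' matches regardless of case
print(re.findall(r"line", text, re.IGNORECASE))   # ['line', 'LINE']

# MULTILINE: ^ matches at the start of every line, not just the string
print(re.findall(r"^\w+", text, re.MULTILINE))    # ['First', 'second']

# DOTALL: . also matches the newline, so the pattern can span both lines
print(re.search(r"First.*LINE", text, re.DOTALL).group())
```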
Laravel - Authorization
In the previous chapter, we studied the authentication process in Laravel. This chapter explains the authorization process in Laravel.
Before proceeding further into learning about the authorization process in Laravel, let us understand the difference between authentication and authorization.
In authentication, the system or the web application identifies its users through the credentials they provide. If it finds that the credentials are valid, they are authenticated, or else they are not.
In authorization, the system or the web application checks if the authenticated users can access the resources that they are trying to access or make a request for. In other words, it checks their rights and permissions over the requested resources. If it finds that they can access the resources, it means that they are authorized.
Thus, authentication involves checking the validity of the user credentials, and authorization involves checking the rights and permissions over the resources that an authenticated user has.
Laravel provides a simple mechanism for authorization that contains two primary ways, namely Gates and Policies.
Gates are used to determine whether a user is authorized to perform a specified action. They are typically defined in App/Providers/AuthServiceProvider.php using the Gate facade. Gates are functions declared to perform the authorization check.
Policies are declared within an array and are used within the classes and methods that use the authorization mechanism.
The following lines of code show how to use Gates and Policies for authorizing a user in a Laravel web application. Note that in this example, the boot function is used to register the authorization services.
<?php
namespace App\Providers;
use Illuminate\Contracts\Auth\Access\Gate as GateContract;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;
class AuthServiceProvider extends ServiceProvider{
/**
* The policy mappings for the application.
*
* @var array
*/
protected $policies = [
'App\Model' => 'App\Policies\ModelPolicy',
];
/**
* Register any application authentication / authorization services.
*
* @param \Illuminate\Contracts\Auth\Access\Gate $gate
* @return void
*/
public function boot(GateContract $gate) {
$this->registerPolicies($gate);
//
}
}
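As an illustrative sketch (the ability name 'update-post' and the Post model are hypothetical names, not part of the example above), a gate can also be defined directly with the Gate facade inside boot() and then checked elsewhere in application code:

<?php

use Illuminate\Support\Facades\Gate;

// In boot() of AuthServiceProvider:
Gate::define('update-post', function ($user, $post) {
   // authorize only the author of the post
   return $user->id === $post->user_id;
});

// Later, e.g. in a controller:
if (Gate::allows('update-post', $post)) {
   // the current user may update this post
}

Gate::define registers the check, and Gate::allows evaluates it for the currently authenticated user.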
stunnel - Unix, Linux Command
stunnel can be used to add SSL functionality to commonly used Inetd
daemons like POP-2, POP-3, and IMAP servers, to standalone daemons like
NNTP, SMTP and HTTP, and in tunneling PPP over network sockets without
changes to the source code.
This product includes cryptographic software written by
Eric Young (eay@cryptsoft.com)
chroot keeps stunnel in a chrooted jail. CApath, CRLpath, pid
and exec are located inside the jail and the paths have to be relative
to the directory specified with chroot.
To have libwrap (TCP Wrappers) control effective in a chrooted environment
you also have to copy its configuration files (/etc/hosts.allow and
/etc/hosts.deny) there.
default: no compression
Level is one of the syslog level names or numbers:
emerg (0), alert (1), crit (2), err (3), warning (4), notice (5),
info (6), or debug (7). All logs at the specified level and
all levels numerically less than it will be shown. Use debug = debug or
debug = 7 for the greatest debugging output. The default is notice (5).
The syslog facility ’authpriv’ will be used unless a facility name is supplied.
(Facilities are not supported on Win32.)
Case is ignored for both facilities and levels.
Entropy Gathering Daemon socket to use to feed OpenSSL random number
generator. (Available only if compiled with OpenSSL 0.9.5a or higher)
default: software-only cryptography
Stay in foreground (don’t fork) and log to stderr
instead of via syslog (unless output is specified).
default: background in daemon mode
/dev/stdout device can be used to redirect log messages to the standard
output (for example to log them with daemontools splogger).
If the argument is empty, then no pid file will be created.
pid path is relative to chroot directory if specified.
Number of bytes of data read from random seed files. With SSL versions
less than 0.9.5a, also determines how many bytes of data are considered
sufficient to seed the PRNG. More recent OpenSSL versions have a builtin
function to determine when sufficient randomness is available.
The SSL library will use data from this file first to seed the random
number generator.
default: yes
On Unix: inetd mode service name for TCP Wrapper library.
On NT/2000/XP: NT service name in the Control Panel.
default: stunnel
The values for the linger option are l_onoff:l_linger.
The values for time are tv_sec:tv_usec.
Examples:
socket = l:SO_LINGER=1:60
set one minute timeout for closing local socket
socket = r:TCP_NODELAY=1
turn off the Nagle algorithm for remote sockets
socket = r:SO_OOBINLINE=1
place out-of-band data directly into the
receive data stream for remote sockets
socket = a:SO_REUSEADDR=0
disable address reuse (enabled by default)
socket = a:SO_BINDTODEVICE=lo
only accept connections on loopback interface
default: yes
Note that if you wish to run stunnel in inetd mode (where it
is provided a network socket by a server such as inetd, xinetd,
or tcpserver) then you should read the section entitled INETD MODE
below.
If no host specified, defaults to all IP addresses for the local host.
This is the directory in which stunnel will look for certificates when
using the verify option. Note that the certificates in this directory
should be named XXXXXXXX.0 where XXXXXXXX is the hash value of the cert.
CApath is relative to the chroot directory if specified.
This file contains multiple CA certificates, used with the verify option.
A PEM is always needed in server mode.
Specifying this flag in client mode will use this certificate chain
as a client side certificate chain. Using client side certs is optional.
The certificates must be in PEM format and must be sorted starting with the
certificate to the highest level (root CA).
A colon delimited list of the ciphers to allow in the SSL connection.
For example DES-CBC3-SHA:IDEA-CBC-MD5
default: no (server mode)
If no host specified, defaults to localhost.
This is the directory in which stunnel will look for CRLs when
using the verify option. Note that the CRLs in this directory should
be named XXXXXXXX.0 where XXXXXXXX is the hash value of the CRL.
CRLpath is relative to the chroot directory if specified.
This file contains multiple CRLs, used with the verify option.
exec path is relative to chroot directory if specified.
Quoting is currently not supported.
Arguments are separated with arbitrary number of whitespaces.
Private key is needed to authenticate certificate owner.
Since this file should be kept secret it should only be readable
to its owner. On Unix systems you can use the following command:
chmod 600 keyfile
default: value of cert option
The parameter is the OpenSSL option name as described in the
SSL_CTX_set_options(3ssl) manual, but without SSL_OP_ prefix.
Several options can be used to specify multiple options.
For example for compatibility with erroneous Eudora SSL implementation
the following option can be used:
options = DONT_INSERT_EMPTY_FRAGMENTS
currently supported: cifs, connect, nntp, pop3, smtp
Re-write address to appear as if wrapped daemon is connecting
from the SSL client machine instead of the machine running stunnel.
This option is only available in local mode (exec option)
by LD_PRELOADing env.so shared library or in remote mode (connect
option) on Linux 2.2 kernel compiled with transparent proxy option
and then only in server mode. Note that this option will not combine
with proxy mode (connect) unless the client’s default route to the target
machine lies through the host running stunnel, which cannot be localhost.
level 1 - verify peer certificate if present
level 2 - verify peer certificate
level 3 - verify peer with locally installed certificate
default - no verify
[imapd]
accept = 993
exec = /usr/sbin/imapd
execargs = imapd
If you want to provide tunneling to your pppd daemon on port 2020,
use something like
[vpn]
accept = 2020
exec = /usr/sbin/pppd
execargs = pppd local
pty = yes
If you want to use stunnel in inetd mode to launch your imapd
process, you’d use this stunnel.conf.
Note there must be no [service_name] section.
exec = /usr/sbin/imapd
execargs = imapd
For example, if you have the following line in inetd.conf:
imaps stream tcp nowait root /usr/sbin/stunnel stunnel /etc/stunnel/imaps.conf
In these cases, the inetd-style program is responsible
for binding a network socket (imaps above) and handing
it to stunnel when a connection is received.
Thus you do not want stunnel to have any accept option.
All the Service Level Options should be placed in the
global options section, and no [service_name] section
will be present. See the EXAMPLES section for example
configurations.
Two things are important when generating certificate-key pairs for
stunnel. The private key cannot be encrypted, because the server
has no way to obtain the password from the user. To produce an
unencrypted key add the -nodes option when running the req
command from the OpenSSL kit.
The order of contents of the .pem file is also important.
It should contain the unencrypted private key first, then a signed certificate
(not certificate request).
There should be also empty lines after certificate and private key.
Plaintext certificate information appended on the top of generated certificate
should be discarded. So the file should look like this:
-----BEGIN RSA PRIVATE KEY-----
[encoded key]
-----END RSA PRIVATE KEY-----
[empty line]
-----BEGIN CERTIFICATE-----
[encoded certificate]
-----END CERTIFICATE-----
[empty line]
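As an illustrative sketch (the filenames and the -subj value are hypothetical, not prescribed by stunnel), such a file could be produced with the OpenSSL req command and then assembled in the order described above:

```shell
# -nodes produces the unencrypted key stunnel requires;
# -subj skips the interactive certificate prompts
openssl req -new -x509 -days 365 -nodes \
    -subj "/CN=localhost" \
    -keyout key.pem -out cert.pem

# concatenate in the order stunnel expects: key first, then certificate,
# with the blank separator lines shown in the layout above
{ cat key.pem; echo; cat cert.pem; echo; } > stunnel.pem
```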
Note that on Windows machines that do not have console user interaction
(mouse movements, creating windows, etc) the screen contents are not
variable enough to be sufficient, and you should provide a random file
for use with the RNDfile flag.
Note that the file specified with the RNDfile flag should contain
random data — that means it should contain different information
each time stunnel is run. This is handled automatically
unless the RNDoverwrite flag is used. If you wish to update this file
manually, the openssl rand command in recent versions of OpenSSL,
would be useful.
One important note — if /dev/urandom is available, OpenSSL has a habit of
seeding the PRNG with it even when checking the random state, so on
systems with /dev/urandom you’re likely to use it even though it’s listed
at the very bottom of the list above. This isn’t stunnel’s behaviour, it’s
OpenSSL’s.
"s": 16510,
"text": "\nIf you want to provide tunneling to your pppd daemon on port 2020,\nuse something like\n"
},
{
"code": null,
"e": 16696,
"s": 16600,
"text": "\n [vpn]\n accept = 2020\n exec = /usr/sbin/pppd\n execargs = pppd local\n pty = yes\n"
},
{
"code": null,
"e": 16844,
"s": 16696,
"text": "\nIf you want to use stunnel in inetd mode to launch your imapd\nprocess, you’d use this stunnel.conf.\nNote there must be no [service_name] section.\n"
},
{
"code": null,
"e": 16896,
"s": 16846,
"text": "\n exec = /usr/sbin/imapd\n execargs = imapd\n"
},
{
"code": null,
"e": 16957,
"s": 16896,
"text": "\nFor example, if you have the following line in inetd.conf:\n"
},
{
"code": null,
"e": 17044,
"s": 16959,
"text": "\n imaps stream tcp nowait root /usr/sbin/stunnel stunnel /etc/stunnel/imaps.conf\n"
},
{
"code": null,
"e": 17436,
"s": 17044,
"text": "\nIn these cases, the inetd-style program is responsible\nfor binding a network socket (imaps above) and handing\nit to stunnel when a connection is received.\nThus you do not want stunnel to have any accept option.\nAll the Service Level Options should be placed in the\nglobal options section, and no [service_name] section\nwill be present. See the EXAMPLES section for example\nconfigurations.\n"
},
{
"code": null,
"e": 17722,
"s": 17436,
"text": "\nTwo things are important when generating certificate-key pairs for\nstunnel. The private key cannot be encrypted, because the server\nhas no way to obtain the password from the user. To produce an\nunencrypted key add the -nodes option when running the req\ncommand from the OpenSSL kit.\n"
},
{
"code": null,
"e": 18091,
"s": 17722,
"text": "\nThe order of contents of the .pem file is also important.\nIt should contain the unencrypted private key first, then a signed certificate\n(not certificate request).\nThere should be also empty lines after certificate and private key.\nPlaintext certificate information appended on the top of generated certificate\nshould be discarded. So the file should look like this:\n"
},
{
"code": null,
"e": 18305,
"s": 18093,
"text": "\n -----BEGIN RSA PRIVATE KEY-----\n [encoded key]\n -----END RSA PRIVATE KEY-----\n [empty line]\n -----BEGIN CERTIFICATE-----\n [encoded certificate]\n -----END CERTIFICATE-----\n [empty line]\n"
},
{
"code": null,
"e": 18550,
"s": 18305,
"text": "\nNote that on Windows machines that do not have console user interaction\n(mouse movements, creating windows, etc) the screen contents are not\nvariable enough to be sufficient, and you should provide a random file\nfor use with the RNDfile flag.\n"
},
{
"code": null,
"e": 18894,
"s": 18550,
"text": "\nNote that the file specified with the RNDfile flag should contain\nrandom data — that means it should contain different information\neach time stunnel is run. This is handled automatically\nunless the RNDoverwrite flag is used. If you wish to update this file\nmanually, the openssl rand command in recent versions of OpenSSL,\nwould be useful.\n"
},
{
"code": null,
"e": 19198,
"s": 18894,
"text": "\nOne important note — if /dev/urandom is available, OpenSSL has a habit of\nseeding the PRNG with it even when checking the random state, so on\nsystems with /dev/urandom you’re likely to use it even though it’s listed\nat the very bottom of the list above. This isn’t stunnel’s behaviour, it’s\nOpenSSLs.\n"
}
] |
Image Manipulation in Python. Step by step guide to commonly used... | by Behic Guven | Towards Data Science
|
In this post, I will show you how to edit an image using Python. The process of editing an image is called image manipulation. You might be wondering why you need to do some touches on your images before using them in your projects. There are many reasons for this, but a couple of main reasons can be listed as saving storage space, improving the quality of training, and faster running time. The manipulation techniques that will be covered in this post are image resizing, image brightness, and lastly converting the image color to grayscale. We will do some hands-on practice on an image by testing each of them.
For one image it doesn't make much difference, but when you think of processing thousands of images in your program, a little size change will save you a lot of time and storage.
It’s easy for us to understand what we see when we look at an image, but for machines, it can be a bit challenging. So, to help machines to process the image better, a little brightness can help to improve the accuracy.
When training a machine, grayscale images are doing much better. The reason is since machines see images as a matrix of arrays, it is easier to store black and white image, instead of multiple images with many colors.
These are just some of the image manipulation techniques we will cover in this post. There are many more, but as mentioned earlier, these are the most commonly used techniques and can be applied to any image format. If you are ready, let's get started!
Before we get to coding, let's choose an image that we want to test our code with. This image can be in any format, but I would suggest using 'png' or 'jpg'. Here is the image I chose to play with. The good thing about this image is that no filters or effects have been applied to it.
We will only use one package in total. The main library we will use for image manipulation is called PIL, the Python Imaging Library. PIL is installed as 'pillow'; don't get confused, they are the same thing.
Let’s start by installing the package then.
pip install pillow
It’s time to import the package as libraries so that we can use them.
from PIL import Image, ImageEnhance
I’ve renamed my image as ‘testme.jpg’, and in this step, I am importing the image by assigning it to a variable name called ‘img’. We will do the most commonly used three manipulation techniques. To make things clean and simple, I will use the same image for each technique, but feel free to define different images for each manipulation technique.
img = Image.open("testme.jpg")
The first thing we will try on this image is resizing it. Resizing is very easy using the PIL library. After resizing the image, I want to save it so that we can see the result after we run the program. In the end, to save the result as an image file, we will use the 'save' method.
Before we resize, you can print the current size of your image by the following code.
print(img.size)
Resizing the image:
newsize = (300, 300) img_resized = img.resize(newsize) print(img_resized.size)
Now, we can save the image file in our folder.
img_resized.save("resized.jpg")
Secondly, let's add some light to our image. Especially with shadowed images, adding brightness will help the machine to see the lines in the image better. This is very important when training a machine, because machines accept things as they are, and a little misinformation can cause wrong training. Anyway, here is the code to add some brightness:
# Define enhancerenhancer = ImageEnhance.Brightness(img) img_light = enhancer.enhance(1.8) img_light.save("brightened.jpg")
How about images with too much light, can we do something for them? Of course: just as we can add light, there is also a way to take some light out. This is helpful for images taken on a sunny day. Doing this is very easy; we just change the value between the parentheses. If the number is more than 1, the enhancer will add light. If the value is between 0 and 1, the enhancer will take some light out, which will make the image darker.
As the value gets smaller, the darker it gets. You can play with the value, and watch how it acts.
enhancer = ImageEnhance.Brightness(img) img_light = enhancer.enhance(0.8) img_light.save("darkened.jpg")
As our third image manipulation technique, we will see how to convert an image to grayscale, in other words, black and white. Storing a grayscale image instead of a multi-colored image is more efficient, and it's easier for a machine to understand. Especially when training a machine to learn a specific object in an image, grayscale is one of the most commonly used techniques to start with.
img = Image.open('testme.jpg') img = img.convert('L') img.save('grayscaled.jpg')
These are some of the most commonly used image manipulation techniques. These tricks will help you edit images in a faster and easier way. In this exercise we edited just one image, but it's possible to run the same code in a loop; this way you will be able to edit thousands of images in a couple of lines of code.
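The loop mentioned above can be sketched as follows. This is a minimal batch-editing sketch, assuming Pillow is installed; the in-memory images created with Image.new stand in for files you would load from disk, and the output file names in the comment are hypothetical.

```python
from PIL import Image, ImageEnhance

# Stand-ins for images loaded from disk; in a real run these would be
# Image.open(name) calls over a list of file names.
images = [Image.new("RGB", (800, 600), color) for color in ("red", "green", "blue")]

processed = []
for i, img in enumerate(images):
    img = img.resize((300, 300))                     # 1. resize
    img = ImageEnhance.Brightness(img).enhance(1.8)  # 2. brighten
    img = img.convert("L")                           # 3. grayscale
    processed.append(img)
    # img.save("edited_%d.jpg" % i)  # hypothetical output names

print(len(processed), processed[0].size, processed[0].mode)
```

The same three steps run on every image, so extending the list is all it takes to process a whole folder.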
I am Behic Guven, and I love sharing stories on creativity, programming, motivation, and life.
Follow my blog and Towards Data Science to stay inspired.
|
[
{
"code": null,
"e": 663,
"s": 46,
"text": "In this post, I will show you how to edit an image using Python. The process of editing an image is called image manipulation. You might be wondering why you need to do some touches on your images before using them in your projects. There are many reasons for this, but a couple of main reasons can be listed as saving storage space, improving the quality of training, and faster running time. The manipulation techniques that will be covered in this post are image resizing, image brightness, and lastly converting the image color to grayscale. We will do some hands-on practice on an image by testing each of them."
},
{
"code": null,
"e": 839,
"s": 663,
"text": "For one image it doesn’t make much difference but when you think of processing thousands of images in your program, little size change will save you a lot of time and storage."
},
{
"code": null,
"e": 1059,
"s": 839,
"text": "It’s easy for us to understand what we see when we look at an image, but for machines, it can be a bit challenging. So, to help machines to process the image better, a little brightness can help to improve the accuracy."
},
{
"code": null,
"e": 1277,
"s": 1059,
"text": "When training a machine, grayscale images are doing much better. The reason is since machines see images as a matrix of arrays, it is easier to store black and white image, instead of multiple images with many colors."
},
{
"code": null,
"e": 1529,
"s": 1277,
"text": "These are just some of the image manipulation techniques we will cover in this post. There are many more, but as mentioned earlier, these are the most commonly used techniques and can be applied to any format of image. If you are ready, let’s started!"
},
{
"code": null,
"e": 1796,
"s": 1529,
"text": "Before we get to coding, let’s choose an image that we want to test our codes with. This image can be in any format, but I would suggest using ‘png’ or ‘jpg’. Here is the image I choose to play with. The good thing about this image is no filters or effects are used."
},
{
"code": null,
"e": 2026,
"s": 1796,
"text": "We will only use one package in total. The main library that we will use for image manipulation is called PIL, which is the image processing library. PIL will be installed as ‘pillow’, don’t get confused, they are the same thing."
},
{
"code": null,
"e": 2070,
"s": 2026,
"text": "Let’s start by installing the package then."
},
{
"code": null,
"e": 2089,
"s": 2070,
"text": "pip install pillow"
},
{
"code": null,
"e": 2159,
"s": 2089,
"text": "It’s time to import the package as libraries so that we can use them."
},
{
"code": null,
"e": 2195,
"s": 2159,
"text": "from PIL import Image, ImageEnhance"
},
{
"code": null,
"e": 2544,
"s": 2195,
"text": "I’ve renamed my image as ‘testme.jpg’, and in this step, I am importing the image by assigning it to a variable name called ‘img’. We will do the most commonly used three manipulation techniques. To make things clean and simple, I will use the same image for each technique, but feel free to define different images for each manipulation technique."
},
{
"code": null,
"e": 2575,
"s": 2544,
"text": "img = Image.open(\"testme.jpg\")"
},
{
"code": null,
"e": 2907,
"s": 2575,
"text": "The first thing we will try on this image is resizing it. Resizing is very easy using PIL library, which is an image processing library as mentioned earlier. After resizing the image, I want to save it so that we can see the image after we run the program. And the end to save the image as an image file, we will use ‘save’ method."
},
{
"code": null,
"e": 2993,
"s": 2907,
"text": "Before we resize, you can print the current size of your image by the following code."
},
{
"code": null,
"e": 3009,
"s": 2993,
"text": "print(img.size)"
},
{
"code": null,
"e": 3029,
"s": 3009,
"text": "Resizing the image:"
},
{
"code": null,
"e": 3108,
"s": 3029,
"text": "newsize = (300, 300) img_resized = img.resize(newsize) print(img_resized.size)"
},
{
"code": null,
"e": 3155,
"s": 3108,
"text": "Now, we can save the image file in our folder."
},
{
"code": null,
"e": 3187,
"s": 3155,
"text": "img_resized.save(\"resized.jpg\")"
},
{
"code": null,
"e": 3534,
"s": 3187,
"text": "Secondly, let’s add some light to our image. Especially with shadowed images, adding brightness with help the machine to see the lines in the image better. This is very important when training a machine, because machines accept things as it is, and little misinformation can cause wrong training. Anyways, here is the code to add some brightness:"
},
{
"code": null,
"e": 3658,
"s": 3534,
"text": "# Define enhancerenhancer = ImageEnhance.Brightness(img) img_light = enhancer.enhance(1.8) img_light.save(\"brightened.jpg\")"
},
{
"code": null,
"e": 4108,
"s": 3658,
"text": "How about for the images with too much light, can we do something for them? Of course, like adding light, there is also a way to get some light out. This will be helpful for images taken under a sunny day. To do this is very easy, we will just change the value between the parenthesis. If the number is more than 1, the enhancer will add light. If the value is between 0 and 1, the enhancer will get some light out, which will make the image darker."
},
{
"code": null,
"e": 4207,
"s": 4108,
"text": "As the value gets smaller, the darker it gets. You can play with the value, and watch how it acts."
},
{
"code": null,
"e": 4312,
"s": 4207,
"text": "enhancer = ImageEnhance.Brightness(img) img_light = enhancer.enhance(0.8) img_light.save(\"darkened.jpg\")"
},
{
"code": null,
"e": 4723,
"s": 4312,
"text": "As our third image manipulation techniques, we will see how to convert an image to greyscale. In other words, we can say black and white. Storing a grayscale image instead of a multiple colored images is more efficient, and it’s easier for a machine to understand. Especially, when training the machine to learn a specific object in an image, greyscale is one of the most commonly used technique to start with."
},
{
"code": null,
"e": 4804,
"s": 4723,
"text": "img = Image.open('testme.jpg') img = img.convert('L') img.save('grayscaled.jpg')"
},
{
"code": null,
"e": 5119,
"s": 4804,
"text": "These are some of the most commonly used image manipulation techniques. These tricks will help you to edit images in a faster and easier way. In this exercise, we edited just one image, but it’s possible to run the same code in a loop, this way you will be able to edit thousands of images in couple lines of code."
},
{
"code": null,
"e": 5214,
"s": 5119,
"text": "I am Behic Guven, and I love sharing stories on creativity, programming, motivation, and life."
}
] |
Cartooning an Image using OpenCV - Python - GeeksforGeeks
|
22 Jul, 2021
Computer Vision as you know (or even if you don't) is a very powerful tool with immense possibilities. So, when I set out to prepare a comic about a friend's college life, I soon realized that I needed something that would reduce the effort of actually painting it while retaining the quality, and I came up with the following solution.
Let’s see the result first –
Original Image
Cartooned Version
Edges obtained from the image (Adaptive Threshold result)
Let’s see the code:
import cv2

class Cartoonizer:
    """Cartoonizer effect

    A class that applies a cartoon effect to an image.
    The class uses a bilateral filter and adaptive thresholding
    to create a cartoon effect.
    """

    def render(self, img_rgb):
        img_rgb = cv2.imread(img_rgb)
        img_rgb = cv2.resize(img_rgb, (1366, 768))
        numDownSamples = 2        # number of downscaling steps
        numBilateralFilters = 50  # number of bilateral filtering steps

        # -- STEP 1 --
        # downsample image using Gaussian pyramid
        img_color = img_rgb
        for _ in range(numDownSamples):
            img_color = cv2.pyrDown(img_color)

        # repeatedly apply a small bilateral filter instead of
        # applying one large filter
        for _ in range(numBilateralFilters):
            img_color = cv2.bilateralFilter(img_color, 9, 9, 7)

        # upsample image to original size
        for _ in range(numDownSamples):
            img_color = cv2.pyrUp(img_color)

        # -- STEPS 2 and 3 --
        # convert to grayscale and apply median blur
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
        img_blur = cv2.medianBlur(img_gray, 3)

        # -- STEP 4 --
        # detect and enhance edges
        img_edge = cv2.adaptiveThreshold(img_blur, 255,
                                         cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 9, 2)

        # -- STEP 5 --
        # convert edges back to color so they can be bit-ANDed
        # with the color image
        (x, y, z) = img_color.shape
        img_edge = cv2.resize(img_edge, (y, x))
        img_edge = cv2.cvtColor(img_edge, cv2.COLOR_GRAY2RGB)
        cv2.imwrite("edge.png", img_edge)
        return cv2.bitwise_and(img_color, img_edge)


tmp_canvas = Cartoonizer()
file_name = "Screenshot.png"  # file name will come here
res = tmp_canvas.render(file_name)
cv2.imwrite("Cartoon version.jpg", res)
cv2.imshow("Cartoon version", res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Explanation:
Basically, we are going to use a series of filters and image conversions.
First we downscale the image and then apply a bilateral filter to get a cartoon flavor. Then we upscale the image back again.
The next step is getting a blurred version of the original image. Now, we don't want the colours to interfere in this process; we only want blurring of the boundaries. For this, we first convert the image to grayscale and then apply the median blur filter.
The next step is to identify the edges in the image and then add this to the previously modified images to get a sketch-pen effect. For this we first use an adaptive threshold. You can experiment with other types of thresholding techniques too, because Computer Vision is all about experimenting. In step 5, we combine the final images obtained from the previous steps.
# importing libraries
import cv2
import numpy as np

# reading image
img = cv2.imread("koala.jpeg")

# Edges
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY, 9, 9)

# Cartoonization
color = cv2.bilateralFilter(img, 9, 250, 250)
cartoon = cv2.bitwise_and(color, color, mask=edges)

cv2.imshow("Image", img)
cv2.imshow("edges", edges)
cv2.imshow("Cartoon", cartoon)
cv2.waitKey(0)
cv2.destroyAllWindows()
What can you do? Experiment! Try changing the downsample steps, or the number of bilateral filters applied, or even the size of the filter, or the threshold technique used to get the edges. One thing to keep in mind: this process is a general one and will not give the best result for every image. That's why you should experiment with different values to get a feel for the whole process.
That’s all from my side! Auf Wiedersehen!
About the author:
Vishwesh Shrimali is an undergraduate Mechanical Engineering student at BITS Pilani. He fulfills all the requirements not taught in his branch - white-hat hacker, network security operator, and ex-competitive programmer. As a firm believer in the power of Python, most of his work has been in the same language. Whenever he gets some time apart from programming, attending classes, and watching CSI: Cyber, he goes for a long walk and plays guitar in silence. His motto in life is – “Enjoy your life, ‘cause it’s worth enjoying!”
If you also wish to showcase your blog here, please see GBlog for guest blog writing on GeeksforGeeks.
Sourabh_Sinha
Image-Processing
OpenCV
Project
Python
|
[
{
"code": null,
"e": 24097,
"s": 24069,
"text": "\n22 Jul, 2021"
},
{
"code": null,
"e": 24437,
"s": 24097,
"text": "Computer Vision as you know (or even if you don’t) is a very powerful tool with immense possibilities. So, when I set up to prepare a comic of one of my friend’s college life, I soon realized that I needed something that would reduce my efforts of actually painting it but will retain the quality and I came up with the following solution."
},
{
"code": null,
"e": 24466,
"s": 24437,
"text": "Let’s see the result first –"
},
{
"code": null,
"e": 24481,
"s": 24466,
"text": "Original Image"
},
{
"code": null,
"e": 24501,
"s": 24483,
"text": "Cartooned Version"
},
{
"code": null,
"e": 24559,
"s": 24501,
"text": "Edges obtained from the image (Adaptive Threshold result)"
},
{
"code": null,
"e": 24579,
"s": 24559,
"text": "Let’s see the code:"
},
{
"code": "class Cartoonizer: \"\"\"Cartoonizer effect A class that applies a cartoon effect to an image. The class uses a bilateral filter and adaptive thresholding to create a cartoon effect. \"\"\" def __init__(self): pass def render(self, img_rgb): img_rgb = cv2.imread(img_rgb) img_rgb = cv2.resize(img_rgb, (1366,768)) numDownSamples = 2 # number of downscaling steps numBilateralFilters = 50 # number of bilateral filtering steps # -- STEP 1 -- # downsample image using Gaussian pyramid img_color = img_rgb for _ in range(numDownSamples): img_color = cv2.pyrDown(img_color) #cv2.imshow(\"downcolor\",img_color) #cv2.waitKey(0) # repeatedly apply small bilateral filter instead of applying # one large filter for _ in range(numBilateralFilters): img_color = cv2.bilateralFilter(img_color, 9, 9, 7) #cv2.imshow(\"bilateral filter\",img_color) #cv2.waitKey(0) # upsample image to original size for _ in range(numDownSamples): img_color = cv2.pyrUp(img_color) #cv2.imshow(\"upscaling\",img_color) #cv2.waitKey(0) # -- STEPS 2 and 3 -- # convert to grayscale and apply median blur img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY) img_blur = cv2.medianBlur(img_gray, 3) #cv2.imshow(\"grayscale+median blur\",img_color) #cv2.waitKey(0) # -- STEP 4 -- # detect and enhance edges img_edge = cv2.adaptiveThreshold(img_blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 2) #cv2.imshow(\"edge\",img_edge) #cv2.waitKey(0) # -- STEP 5 -- # convert back to color so that it can be bit-ANDed with color image (x,y,z) = img_color.shape img_edge = cv2.resize(img_edge,(y,x)) img_edge = cv2.cvtColor(img_edge, cv2.COLOR_GRAY2RGB) cv2.imwrite(\"edge.png\",img_edge) #cv2.imshow(\"step 5\", img_edge) #cv2.waitKey(0) #img_edge = cv2.resize(img_edge,(i for i in img_color.shape[:2])) #print img_edge.shape, img_color.shape return cv2.bitwise_and(img_color, img_edge) tmp_canvas = Cartoonizer() file_name = \"Screenshot.png\" #File_name will come here res = tmp_canvas.render(file_name) 
cv2.imwrite(\"Cartoon version.jpg\", res) cv2.imshow(\"Cartoon version\", res) cv2.waitKey(0) cv2.destroyAllWindows() ",
"e": 27162,
"s": 24579,
"text": null
},
{
"code": null,
"e": 27175,
"s": 27162,
"text": "Explanation:"
},
{
"code": null,
"e": 27249,
"s": 27175,
"text": "Basically, we are going to use a series of filters and image conversions."
},
{
"code": null,
"e": 27368,
"s": 27249,
"text": "First we downscale the image and then apply bilateral filter to get a cartoon flavor. Then again we upscale the image."
},
{
"code": null,
"e": 27630,
"s": 27368,
"text": "Next step is getting a blurred version of the original image. Now, we don’t want the colours to interfere in this process. We only want the blurring of the boundaries. For this, we first convert the image to gray – scale and then we apply the media blur filter."
},
{
"code": null,
"e": 27997,
"s": 27630,
"text": "Next step is to identify the edges in the image and then add this to the previously modified images to get a sketch pen effect. For this first we are using adaptive threshold. You can experiment with other types of threshold techniques also. Because Computer Vision is all about experimenting. In step 5, we compile the final images obtained from the previous steps."
},
{
"code": "# importing librariesimport cv2import numpy as np # reading image img = cv2.imread(\"koala.jpeg\") # Edgesgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)gray = cv2.medianBlur(gray, 5)edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9) # Cartoonizationcolor = cv2.bilateralFilter(img, 9, 250, 250)cartoon = cv2.bitwise_and(color, color, mask=edges) cv2.imshow(\"Image\", img)cv2.imshow(\"edges\", edges)cv2.imshow(\"Cartoon\", cartoon)cv2.waitKey(0)cv2.destroyAllWindows()",
"e": 28551,
"s": 27997,
"text": null
},
{
"code": null,
"e": 28946,
"s": 28551,
"text": "What you can do?Experiment! Try changing the down sample steps, or the number of bilateral filters applied, or even the size of the filter, or the threshold technique to get the edges. Now, one thing to keep in mind. This process is a general one and will not give the best result for different images. That’s why, you should experiment with different values to get a feel of the whole process."
},
{
"code": null,
"e": 28988,
"s": 28946,
"text": "That’s all from my side! Auf Wiedersehen!"
},
{
"code": null,
"e": 29006,
"s": 28988,
"text": "About the author:"
},
{
"code": null,
"e": 29528,
"s": 29006,
"text": "Vishwesh Shrimali is an Undergraduate Mechanical Engineering student at BITS Pilani. He fulfills all the requirements not taught in his branch- white-hat hacker, network security operator, and an ex – Competitive Programmer. As a firm believer in power of Python, his majority work has been in the same language. Whenever he get some time apart from programming, attending classes, watching CSI Cyber, he go for a long walk and play guitar in silence. His motto of life is – “Enjoy your life, ‘cause it’s worth enjoying!”"
},
{
"code": null,
"e": 29631,
"s": 29528,
"text": "If you also wish to showcase your blog here, please see GBlog for guest blog writing on GeeksforGeeks."
},
{
"code": null,
"e": 29645,
"s": 29631,
"text": "Sourabh_Sinha"
},
{
"code": null,
"e": 29662,
"s": 29645,
"text": "Image-Processing"
},
{
"code": null,
"e": 29669,
"s": 29662,
"text": "OpenCV"
},
{
"code": null,
"e": 29677,
"s": 29669,
"text": "Project"
},
{
"code": null,
"e": 29684,
"s": 29677,
"text": "Python"
}
] |
How to match a whitespace in python using Regular Expression?
|
The following code matches whitespaces in the given string.
import re
result = re.search(r'[\s]', 'The Indian Express')
print(result)

<re.Match object; span=(3, 4), match=' '>
The following code finds all whitespaces in the given string and prints them
import re
result = re.findall(r'[\s]', 'The Indian Express')
print(result)
[' ', ' ']
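A few related whitespace patterns (these examples are additions, not part of the original snippet): \s matches any single whitespace character, while \s+ matches a whole run of whitespace, which is handy for splitting and normalising text.

```python
import re

text = "The\tIndian  Express\n"

print(re.findall(r"\s", text))            # each whitespace char: tab, spaces, newline
print(re.split(r"\s+", text.strip()))     # ['The', 'Indian', 'Express']
print(re.sub(r"\s+", " ", text).strip())  # 'The Indian Express'
```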
|
[
{
"code": null,
"e": 1122,
"s": 1062,
"text": "The following code matches whitespaces in the given string."
},
{
"code": null,
"e": 1195,
"s": 1122,
"text": "import re\nresult = re.search(r'[\\s]', 'The Indian Express')\nprint result"
},
{
"code": null,
"e": 1241,
"s": 1195,
"text": "<_sre.SRE_Match object at 0x0000000005106648>"
},
{
"code": null,
"e": 1318,
"s": 1241,
"text": "The following code finds all whitespaces in the given string and prints them"
},
{
"code": null,
"e": 1392,
"s": 1318,
"text": "import re\nresult = re.findall(r'[\\s]', 'The Indian Express')\nprint result"
},
{
"code": null,
"e": 1403,
"s": 1392,
"text": "[' ', ' ']"
}
] |
MATLAB - Integration
|
Integration deals with two essentially different types of problems.
In the first type, derivative of a function is given and we want to find the function. Therefore, we basically reverse the process of differentiation. This reverse process is known as anti-differentiation, or finding the primitive function, or finding an indefinite integral.
The second type of problems involve adding up a very large number of very small quantities and then taking a limit as the size of the quantities approaches zero, while the number of terms tend to infinity. This process leads to the definition of the definite integral.
Definite integrals are used for finding area, volume, center of gravity, moment of inertia, work done by a force, and in numerous other applications.
By definition, if the derivative of a function f(x) is f'(x), then we say that an indefinite integral of f'(x) with respect to x is f(x). For example, since the derivative (with respect to x) of x^2 is 2x, we can say that an indefinite integral of 2x is x^2.
In symbols −
(x^2)' = 2x, therefore,
∫ 2x dx = x^2.
Indefinite integral is not unique, because the derivative of x^2 + c, for any value of a constant c, will also be 2x.
This is expressed in symbols as −
∫ 2x dx = x^2 + c.
where c is called an 'arbitrary constant'.
MATLAB provides an int command for calculating the integral of an expression. To derive an expression for the indefinite integral of a function, we write −
int(f);
For example, from our previous example −
syms x
int(2*x)
MATLAB executes the above statement and returns the following result −
ans =
x^2
In this example, let us find the integral of some commonly used expressions. Create a script file and type the following code in it −
syms x n
int(sym(x^n))
f = 'sin(n*t)'
int(sym(f))
syms a t
int(a*cos(pi*t))
int(a^x)
When you run the file, it displays the following result −
ans =
piecewise([n == -1, log(x)], [n ~= -1, x^(n + 1)/(n + 1)])
f =
sin(n*t)
ans =
-cos(n*t)/n
ans =
(a*sin(pi*t))/pi
ans =
a^x/log(a)
Create a script file and type the following code in it −
syms x n
int(cos(x))
int(exp(x))
int(log(x))
int(x^-1)
int(x^5*cos(5*x))
pretty(int(x^5*cos(5*x)))
int(x^-5)
int(sec(x)^2)
pretty(int(1 - 10*x + 9 * x^2))
int((3 + 5*x -6*x^2 - 7*x^3)/2*x^2)
pretty(int((3 + 5*x -6*x^2 - 7*x^3)/2*x^2))
Note that the pretty function returns an expression in a more readable format.
When you run the file, it displays the following result −
ans =
sin(x)
ans =
exp(x)
ans =
x*(log(x) - 1)
ans =
log(x)
ans =
(24*cos(5*x))/3125 + (24*x*sin(5*x))/625 - (12*x^2*cos(5*x))/125 + (x^4*cos(5*x))/5 - (4*x^3*sin(5*x))/25 + (x^5*sin(5*x))/5
2 4
24 cos(5 x) 24 x sin(5 x) 12 x cos(5 x) x cos(5 x)
----------- + ------------- - -------------- + ------------
3125 625 125 5
3 5
4 x sin(5 x) x sin(5 x)
------------- + -----------
25 5
ans =
-1/(4*x^4)
ans =
tan(x)
2
x (3 x - 5 x + 1)
ans =
- (7*x^6)/12 - (3*x^5)/5 + (5*x^4)/8 + x^3/2
6 5 4 3
7 x 3 x 5 x x
- ---- - ---- + ---- + --
12 5 8 2
By definition, the definite integral is basically the limit of a sum. We use definite integrals to find areas such as the area between a curve and the x-axis and the area between two curves. Definite integrals can also be used in other situations, where the quantity required can be expressed as the limit of a sum.
The int function can be used for definite integration by passing the limits over which you want to calculate the integral.
To calculate the definite integral of an expression from a to b,
we write,
int(x, a, b)
For example, to calculate the value of ∫ x dx from 4 to 9, we write −
int(x, 4, 9)
MATLAB executes the above statement and returns the following result −
ans =
65/2
Following is the Octave equivalent of the above calculation −
pkg load symbolic
symbols
x = sym("x");
f = x;
c = [1, 0];
integral = polyint(c);
a = polyval(integral, 9) - polyval(integral, 4);
display('Area: '), disp(double(a));
Octave executes the code and returns the following result −
Area:
32.500
An alternative solution can be given using quad() function provided by Octave as follows −
pkg load symbolic
symbols
f = inline("x");
[a, ierror, nfneval] = quad(f, 4, 9);
display('Area: '), disp(double(a));
Octave executes the code and returns the following result −
Area:
32.500
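As a rough cross-check outside MATLAB/Octave (a Python sketch, not part of the tutorial), composite Simpson's rule gives the same value for ∫ x dx from 4 to 9:

```python
# Composite Simpson's rule; for a linear integrand the result is exact.
def simpson(f, a, b, n=100):
    """Approximate the integral of f over [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

area = simpson(lambda x: x, 4, 9)
print(round(area, 4))  # 32.5, matching 65/2 above
```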
Let us calculate the area enclosed between the x-axis, the curve y = x^3 − 2x + 5 and the ordinates x = 1 and x = 2.
The required area is given by −
Create a script file and type the following code −
f = x^3 - 2*x +5;
a = int(f, 1, 2)
display('Area: '), disp(double(a));
When you run the file, it displays the following result −
a =
23/4
Area:
5.7500
Following is the Octave equivalent of the above calculation −
pkg load symbolic
symbols
x = sym("x");
f = x^3 - 2*x +5;
c = [1, 0, -2, 5];
integral = polyint(c);
a = polyval(integral, 2) - polyval(integral, 1);
display('Area: '), disp(double(a));
Octave executes the code and returns the following result −
Area:
5.7500
An alternative solution can be given using quad() function provided by Octave as follows −
pkg load symbolic
symbols
x = sym("x");
f = inline("x^3 - 2*x +5");
[a, ierror, nfneval] = quad(f, 1, 2);
display('Area: '), disp(double(a));
Octave executes the code and returns the following result −
Area:
5.7500
Find the area under the curve: f(x) = x^2 cos(x) for −4 ≤ x ≤ 9.
Create a script file and write the following code −
f = x^2*cos(x);
ezplot(f, [-4,9])
a = int(f, -4, 9)
disp('Area: '), disp(double(a));
When you run the file, MATLAB plots the graph −
The output is given below −
a =
8*cos(4) + 18*cos(9) + 14*sin(4) + 79*sin(9)
Area:
0.3326
Following is the Octave equivalent of the above calculation −
pkg load symbolic
symbols
x = sym("x");
f = inline("x^2*cos(x)");
ezplot(f, [-4,9])
print -deps graph.eps
[a, ierror, nfneval] = quad(f, -4, 9);
display('Area: '), disp(double(a));
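The closed-form answer reported by int() above can likewise be cross-checked numerically. The following is an illustrative Python sketch (not part of the MATLAB/Octave material):

```python
import math

# Composite Simpson's rule with n (even) subintervals.
def simpson(f, a, b, n=1000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# 8*cos(4) + 18*cos(9) + 14*sin(4) + 79*sin(9), as reported by int() above
closed_form = 8*math.cos(4) + 18*math.cos(9) + 14*math.sin(4) + 79*math.sin(9)
numeric = simpson(lambda x: x * x * math.cos(x), -4, 9)
print(round(closed_form, 4), round(numeric, 4))  # 0.3326 0.3326
```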
30 Lectures
4 hours
Nouman Azam
127 Lectures
12 hours
Nouman Azam
17 Lectures
3 hours
Sanjeev
37 Lectures
5 hours
TELCOMA Global
22 Lectures
4 hours
TELCOMA Global
18 Lectures
3 hours
Phinite Academy
Print
Add Notes
Bookmark this page
|
[
{
"code": null,
"e": 2209,
"s": 2141,
"text": "Integration deals with two essentially different types of problems."
},
{
"code": null,
"e": 2485,
"s": 2209,
"text": "In the first type, derivative of a function is given and we want to find the function. Therefore, we basically reverse the process of differentiation. This reverse process is known as anti-differentiation, or finding the primitive function, or finding an indefinite integral."
},
{
"code": null,
"e": 2761,
"s": 2485,
"text": "In the first type, derivative of a function is given and we want to find the function. Therefore, we basically reverse the process of differentiation. This reverse process is known as anti-differentiation, or finding the primitive function, or finding an indefinite integral."
},
{
"code": null,
"e": 3030,
"s": 2761,
"text": "The second type of problems involve adding up a very large number of very small quantities and then taking a limit as the size of the quantities approaches zero, while the number of terms tend to infinity. This process leads to the definition of the definite integral."
},
{
"code": null,
"e": 3299,
"s": 3030,
"text": "The second type of problems involve adding up a very large number of very small quantities and then taking a limit as the size of the quantities approaches zero, while the number of terms tend to infinity. This process leads to the definition of the definite integral."
},
{
"code": null,
"e": 3449,
"s": 3299,
"text": "Definite integrals are used for finding area, volume, center of gravity, moment of inertia, work done by a force, and in numerous other applications."
},
{
"code": null,
"e": 3707,
"s": 3449,
"text": "By definition, if the derivative of a function f(x) is f'(x), then we say that an indefinite integral of f'(x) with respect to x is f(x). For example, since the derivative (with respect to x) of x2 is 2x, we can say that an indefinite integral of 2x is x2."
},
{
"code": null,
"e": 3720,
"s": 3707,
"text": "In symbols −"
},
{
"code": null,
"e": 3744,
"s": 3720,
"text": "f'(x2) = 2x, therefore,"
},
{
"code": null,
"e": 3757,
"s": 3744,
"text": "∫ 2xdx = x2."
},
{
"code": null,
"e": 3870,
"s": 3757,
"text": "Indefinite integral is not unique, because derivative of x2 + c, for any value of a constant c, will also be 2x."
},
{
"code": null,
"e": 3904,
"s": 3870,
"text": "This is expressed in symbols as −"
},
{
"code": null,
"e": 3921,
"s": 3904,
"text": "∫ 2xdx = x2 + c."
},
{
"code": null,
"e": 3965,
"s": 3921,
"text": "Where, c is called an 'arbitrary constant'."
},
{
"code": null,
"e": 4117,
"s": 3965,
"text": "MATLAB provides an int command for calculating integral of an expression. To derive an expression for the indefinite integral of a function, we write −"
},
{
"code": null,
"e": 4126,
"s": 4117,
"text": "int(f);\n"
},
{
"code": null,
"e": 4167,
"s": 4126,
"text": "For example, from our previous example −"
},
{
"code": null,
"e": 4184,
"s": 4167,
"text": "syms x \nint(2*x)"
},
{
"code": null,
"e": 4255,
"s": 4184,
"text": "MATLAB executes the above statement and returns the following result −"
},
{
"code": null,
"e": 4269,
"s": 4255,
"text": "ans =\n x^2\n"
},
{
"code": null,
"e": 4403,
"s": 4269,
"text": "In this example, let us find the integral of some commonly used expressions. Create a script file and type the following code in it −"
},
{
"code": null,
"e": 4489,
"s": 4403,
"text": "syms x n\n\nint(sym(x^n))\nf = 'sin(n*t)'\nint(sym(f))\nsyms a t\nint(a*cos(pi*t))\nint(a^x)"
},
{
"code": null,
"e": 4547,
"s": 4489,
"text": "When you run the file, it displays the following result −"
},
{
"code": null,
"e": 4702,
"s": 4547,
"text": "ans =\n piecewise([n == -1, log(x)], [n ~= -1, x^(n + 1)/(n + 1)])\nf =\nsin(n*t)\nans =\n -cos(n*t)/n\n ans =\n (a*sin(pi*t))/pi\n ans =\n a^x/log(a)\n"
},
{
"code": null,
"e": 4759,
"s": 4702,
"text": "Create a script file and type the following code in it −"
},
{
"code": null,
"e": 4996,
"s": 4759,
"text": "syms x n\nint(cos(x))\nint(exp(x))\nint(log(x))\nint(x^-1)\nint(x^5*cos(5*x))\npretty(int(x^5*cos(5*x)))\n\nint(x^-5)\nint(sec(x)^2)\npretty(int(1 - 10*x + 9 * x^2))\n\nint((3 + 5*x -6*x^2 - 7*x^3)/2*x^2)\npretty(int((3 + 5*x -6*x^2 - 7*x^3)/2*x^2))"
},
{
"code": null,
"e": 5075,
"s": 4996,
"text": "Note that the pretty function returns an expression in a more readable format."
},
{
"code": null,
"e": 5133,
"s": 5075,
"text": "When you run the file, it displays the following result −"
},
{
"code": null,
"e": 5940,
"s": 5133,
"text": "ans =\n sin(x)\n \nans =\n exp(x)\n \nans =\n x*(log(x) - 1)\n \nans =\n log(x)\n \nans =\n(24*cos(5*x))/3125 + (24*x*sin(5*x))/625 - (12*x^2*cos(5*x))/125 + (x^4*cos(5*x))/5 - (4*x^3*sin(5*x))/25 + (x^5*sin(5*x))/5\n 2 4 \n 24 cos(5 x) 24 x sin(5 x) 12 x cos(5 x) x cos(5 x) \n ----------- + ------------- - -------------- + ------------ \n 3125 625 125 5 \n \n 3 5 \n \n 4 x sin(5 x) x sin(5 x) \n ------------- + ----------- \n 25 5\n \nans =\n-1/(4*x^4)\n \nans =\ntan(x)\n 2 \n x (3 x - 5 x + 1)\n \nans = \n- (7*x^6)/12 - (3*x^5)/5 + (5*x^4)/8 + x^3/2\n \n 6 5 4 3 \n 7 x 3 x 5 x x \n - ---- - ---- + ---- + -- \n 12 5 8 2\n\n"
},
{
"code": null,
"e": 6252,
"s": 5940,
"text": "By definition, definite integral is basically the limit of a sum. We use definite integrals to find areas such as the area between a curve and the x-axis and the area between two curves. Definite integrals can also be used in other situations, where the quantity required can be expressed as the limit of a sum."
},
{
"code": null,
"e": 6375,
"s": 6252,
"text": "The int function can be used for definite integration by passing the limits over which you want to calculate the integral."
},
{
"code": null,
"e": 6388,
"s": 6375,
"text": "To calculate"
},
{
"code": null,
"e": 6398,
"s": 6388,
"text": "we write,"
},
{
"code": null,
"e": 6412,
"s": 6398,
"text": "int(x, a, b)\n"
},
{
"code": null,
"e": 6463,
"s": 6412,
"text": "For example, to calculate the value of we write −"
},
{
"code": null,
"e": 6476,
"s": 6463,
"text": "int(x, 4, 9)"
},
{
"code": null,
"e": 6547,
"s": 6476,
"text": "MATLAB executes the above statement and returns the following result −"
},
{
"code": null,
"e": 6562,
"s": 6547,
"text": "ans =\n 65/2\n"
},
{
"code": null,
"e": 6620,
"s": 6562,
"text": "Following is Octave equivalent of the above calculation −"
},
{
"code": null,
"e": 6789,
"s": 6620,
"text": "pkg load symbolic\nsymbols\n\nx = sym(\"x\");\nf = x;\nc = [1, 0];\nintegral = polyint(c);\n\na = polyval(integral, 9) - polyval(integral, 4);\ndisplay('Area: '), disp(double(a));"
},
{
"code": null,
"e": 6849,
"s": 6789,
"text": "Octave executes the code and returns the following result −"
},
{
"code": null,
"e": 6868,
"s": 6849,
"text": "Area: \n\n 32.500\n"
},
{
"code": null,
"e": 6959,
"s": 6868,
"text": "An alternative solution can be given using quad() function provided by Octave as follows −"
},
{
"code": null,
"e": 7078,
"s": 6959,
"text": "pkg load symbolic\nsymbols\n\nf = inline(\"x\");\n[a, ierror, nfneval] = quad(f, 4, 9);\n\ndisplay('Area: '), disp(double(a));"
},
{
"code": null,
"e": 7138,
"s": 7078,
"text": "Octave executes the code and returns the following result −"
},
{
"code": null,
"e": 7156,
"s": 7138,
"text": "Area: \n 32.500\n"
},
{
"code": null,
"e": 7272,
"s": 7156,
"text": "Let us calculate the area enclosed between the x-axis, and the curve y = x3−2x+5 and the ordinates x = 1 and x = 2."
},
{
"code": null,
"e": 7304,
"s": 7272,
"text": "The required area is given by −"
},
{
"code": null,
"e": 7355,
"s": 7304,
"text": "Create a script file and type the following code −"
},
{
"code": null,
"e": 7426,
"s": 7355,
"text": "f = x^3 - 2*x +5;\na = int(f, 1, 2)\ndisplay('Area: '), disp(double(a));"
},
{
"code": null,
"e": 7484,
"s": 7426,
"text": "When you run the file, it displays the following result −"
},
{
"code": null,
"e": 7511,
"s": 7484,
"text": "a =\n23/4\nArea: \n 5.7500\n"
},
{
"code": null,
"e": 7569,
"s": 7511,
"text": "Following is Octave equivalent of the above calculation −"
},
{
"code": null,
"e": 7756,
"s": 7569,
"text": "pkg load symbolic\nsymbols\n\nx = sym(\"x\");\nf = x^3 - 2*x +5;\nc = [1, 0, -2, 5];\nintegral = polyint(c);\n\na = polyval(integral, 2) - polyval(integral, 1);\ndisplay('Area: '), disp(double(a));"
},
{
"code": null,
"e": 7816,
"s": 7756,
"text": "Octave executes the code and returns the following result −"
},
{
"code": null,
"e": 7835,
"s": 7816,
"text": "Area: \n\n 5.7500\n"
},
{
"code": null,
"e": 7926,
"s": 7835,
"text": "An alternative solution can be given using quad() function provided by Octave as follows −"
},
{
"code": null,
"e": 8070,
"s": 7926,
"text": "pkg load symbolic\nsymbols\n\nx = sym(\"x\");\nf = inline(\"x^3 - 2*x +5\");\n\n[a, ierror, nfneval] = quad(f, 1, 2);\ndisplay('Area: '), disp(double(a));"
},
{
"code": null,
"e": 8130,
"s": 8070,
"text": "Octave executes the code and returns the following result −"
},
{
"code": null,
"e": 8148,
"s": 8130,
"text": "Area: \n 5.7500\n"
},
{
"code": null,
"e": 8213,
"s": 8148,
"text": "Find the area under the curve: f(x) = x2 cos(x) for −4 ≤ x ≤ 9."
},
{
"code": null,
"e": 8265,
"s": 8213,
"text": "Create a script file and write the following code −"
},
{
"code": null,
"e": 8350,
"s": 8265,
"text": "f = x^2*cos(x);\nezplot(f, [-4,9])\na = int(f, -4, 9)\ndisp('Area: '), disp(double(a));"
},
{
"code": null,
"e": 8398,
"s": 8350,
"text": "When you run the file, MATLAB plots the graph −"
},
{
"code": null,
"e": 8426,
"s": 8398,
"text": "The output is given below −"
},
{
"code": null,
"e": 8496,
"s": 8426,
"text": "a = \n8*cos(4) + 18*cos(9) + 14*sin(4) + 79*sin(9)\n \nArea: \n 0.3326\n"
},
{
"code": null,
"e": 8554,
"s": 8496,
"text": "Following is Octave equivalent of the above calculation −"
},
{
"code": null,
"e": 8738,
"s": 8554,
"text": "pkg load symbolic\nsymbols\n\nx = sym(\"x\");\nf = inline(\"x^2*cos(x)\");\n\nezplot(f, [-4,9])\nprint -deps graph.eps\n\n[a, ierror, nfneval] = quad(f, -4, 9);\ndisplay('Area: '), disp(double(a));"
},
{
"code": null,
"e": 8771,
"s": 8738,
"text": "\n 30 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 8784,
"s": 8771,
"text": " Nouman Azam"
},
{
"code": null,
"e": 8819,
"s": 8784,
"text": "\n 127 Lectures \n 12 hours \n"
},
{
"code": null,
"e": 8832,
"s": 8819,
"text": " Nouman Azam"
},
{
"code": null,
"e": 8865,
"s": 8832,
"text": "\n 17 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 8874,
"s": 8865,
"text": " Sanjeev"
},
{
"code": null,
"e": 8907,
"s": 8874,
"text": "\n 37 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 8923,
"s": 8907,
"text": " TELCOMA Global"
},
{
"code": null,
"e": 8956,
"s": 8923,
"text": "\n 22 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 8972,
"s": 8956,
"text": " TELCOMA Global"
},
{
"code": null,
"e": 9005,
"s": 8972,
"text": "\n 18 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 9022,
"s": 9005,
"text": " Phinite Academy"
},
{
"code": null,
"e": 9029,
"s": 9022,
"text": " Print"
},
{
"code": null,
"e": 9040,
"s": 9029,
"text": " Add Notes"
}
] |
C++ Program to Perform Edge Coloring of a Graph
|
In this program, we will perform Edge Coloring of a Graph, in which we have to
color the edges of the graph so that no two adjacent edges have the same color.
Steps in the example:
Begin
Take the input of the number of vertices, n, and then number of edges, e, in the graph.
The graph is stored as adjacency list.
BFS is implemented using queue and colors are assigned to each edge.
End
#include<bits/stdc++.h>
using namespace std;
int n, e, i, j;
vector<vector<pair<int, int> > > g;
vector<int> color;
bool v[111001];
void col(int n) {
queue<int> q;
int c = 0;
set<int> vertex_colored;
if(v[n])
return;
v[n] = 1;
for(i = 0;i<g[n].size();i++) {
if(color[g[n][i].second]!=-1) {
vertex_colored.insert(color[g[n][i].second]);
}
}
for(i = 0;i<g[n].size();i++) {
if(!v[g[n][i].first]) {
q.push(g[n][i].first);
}
if(color[g[n][i].second]==-1) {
while(vertex_colored.find(c)!=vertex_colored.end())
c++;
color[g[n][i].second] = c;
vertex_colored.insert(c);
c++;
}
}
while(!q.empty()) {
int temp = q.front();
q.pop();
col(temp);
}
return;
}
int main() {
int u,w;
set<int> empty;
cout<<"Enter number of vertices and edges respectively:";
cin>>n>>e;
cout<<"\n";
g.resize(n); //number of vertices
color.resize(e,-1); //number of edges
memset(v,0,sizeof(v));
for(i = 0;i<e;i++) {
cout<<"\nEnter edge vertices of edge "<<i+1<<" :"<<"\n";
cin>>u>>w;
u--; w--;
g[u].push_back(make_pair(w,i));
g[w].push_back(make_pair(u,i));
}
col(0);
for(i = 0;i<e;i++) {
cout<<"Edge "<<i+1<<" is coloured with colour "<<color[i]+1
<< "\n";
}
}
Enter number of vertices and edges respectively:4 5
Enter edge vertices of edge 1 :1 2
Enter edge vertices of edge 2 :2 3
Enter edge vertices of edge 3 :1 1
Enter edge vertices of edge 4 :3 4
Enter edge vertices of edge 5 :1 4
Edge 1 is coloured with colour 1
Edge 2 is coloured with colour 2
Edge 3 is coloured with colour 2
Edge 4 is coloured with colour 1
Edge 5 is coloured with colour 3
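The greedy idea above — give each edge the smallest color not already used by an edge sharing an endpoint — can be sketched compactly. This is a hypothetical Python rendering (function names and the 0-based edge list are my own, not from the C++ program):

```python
from collections import defaultdict

def greedy_edge_coloring(n, edges):
    """Color edges so that edges sharing a vertex get different colors."""
    incident = defaultdict(list)              # vertex -> indices of its edges
    for i, (u, w) in enumerate(edges):
        incident[u].append(i)
        incident[w].append(i)
    colors = [-1] * len(edges)
    for i, (u, w) in enumerate(edges):
        used = {colors[j] for j in incident[u] + incident[w] if colors[j] != -1}
        c = 0
        while c in used:                      # smallest free color
            c += 1
        colors[i] = c
    return colors

# Same graph as the sample run above, with 0-based vertices.
edges = [(0, 1), (1, 2), (0, 0), (2, 3), (0, 3)]
print(greedy_edge_coloring(4, edges))  # [0, 1, 1, 0, 2] -> colours 1,2,2,1,3
```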
|
[
{
"code": null,
"e": 1235,
"s": 1062,
"text": "In this program, we will perform Edge Coloring of a Graph in which we have to\ncolor the edges of the graph that no two adjacent edges have the same color.\nSteps in Example."
},
{
"code": null,
"e": 1450,
"s": 1235,
"text": "Begin\n Take the input of the number of vertices, n, and then number of edges, e, in the graph.\n The graph is stored as adjacency list.\n BFS is implemented using queue and colors are assigned to each edge.\nEnd"
},
{
"code": null,
"e": 2831,
"s": 1450,
"text": "#include<bits/stdc++.h>\nusing namespace std;\nint n, e, i, j;\nvector<vector<pair<int, int> > > g;\nvector<int> color;\nbool v[111001];\nvoid col(int n) {\n queue<int> q;\n int c = 0;\n set<int> vertex_colored;\n if(v[n])\n return;\n v[n] = 1;\n for(i = 0;i<g[n].size();i++) {\n if(color[g[n][i].second]!=-1) {\n vertex_colored.insert(color[g[n][i].second]);\n }\n }\n for(i = 0;i<g[n].size();i++) {\n if(!v[g[n][i].first]) {\n q.push(g[n][i].first);\n }\n if(color[g[n][i].second]==-1) {\n while(vertex_colored.find(c)!=vertex_colored.end())\n c++;\n color[g[n][i].second] = c;\n vertex_colored.insert(c);\n c++;\n }\n }\n while(!q.empty()) {\n int temp = q.front();\n q.pop();\n col(temp);\n }\n return;\n}\nint main() {\n int u,w;\n set<int> empty;\n cout<<\"Enter number of vertices and edges respectively:\";\n cin>>n>>e;\n cout<<\"\\n\";\n g.resize(n); //number of vertices\n color.resize(e,-1); //number of edges\n memset(v,0,sizeof(v));\n for(i = 0;i<e;i++) {\n cout<<\"\\nEnter edge vertices of edge \"<<i+1<<\" :\"<<\"\\n\";\n cin>>u>>w;\n u--; w--;\n g[u].push_back(make_pair(w,i));\n g[w].push_back(make_pair(u,i));\n }\n col(0);\n for(i = 0;i<e;i++) {\n cout<<\"Edge \"<<i+1<<\" is coloured with colour \"<<color[i]+1\n << \"\\n\";\n }\n}"
},
{
"code": null,
"e": 3223,
"s": 2831,
"text": "Enter number of vertices and edges respectively:4 5\nEnter edge vertices of edge 1 :1 2\nEnter edge vertices of edge 2 :2 3\nEnter edge vertices of edge 3 :1 1\nEnter edge vertices of edge 4 :3 4\nEnter edge vertices of edge 5 :1 4\nEdge 1 is coloured with colour 1\nEdge 2 is coloured with colour 2\nEdge 3 is coloured with colour 2\nEdge 4 is coloured with colour 1\nEdge 5 is coloured with colour 3"
}
] |
How to retrieve the Azure VM deallocated date using PowerShell?
|
To get the Azure VM deallocated date using PowerShell, we can use the below command.
PS C:\> Get-AzVM -VMName Win2k16VM1 -ResourceGroupName TestVMRG -Status
Here it will retrieve the PowerState of the VM.
To retrieve the date when the VM was deallocated, we need to filter out the result.
PS C:\> $vm = Get-AzVM -VMName Win2k16VM1 -ResourceGroupName TestVMRG -
Status
PS C:\> $vm.Statuses[0].Time
Saturday, June 19, 2021 12:49:16 PM
|
[
{
"code": null,
"e": 1146,
"s": 1062,
"text": "To get the Azure VM deallocated date using PowerShell we can use the below command."
},
{
"code": null,
"e": 1218,
"s": 1146,
"text": "PS C:\\> Get-AzVM -VMName Win2k16VM1 -ResourceGroupName TestVMRG -Status"
},
{
"code": null,
"e": 1266,
"s": 1218,
"text": "Here it will retrieve the PowerState of the VM."
},
{
"code": null,
"e": 1350,
"s": 1266,
"text": "To retrieve the date when the VM was deallocated, we need to filter out the result."
},
{
"code": null,
"e": 1459,
"s": 1350,
"text": "PS C:\\> $vm = Get-AzVM -VMName Win2k16VM1 -ResourceGroupName TestVMRG -\nStatus\n\nPS C:\\> $vm.Statuses[0].Time"
},
{
"code": null,
"e": 1495,
"s": 1459,
"text": "Saturday, June 19, 2021 12:49:16 PM"
}
] |
KnockoutJS - Click Binding
|
Click binding is one of the simplest bindings and is used to invoke a JavaScript function associated with a DOM element based on a click. This binding works like an event handler.
This is most commonly used with elements such as button, input, and a, but actually works with any visible DOM element.
Syntax
click: <binding-function>
Parameters
The parameter here will be a JavaScript function which needs to be invoked based on a click. This can be any function and need not be a ViewModel function.
Example
Let us look at the following example which demonstrates the use of click binding.
<!DOCTYPE html>
<head>
<title>KnockoutJS Click Binding</title>
<script src = "https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"
type = "text/javascript"></script>
</head>
<body>
<p>Enter your name: <input data-bind = "value: someValue" /></p>
<p><button data-bind = "click: showMessage">Click here</button></p>
<script type = "text/javascript">
function ViewModel () {
this.someValue = ko.observable();
this.showMessage = function() {
alert("Hello "+ this.someValue()+ "!!! How are you today?"+
"\nClick Binding is used here !!!");
}
};
var vm = new ViewModel();
ko.applyBindings(vm);
</script>
</body>
</html>
Output
Let's carry out the following steps to see how the above code works −
Save the above code in click-bind.htm file.
Open this HTML file in a browser.
Click the Click here button and a message will be shown on the screen.
Enter your name:
Click here
It is also possible to provide a current model value as a parameter when the handler function is called. This is useful when dealing with a collection of data, wherein the same action needs to be performed on a set of items.
Example
Let us look at the following example to understand it better.
<!DOCTYPE html>
<head>
<title>KnockoutJS Click binding</title>
<script src = "https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"
type = "text/javascript"></script>
</head>
<body>
<p>List of product details:</p>
<ul data-bind = "foreach: productArray ">
<li>
<span data-bind = "text: productName"></span>
<a href = "#" data-bind = "click: $parent.removeProduct">Remove </a>
</li>
</ul>
<script type = "text/javascript">
function AppViewModel() {
self = this;
self.productArray = ko.observableArray ([
{productName: 'Milk'},
{productName: 'Oil'},
{productName: 'Shampoo'}
]);
self.removeProduct = function() {
self.productArray.remove(this);
}
};
var vm = new AppViewModel();
ko.applyBindings(vm);
</script>
</body>
</html>
Output
Let's carry out the following steps to see how the above code works −
Save the above code in click-for-current-item.htm file.
Open this HTML file in a browser.
removeProduct function is called every time the Remove link is clicked, and it is called for that particular item in the array.
Note that the $parent binding context is used to reach the handler function.
List of product details −
Milk Remove
Oil Remove
Shampoo Remove
DOM event along with the current model value can also be passed to the handler function.
Example
Let us take a look at the following example to understand it better.
<!DOCTYPE html>
<head>
<title>KnockoutJS Click Binding</title>
<script src = "https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"
type = "text/javascript"></script>
</head>
<body>
<p>Press Control key + click below button.</p>
<p><button data-bind = "click: showMessage">Click here to read message</button></p>
<script type = "text/javascript">
function ViewModel () {
this.showMessage = function(data,event) {
alert("Click Binding is used here !!!");
if (event.ctrlKey) {
alert("User was pressing down the Control key.");
}
}
};
var vm = new ViewModel();
ko.applyBindings(vm);
</script>
</body>
</html>
Output
Let's carry out the following steps to see how the above code works −
Save the above code in click-bind-more-params.htm file.
Open this HTML file in a browser.
Pressing of the control key is captured by this binding.
Press Control key + click below button.
Click here to read message
By default, KnockoutJS prevents the click event from performing any default action. This means that if click binding is used on an <a> tag, the browser will only call the handler function and will not actually take you to the link mentioned in href.
If you want the default action to take place in click binding, then you just need to return true from your handler function.
Example
Let us look at the following example which demonstrates the default action performed by click binding.
<!DOCTYPE html>
<head>
<title>KnockoutJS Click Binding - allowing default action</title>
<script src = "https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"
type = "text/javascript"></script>
</head>
<body>
<a href = "http://www.tutorialspoint.com//" target = "_blank"
data-bind = "click: callUrl">Click here to see how default
Click binding works.
</a>
<script type = "text/javascript">
function ViewModel() {
this.callUrl = function() {
alert("Default action in Click Binding is allowed here !!!
You are redirected to link.");
return true;
}
};
var vm = new ViewModel();
ko.applyBindings(vm);
</script>
</body>
</html>
Output
Let's carry out the following steps to see how the above code works −
Save the above code in click-default-bind.htm file.
Open this HTML file in a browser.
Click the link and a message will be shown on the screen. The URL mentioned in href opens in a new window.
KO allows the click event to bubble up to higher-level event handlers. This means that if you have two nested click events, the click handler functions for both of them will be called. If needed, this bubbling can be prevented by adding an extra binding called clickBubble and passing the value false to it.
Example
Let us look at the following example which demonstrates the use of clickBubble binding.
<!DOCTYPE html>
<head>
<title>KnockoutJS Click Binding - handling clickBubble</title>
<script src = "https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"
type = "text/javascript"></script>
</head>
<body>
<div data-bind = "click: outerFunction">
<button data-bind = "click: innerFunction, clickBubble:false">
Click me to see use of clickBubble.
</button>
</div>
<script type = "text/javascript">
function ViewModel () {
this.outerFunction = function() {
alert("Handler function from Outer loop called.");
}
this.innerFunction = function() {
alert("Handler function from Inner loop called.");
}
};
var vm = new ViewModel();
ko.applyBindings(vm);
</script>
</body>
</html>
Output
Let's carry out the following steps to see how the above code works −
Save the above code in click-cllickbubble-bind.htm file.
Open this HTML file in a browser.
Click the button and observe that adding the clickBubble binding with the value false prevents the event from making it past innerFunction.
38 Lectures
2 hours
Skillbakerystudios
Print
Add Notes
Bookmark this page
|
[
{
"code": null,
"e": 2031,
"s": 1852,
"text": "Click binding is one of the simplest binding and is used to invoke a JavaScript function associated with a DOM element based on a click. This binding works like an event handler."
},
{
"code": null,
"e": 2151,
"s": 2031,
"text": "This is most commonly used with elements such as button, input, and a, but actually works with any visible DOM element."
},
{
"code": null,
"e": 2158,
"s": 2151,
"text": "Syntax"
},
{
"code": null,
"e": 2185,
"s": 2158,
"text": "click: <binding-function>\n"
},
{
"code": null,
"e": 2196,
"s": 2185,
"text": "Parameters"
},
{
"code": null,
"e": 2352,
"s": 2196,
"text": "The parameter here will be a JavaScript function which needs to be invoked based on a click. This can be any function and need not be a ViewModel function."
},
{
"code": null,
"e": 2360,
"s": 2352,
"text": "Example"
},
{
"code": null,
"e": 2442,
"s": 2360,
"text": "Let us look at the following example which demonstrates the use of click binding."
},
{
"code": null,
"e": 3253,
"s": 2442,
"text": "<!DOCTYPE html>\n <head>\n <title>KnockoutJS Click Binding</title>\n <script src = \"https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js\"\n type = \"text/javascript\"></script>\n </head>\n \n <body>\n\n <p>Enter your name: <input data-bind = \"value: someValue\" /></p>\n <p><button data-bind = \"click: showMessage\">Click here</button></p>\n\n <script type = \"text/javascript\">\n function ViewModel () {\n this.someValue = ko.observable();\n \n this.showMessage = function() {\n alert(\"Hello \"+ this.someValue()+ \"!!! How are you today?\"+ \n \"\\nClick Binding is used here !!!\");\n }\n };\n\n var vm = new ViewModel();\n ko.applyBindings(vm);\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 3260,
"s": 3253,
"text": "Output"
},
{
"code": null,
"e": 3330,
"s": 3260,
"text": "Let's carry out the following steps to see how the above code works −"
},
{
"code": null,
"e": 3374,
"s": 3330,
"text": "Save the above code in click-bind.htm file."
},
{
"code": null,
"e": 3418,
"s": 3374,
"text": "Save the above code in click-bind.htm file."
},
{
"code": null,
"e": 3452,
"s": 3418,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 3486,
"s": 3452,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 3557,
"s": 3486,
"text": "Click the Click here button and a message will be shown on the screen."
},
{
"code": null,
"e": 3628,
"s": 3557,
"text": "Click the Click here button and a message will be shown on the screen."
},
{
"code": null,
"e": 3646,
"s": 3628,
"text": "Enter your name: "
},
{
"code": null,
"e": 3657,
"s": 3646,
"text": "Click here"
},
{
"code": null,
"e": 3882,
"s": 3657,
"text": "It is also possible to provide a current model value as a parameter when the handler function is called. This is useful when dealing with a collection of data, wherein the same action needs to be performed on a set of items."
},
{
"code": null,
"e": 3890,
"s": 3882,
"text": "Example"
},
{
"code": null,
"e": 3952,
"s": 3890,
"text": "Let us look at the following example to understand it better."
},
{
"code": null,
"e": 4964,
"s": 3952,
"text": "<!DOCTYPE html>\n <head>\n <title>KnockoutJS Click binding</title>\n <script src = \"https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js\"\n type = \"text/javascript\"></script>\n </head>\n \n <body>\n <p>List of product details:</p>\n <ul data-bind = \"foreach: productArray \">\n <li>\n <span data-bind = \"text: productName\"></span>\n <a href = \"#\" data-bind = \"click: $parent.removeProduct\">Remove </a>\n </li>\n </ul>\n\n <script type = \"text/javascript\">\n function AppViewModel() {\n self = this;\n self.productArray = ko.observableArray ([\n {productName: 'Milk'},\n {productName: 'Oil'},\n {productName: 'Shampoo'}\n ]);\n\n self.removeProduct = function() {\n self.productArray.remove(this);\n }\n };\n \n var vm = new AppViewModel();\n ko.applyBindings(vm);\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 4971,
"s": 4964,
"text": "Output"
},
{
"code": null,
"e": 5041,
"s": 4971,
"text": "Let's carry out the following steps to see how the above code works −"
},
{
"code": null,
"e": 5097,
"s": 5041,
"text": "Save the above code in click-for-current-item.htm file."
},
{
"code": null,
"e": 5153,
"s": 5097,
"text": "Save the above code in click-for-current-item.htm file."
},
{
"code": null,
"e": 5187,
"s": 5153,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 5221,
"s": 5187,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 5341,
"s": 5221,
"text": "removeProduct function is called every time the Remove link is clicked and is called for that particular item in array."
},
{
"code": null,
"e": 5461,
"s": 5341,
"text": "removeProduct function is called every time the Remove link is clicked and is called for that particular item in array."
},
{
"code": null,
"e": 5538,
"s": 5461,
"text": "Note that the $parent binding context is used to reach the handler function."
},
{
"code": null,
"e": 5615,
"s": 5538,
"text": "Note that the $parent binding context is used to reach the handler function."
},
{
"code": null,
"e": 5641,
"s": 5615,
"text": "List of product details −"
},
{
"code": null,
"e": 5656,
"s": 5641,
"text": "\nMilk Remove \n"
},
{
"code": null,
"e": 5670,
"s": 5656,
"text": "\nOil Remove \n"
},
{
"code": null,
"e": 5688,
"s": 5670,
"text": "\nShampoo Remove \n"
},
{
"code": null,
"e": 5777,
"s": 5688,
"text": "DOM event along with the current model value can also be passed to the handler function."
},
{
"code": null,
"e": 5785,
"s": 5777,
"text": "Example"
},
{
"code": null,
"e": 5854,
"s": 5785,
"text": "Let us take a look at the following example to understand it better."
},
{
"code": null,
"e": 6688,
"s": 5854,
"text": "<!DOCTYPE html>\n <head>\n <title>KnockoutJS Click Binding</title>\n <script src = \"https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js\"\n type = \"text/javascript\"></script>\n </head>\n \n <body>\n <p>Press Control key + click below button.</p>\n <p><button data-bind = \"click: showMessage\">Click here to read message</button></p>\n\n <script type = \"text/javascript\">\n function ViewModel () {\n \n this.showMessage = function(data,event) {\n alert(\"Click Binding is used here !!!\");\n \n if (event.ctrlKey) {\n alert(\"User was pressing down the Control key.\");\n }\n }\n };\n\n var vm = new ViewModel();\n ko.applyBindings(vm);\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 6695,
"s": 6688,
"text": "Output"
},
{
"code": null,
"e": 6765,
"s": 6695,
"text": "Let's carry out the following steps to see how the above code works −"
},
{
"code": null,
"e": 6821,
"s": 6765,
"text": "Save the above code in click-bind-more-params.htm file."
},
{
"code": null,
"e": 6877,
"s": 6821,
"text": "Save the above code in click-bind-more-params.htm file."
},
{
"code": null,
"e": 6911,
"s": 6877,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 6945,
"s": 6911,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 7002,
"s": 6945,
"text": "Pressing of the control key is captured by this binding."
},
{
"code": null,
"e": 7059,
"s": 7002,
"text": "Pressing of the control key is captured by this binding."
},
{
"code": null,
"e": 7099,
"s": 7059,
"text": "Press Control key + click below button."
},
{
"code": null,
"e": 7127,
"s": 7099,
"text": "Click here to read message "
},
{
"code": null,
"e": 7361,
"s": 7127,
"text": "KnockoutJS prevents click event to perform any default action by default. Meaning if Click binding is used on <a> tag, then the browser will only call the handler function and will not actually take you to the link mentioned in href."
},
{
"code": null,
"e": 7486,
"s": 7361,
"text": "If you want the default action to take place in click binding, then you just need to return true from your handler function."
},
{
"code": null,
"e": 7494,
"s": 7486,
"text": "Example"
},
{
"code": null,
"e": 7597,
"s": 7494,
"text": "Let us look at the following example which demonstrates the default action performed by click binding."
},
{
"code": null,
"e": 8439,
"s": 7597,
"text": "<!DOCTYPE html>\n <head>\n <title>KnockoutJS Click Binding - allowing default action</title>\n <script src = \"https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js\"\n type = \"text/javascript\"></script>\n </head>\n \n <body>\n <a href = \"http://www.tutorialspoint.com//\" target = \"_blank\" \n data-bind = \"click: callUrl\">Click here to see how default \n Click binding works.\n </a>\n\n <script type = \"text/javascript\">\n function ViewModel() {\n \n this.callUrl = function() {\n alert(\"Default action in Click Binding is allowed here !!! \n You are redirected to link.\");\n return true;\n }\n };\n\n var vm = new ViewModel();\n ko.applyBindings(vm);\n </script>\n \n </body>\n</html>\n"
},
{
"code": null,
"e": 8446,
"s": 8439,
"text": "Output"
},
{
"code": null,
"e": 8516,
"s": 8446,
"text": "Let's carry out the following steps to see how the above code works −"
},
{
"code": null,
"e": 8568,
"s": 8516,
"text": "Save the above code in click-default-bind.htm file."
},
{
"code": null,
"e": 8620,
"s": 8568,
"text": "Save the above code in click-default-bind.htm file."
},
{
"code": null,
"e": 8654,
"s": 8620,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 8688,
"s": 8654,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 8795,
"s": 8688,
"text": "Click the link and a message will be shown on the screen. The URL mentioned in href opens in a new window."
},
{
"code": null,
"e": 8902,
"s": 8795,
"text": "Click the link and a message will be shown on the screen. The URL mentioned in href opens in a new window."
},
{
"code": null,
"e": 9211,
"s": 8902,
"text": "KO will allow the click event to bubble up to the higher level event handlers. Meaning if you have 2 click events nested, then the click handler function for both of them will be called. If needed, this bubbling can be prevented by adding an extra binding called as clickBubble and passing false value to it."
},
{
"code": null,
"e": 9219,
"s": 9211,
"text": "Example"
},
{
"code": null,
"e": 9307,
"s": 9219,
"text": "Let us look at the following example which demonstrates the use of clickBubble binding."
},
{
"code": null,
"e": 10219,
"s": 9307,
"text": "<!DOCTYPE html>\n <head>\n <title>KnockoutJS Click Binding - handling clickBubble</title>\n <script src = \"https://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js\"\n type = \"text/javascript\"></script>\n </head>\n \n <body>\n <div data-bind = \"click: outerFunction\">\n <button data-bind = \"click: innerFunction, clickBubble:false\">\n Click me to see use of clickBubble.\n </button>\n </div>\n\n <script type = \"text/javascript\">\n function ViewModel () {\n \n this.outerFunction = function() {\n alert(\"Handler function from Outer loop called.\");\n }\n \n this.innerFunction = function() {\n alert(\"Handler function from Inner loop called.\");\n }\n };\n\n var vm = new ViewModel();\n ko.applyBindings(vm);\n </script>\n \n </body>\n</html>\n"
},
{
"code": null,
"e": 10226,
"s": 10219,
"text": "Output"
},
{
"code": null,
"e": 10296,
"s": 10226,
"text": "Let's carry out the following steps to see how the above code works −"
},
{
"code": null,
"e": 10353,
"s": 10296,
"text": "Save the above code in click-cllickbubble-bind.htm file."
},
{
"code": null,
"e": 10410,
"s": 10353,
"text": "Save the above code in click-cllickbubble-bind.htm file."
},
{
"code": null,
"e": 10444,
"s": 10410,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 10478,
"s": 10444,
"text": "Open this HTML file in a browser."
},
{
"code": null,
"e": 10613,
"s": 10478,
"text": "Click the button and observe that adding of clickBubble binding with value false prevents the event from making it past innerFunction."
},
{
"code": null,
"e": 10748,
"s": 10613,
"text": "Click the button and observe that adding of clickBubble binding with value false prevents the event from making it past innerFunction."
},
{
"code": null,
"e": 10781,
"s": 10748,
"text": "\n 38 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 10801,
"s": 10781,
"text": " Skillbakerystudios"
},
{
"code": null,
"e": 10808,
"s": 10801,
"text": " Print"
},
{
"code": null,
"e": 10819,
"s": 10808,
"text": " Add Notes"
}
] |
Understand Universal Approximation Theorem with Code | by Timothy Lim | Towards Data Science
|
The Universal approximation theorem claims that the standard multi-layer feedforward networks with a single hidden layer that contains a finite number of hidden neurons are able to approximate continuous functions with the usage of arbitrary activation functions. (source)
However, the ability of a neural network to approximate any continuous function mapping the input to the output is constrained by the number of neurons, the number of hidden layers, and the many techniques used during the training process. Intuitively, you can think of this as asking whether there are enough computational units and operations set up to approximate a continuous function that properly maps the input to the output. The ability to approximate also depends heavily on the efficiency of the optimization routine and the loss function we use.
Suggestion: Download the script, run it yourself, and play around with the parameters. The repo is (here). If you have forgotten about neural networks, read about them (here).
The parameters that determine the setup and training of the neural network are commonly known as hyperparameters.
Example of hyperparameters we can tune in the code:
Network structure. (Number of hidden layers, number of neurons)
model = nn.Sequential(
    nn.Linear(1, n_neurons),
    nn.ReLU(),
    # nn.Linear(n_neurons, n_neurons),
    # nn.ReLU(),
    nn.Linear(n_neurons, 1),
    nn.ReLU()
)
2. Number of epochs (the number of times we go through all the data), line 57
3. Loss function and optimizer. There are many optimizers available; check them out [here]:
optimizer = optim.RMSprop(model.parameters(), lr=learning_rate)  # define optimizer
# optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.MSELoss()  # define loss function
We can run some experiments in code to better understand the concept of approximation. Given that the function we are trying to approximate has the relationship y = x^2, we can run experiments to gauge how many neurons a single hidden layer needs to fit the y = x^2 curve, and tune hyperparameters in search of the best results.
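The article's repo uses PyTorch; as a dependency-free sketch of the same experiment, here is a single-hidden-layer ReLU network fitted to y = x^2 with plain stochastic gradient descent. All names and hyperparameter values below are illustrative, not taken from the article's script:

```python
import random

random.seed(0)

N_NEURONS = 20  # width of the single hidden layer (illustrative)
LR = 0.01       # learning rate (illustrative)

# Parameters of one hidden ReLU layer plus a linear output unit.
w1 = [random.uniform(-1.0, 1.0) for _ in range(N_NEURONS)]
b1 = [0.0] * N_NEURONS
w2 = [random.uniform(-1.0, 1.0) for _ in range(N_NEURONS)]
b2 = 0.0

xs = [i / 50.0 - 1.0 for i in range(101)]  # 101 inputs in [-1, 1]
ys = [x * x for x in xs]                   # targets: y = x^2

def forward(x):
    h = [max(0.0, w1[j] * x + b1[j]) for j in range(N_NEURONS)]
    out = sum(w2[j] * h[j] for j in range(N_NEURONS)) + b2
    return h, out

for epoch in range(1000):
    for x, y in zip(xs, ys):
        h, pred = forward(x)
        err = pred - y  # gradient of 0.5 * (pred - y)^2 w.r.t. pred
        for j in range(N_NEURONS):
            # Backpropagate through the ReLU: zero gradient when inactive.
            grad_h = err * w2[j] if h[j] > 0.0 else 0.0
            w2[j] -= LR * err * h[j]
            w1[j] -= LR * grad_h * x
            b1[j] -= LR * grad_h
        b2 -= LR * err

mse = sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print("MSE:", round(mse, 5))
```

With 20 hidden units the piecewise-linear output tracks the parabola closely, mirroring the article's Fig 1 observation; widening the layer or tuning the learning rate tightens the fit further.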
From the figure above (Fig 1), we can see that with 20 neurons in a single hidden layer, the neural network is able to approximate the function pretty well just by training on the output values. Increasing to 50 neurons in the single hidden layer gave us even better results.
Recap, a simple illustration for a single hidden layer feedforward network architecture with 8 neurons in case you forget:
In theory, the universal approximation theorems imply that neural networks can approximate a wide variety of functions very well when given appropriate parameter values. However, learning those appropriate values is not always possible, due to the challenges involved in training the network in search of such values.
From the figure above (Fig 3), with the same single-hidden-layer architecture, the network approximates very poorly. This is because training a neural network does not always provide us with precise values. Therefore, we have to be aware that even though in theory the neural network could approximate a very accurate continuous function mapping, it may fail to come close to the expected continuous function, as the training process of the neural network comes with its own challenges.
Running another experiment, we connected another hidden layer with 20 neurons and 50 neurons; the results can be seen in the figure above (Fig 4). It can be observed that the approximation of the predicted function is much better without spending much time tuning the training parameters, which is to be expected. Increasing the number of neurons and connections in search of a better approximation is a pretty good heuristic, but we have to remember that the training process carries its own challenges, which may deter the neural network from learning the best values needed to approximate the function even if more than enough nodes are available in theory.
Another important takeaway from the experiment is that by spending more time tuning the hyperparameters of the neural network, we can actually get a near-perfect approximation with the same architecture of 1 hidden layer with 50 neurons, as shown in the figure above (Fig 5). It can be observed that the results are even better than using 2 hidden layers with bad hyperparameters. The experiment with 2 hidden layers could definitely approximate better if we spent more time tuning the hyperparameters. This shows how important the optimization routine and certain hyperparameters are to training the network. With 2 layers and more neurons, it does not take much tuning to get a good result because there are more connections and nodes to use. However, as we add more nodes and layers, training gets more computationally expensive.
Lastly, if the relationship is too complex, 1 hidden layer with 50 neurons may not even theoretically be able to approximate the input-to-output mapping well enough in the first place. y = x^2 is a relatively easy relationship to approximate, but consider inputs such as images. The relationship between image pixel values and the classification of the image is ridiculously complex; not even the best mathematician could come up with an appropriate function by hand. However, we can use neural networks to approximate such complex relationships by adding more hidden layers and neurons. This gave birth to the field of Deep Learning, a subset of Machine Learning focused on using neural networks with many layers (e.g., deep neural networks, deep convolutional networks) to learn very complex function mappings.
Please do try other functions such as sin(x) or cos(x) and see whether you can approximate the relationship well. You may keep failing until you get the hyperparameters right, but it will give you good insight into tuning hyperparameters. If a function is too hard to approximate, go ahead and add more layers and neurons! Try out different optimizers such as SGD and Adam, and compare the results.
|
[
{
"code": null,
"e": 445,
"s": 172,
"text": "The Universal approximation theorem claims that the standard multi-layer feedforward networks with a single hidden layer that contains a finite number of hidden neurons are able to approximate continuous functions with the usage of arbitrary activation functions. (source)"
},
{
"code": null,
"e": 1022,
"s": 445,
"text": "However, the ability of the neural network to approximate any continuous functions mapping the input to the output goal is constraint by the number of neurons, hidden layers and many techniques utilised during the training process of the network. Intuitively, you can think of this as to whether are there possibly enough computational units and operations set up to approximate a continuous function that can properly map the input to the output. The ability to approximate is also highly dependent on the efficiency of the optimization routine and loss function that we use."
},
{
"code": null,
"e": 1191,
"s": 1022,
"text": "Suggestion: Download the script, run it yourself, and play around the parameters. The repo is (here). If you have forgotten about neural networks, read about it (here)."
},
{
"code": null,
"e": 1303,
"s": 1191,
"text": "These parameters determining the setup and training of the neural network is commonly known as hyperparameters."
},
{
"code": null,
"e": 1355,
"s": 1303,
"text": "Example of hyperparameters we can tune in the code:"
},
{
"code": null,
"e": 1419,
"s": 1355,
"text": "Network structure. (Number of hidden layers, number of neurons)"
},
{
"code": null,
"e": 1483,
"s": 1419,
"text": "Network structure. (Number of hidden layers, number of neurons)"
},
{
"code": null,
"e": 1625,
"s": 1483,
"text": "model = nn.Sequential( nn.Linear(1, n_neurons), nn.ReLU(), #nn.Linear(n_neurons,n_neurons), #nn.ReLU(), nn.Linear(n_neurons,1), nn.ReLU() )"
},
{
"code": null,
"e": 1698,
"s": 1625,
"text": "2. Number of epochs (number of time we go through all the data), line 57"
},
{
"code": null,
"e": 1791,
"s": 1698,
"text": "3. Loss function and optimizer, there are so many optimizers available, check it out [here]:"
},
{
"code": null,
"e": 1981,
"s": 1791,
"text": "optimizer = optim.RMSprop(model.parameters(), lr=learning_rate) # define optimizer#optimizer = optim.SGD(model.parameters(), lr=learning_rate)criterion = nn.MSELoss() # define loss function"
},
{
"code": null,
"e": 2325,
"s": 1981,
"text": "We can run some experiments in code to better understand the concept of approximation. Given that the function we are trying to approximate has the relationship of y = x2, we can run some experiments to gauge how many neurons for a single hidden layer is necessary to fit the y=x2 curve and tune hyperparameters in search for the best results."
},
{
"code": null,
"e": 2604,
"s": 2325,
"text": "From the figure above (Fig 1), we can see the with 20 neurons in a single hidden layer, the neural network is able to approximate the function pretty well just by training on the output values. Increasing to 50 neurons in the single hidden layer provided us with better results."
},
{
"code": null,
"e": 2727,
"s": 2604,
"text": "Recap, a simple illustration for a single hidden layer feedforward network architecture with 8 neurons in case you forget:"
},
{
"code": null,
"e": 3084,
"s": 2727,
"text": "In theory, the universal approximation theorems imply that neural networks can approximate a wide variety of functions very well when given an appropriate combination value. However, learning to construct the network with the appropriate values is not always possible due to the constraint/challenges when training the network in the search of such values."
},
{
"code": null,
"e": 3604,
"s": 3084,
"text": "From the figure above (Fig 3), the same architecture of a single hidden layer, the network approximates very poorly. This is due to the fact that training the neural network does not always provide us with precise/perfect values. Therefore, we have to be aware that even though theoretically the neural network could approximate a very accurate continuous function mapping, it may fail to approximate close to the expected continuous function as the training process of the neural network comes with its own challenges."
},
{
"code": null,
"e": 4276,
"s": 3604,
"text": "Running another experiment, we connected another hidden layer with 20 neurons and 50 neurons, the results can be seen in the figure above (Fig 4) . It can be observed that the approximation of the predicted function is much better without spending much time tuning the training parameters which is of expectation. Increasing the neurons and connections present in search for better approximation is a pretty good heuristic but we have to remember that in the process of training the neurons lies a few challenges too that may deter the neural network from learning the best values needed to approximate the function even if more than enough nodes are available in theory."
},
{
"code": null,
"e": 5103,
"s": 4276,
"text": "Another important takeaway from the experiment is that by spending more time tuning the hyperparameters of the neural network, we can actually get a near-perfect approximation with the same architecture of 1 hidden layer with 50 neurons as shown in the figure above (Fig 5). It can be observed that the results are even better than using 2 hidden layers with bad hyperparameters. The experiment with 2 hidden layers can definitely approximate better if we spend more time tuning the hyperparameters. This shows how important is the optimization routine and certain hyperparameters are to training the network. With 2 layers and more neurons, it does not take much tuning to get a good result because there are more connections and nodes to use. However, as we add more nodes and layers, it gets more computationally expensive."
},
{
"code": null,
"e": 5953,
"s": 5103,
"text": "Lastly, if the relationship is too complex, 1 hidden layer with 50 neurons may not even theoretically be able to approximate the input to output mapping well enough in the first place. y=x2 is a relatively easy relationship to approximate but we can think of the relationship of inputs such as images. The relationship of image pixel values to the classification of the image is ridiculously complex where not even the best mathematician can possibly come out with an appropriate function. However, we can use neural networks to approximate such complex relationships by adding more hidden layers and neurons. This gave birth to the field of Deep Learning which is a subset of Machine Learning focusing on utilising neural networks with a lot of layers (ex: deep neural networks, deep convolutional networks) to learn very complex function mappings."
}
] |
TensorFlow Lite Android Support Library: Simplify ML On Android | by Shubham Panchal | Towards Data Science
|
Everyone loves TensorFlow and even more when you can run a TF model on Android directly. We all use TensorFlow Lite on Android and we have a couple of CodeLabs on it too. Using the Interpreter class on Android, we are currently running our .tflite models in apps.
But we have to do a lot before that, right? If we’re performing an image classification task, you’ll probably get a Bitmap or an Image object from the Camera library and then we transform it into a float[][][] or a byte[] . Then we load our model from the assets folder as a MappedByteBuffer . After calling interpreter.run() , we get the class probabilities, on which we perform the argmax() operation and then finally get a label from the labels.txt file.
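The post-processing described above (argmax over the class probabilities, then a lookup into labels.txt) boils down to the following, sketched in plain Python rather than the Android API; the probability values and labels are made up for illustration:

```python
probs = [0.05, 0.80, 0.15]       # hypothetical class probabilities from the model
labels = ["cat", "dog", "bird"]  # hypothetical contents of labels.txt

# argmax: index of the highest probability, then map it to its label.
best = max(range(len(probs)), key=probs.__getitem__)
print(labels[best])  # -> dog
```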
This is the traditional approach which we developers follow and there’s no other way round.
The TensorFlow team has released the TensorFlow Lite Android Support Library to solve the tedious tasks of preprocessing. The GitHub page gives an intuition of their aim,
Mobile application developers typically interact with typed objects such as bitmaps or primitives such as integers. However, the TensorFlow Lite Interpreter that runs the on-device machine learning model uses tensors in the form of ByteBuffer, which can be difficult to debug and manipulate. The TensorFlow Lite Android Support Library is designed to help process the input and output of TensorFlow Lite models, and make the TensorFlow Lite interpreter easier to use.
First, we need to get this right in our Android project. Remember the build.gradle file? Right! We’ll add these dependencies to our app-level build.gradle file,
The first step in running a TFLite model is to create array objects which can store the inputs for our model as well as the outputs the model will produce. To make our lives easier and spare us from struggling with float[] objects, the TF Support Library includes a TensorBuffer class that takes in the shape of the desired array and its data type.
Note: As of 1st April 2020, only DataType.FLOAT32 and DataType.UINT8 are supported.
You can even create a TensorBuffer object from an existing TensorBuffer object by modifying its data type,
val newImage = TensorImage.createFrom(image, DataType.FLOAT32)
If you’re working with object detection, image classification or other image-related models, you need to work on a Bitmap and resize or normalize it. We have three ops for this, namely ResizeOp, ResizeWithCropOrPadOp and Rot90Op.
First, we define our preprocessing pipeline using the ImageProcessor class.
Question: What are BILINEAR and NEAREST_NEIGHBOR methods?
Answer: Read this.
Next, create a TensorImage object and process the image.
Normalization of image arrays is necessary for almost all models, be they image classification models or regression models. For processing tensors, we have a TensorProcessor. Along with NormalizeOp, we have CastOp, QuantizeOp and DequantizeOp.
Question: What is Normalization?
Answer: The process of converting an actual range of values into a standard range of values, typically -1 to +1 or 0 to 1. For example, suppose the natural range of a certain feature is 800 to 6,000. Through subtraction and division, you can normalize those values into the range -1 to +1.
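A quick arithmetic check of that definition (plain Python, not the TFLite NormalizeOp API; the helper name is made up), mapping the 800-to-6,000 feature range into [-1, 1]:

```python
def normalize(value, lo, hi, new_lo=-1.0, new_hi=1.0):
    """Linearly map value from the range [lo, hi] into [new_lo, new_hi]."""
    return (value - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

print(normalize(800, 800, 6000))   # lower bound of the feature -> -1.0
print(normalize(6000, 800, 6000))  # upper bound -> 1.0
print(normalize(3400, 800, 6000))  # midpoint -> 0.0
```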
Also, we have the freedom to build custom ops by implementing TensorOperator class, as shown below.
We can easily load our .tflite model using the FileUtil.loadMappedFile() method. Similarly, we can load the labels from an InputStream or from the assets folder.
And then we perform inference using Interpreter.run():
I hope you liked the new TensorFlow Lite Android Support Library. This was a quick review of what’s inside but try exploring it yourself too. Thanks for reading!
|
[
{
"code": null,
"e": 435,
"s": 171,
"text": "Everyone loves TensorFlow and even more when you can run a TF model on Android directly. We all use TensorFlow Lite on Android and we have a couple of CodeLabs on it too. Using the Interpreter class on Android, we are currently running our .tflite models in apps."
},
{
"code": null,
"e": 893,
"s": 435,
"text": "But we have to do a lot before that, right? If we’re performing an image classification task, you’ll probably get a Bitmap or an Image object from the Camera library and then we transform it into a float[][][] or a byte[] . Then we load our model from the assets folder as a MappedByteBuffer . After calling interpreter.run() , we get the class probabilities, on which we perform the argmax() operation and then finally get a label from the labels.txt file."
},
{
"code": null,
"e": 985,
"s": 893,
"text": "This is the traditional approach which we developers follow and there’s no other way round."
},
{
"code": null,
"e": 1156,
"s": 985,
"text": "The TensorFlow team has released the TensorFlow Lite Android Support Library to solve the tedious tasks of preprocessing. The GitHub page gives an intuition of their aim,"
},
{
"code": null,
"e": 1624,
"s": 1156,
"text": "Mobile application developers typically interact with typed objects such as bitmaps or primitives such as integers. However, the TensorFlow Lite Interpreter that runs the on-device machine learning model uses tensors in the form of ByteBuffer, which can be difficult to debug and manipulate. The TensorFlow Lite Android Support Library is designed to help process the input and output of TensorFlow Lite models, and make the TensorFlow Lite interpreter easier to use."
},
{
"code": null,
"e": 1784,
"s": 1624,
"text": "First, we need to get this right in our Android project. Remembering build.gradle file? Right! We’ll add these dependencies in our app-level build.gradle file,"
},
{
"code": null,
"e": 2127,
"s": 1784,
"text": "The first step in running a TFLite model is to create some array object which can store the inputs for our model as well the outputs which the model will produce. To make our lives easier and less struggling with float[] objects, TF Support Library includes a TensorBuffer class that takes in the shape of the desired array and its data type."
},
{
"code": null,
"e": 2210,
"s": 2127,
"text": "Note: As of 1st pril 2020, only DataType.FLOAT32 and DataType.UINT8 are supported."
},
{
"code": null,
"e": 2317,
"s": 2210,
"text": "You can even create a TensorBuffer object from an existing TensorBuffer object by modifying its data type,"
},
{
"code": null,
"e": 2383,
"s": 2317,
"text": "val newImage = TensorImage.createFrom( image , DataType.FLOAT32 )"
},
{
"code": null,
"e": 2618,
"s": 2383,
"text": "If you’re working with object detection, image classification or other images -related models, you need to work on Bitmap and resize it or normalize it. We have three ops for this namely, ResizeOp , ResizeWithCropOrPadOp and Rot900p ."
},
{
"code": null,
"e": 2694,
"s": 2618,
"text": "First, we define our preprocessing pipeline using the ImageProcessor class."
},
{
"code": null,
"e": 2752,
"s": 2694,
"text": "Question: What are BILINEAR and NEAREST_NEIGHBOR methods?"
},
{
"code": null,
"e": 2771,
"s": 2752,
"text": "Answer: Read this."
},
{
"code": null,
"e": 2828,
"s": 2771,
"text": "Next, create a TensorImage object and process the image."
},
{
"code": null,
"e": 3074,
"s": 2828,
"text": "Normalization of image arrays is necessary for almost all models to be it image classification models or regression models. For processing tensors, we have a TensorProcessor . Along with NormalizeOp we have CastOp , QuantizeOp and DequantizeOp ."
},
{
"code": null,
"e": 3107,
"s": 3074,
"text": "Question: What is Normalization?"
},
{
"code": null,
"e": 3397,
"s": 3107,
"text": "Answer: The process of converting an actual range of values into a standard range of values, typically -1 to +1 or 0 to 1. For example, suppose the natural range of a certain feature is 800 to 6,000. Through subtraction and division, you can normalize those values into the range -1 to +1."
},
{
"code": null,
"e": 3497,
"s": 3397,
"text": "Also, we have the freedom to build custom ops by implementing TensorOperator class, as shown below."
},
{
"code": null,
"e": 3658,
"s": 3497,
"text": "We can easily load our .tflite model using the FileUtil.loadMappedFile() method. Similarly, we can load the labels from a InputStream or from the assets folder."
},
{
"code": null,
"e": 3711,
"s": 3658,
"text": "And then perform inference using Interpreter.run() ,"
}
] |
React useCallback Hook
|
The React useCallback Hook returns a memoized callback function.
Think of memoization as caching a value so that it does not need to be recalculated.
This allows us to isolate resource intensive functions so that they will not automatically run on every render.
The useCallback Hook only runs when one of its dependencies update.
This can improve performance.
The useCallback and useMemo Hooks are similar.
The main difference is that useMemo returns a memoized value and useCallback returns a memoized function.
You can learn more about useMemo in the useMemo chapter.
One reason to use useCallback is to prevent a component from re-rendering unless its props have changed.
In this example, you might think that the Todos component will not re-render unless the todos change:
This is a similar example to the one in the React.memo section.
index.js
import { useState } from "react";
import ReactDOM from "react-dom/client";
import Todos from "./Todos";
const App = () => {
const [count, setCount] = useState(0);
const [todos, setTodos] = useState([]);
const increment = () => {
setCount((c) => c + 1);
};
const addTodo = () => {
setTodos((t) => [...t, "New Todo"]);
};
return (
<>
<Todos todos={todos} addTodo={addTodo} />
<hr />
<div>
Count: {count}
<button onClick={increment}>+</button>
</div>
</>
);
};
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<App />);
Todos.js
import { memo } from "react";
const Todos = ({ todos, addTodo }) => {
console.log("child render");
return (
<>
<h2>My Todos</h2>
{todos.map((todo, index) => {
return <p key={index}>{todo}</p>;
})}
<button onClick={addTodo}>Add Todo</button>
</>
);
};
export default memo(Todos);
Try running this and click the count increment button.
You will notice that the Todos component re-renders even when the todos do not change.
Why does this not work? We are using memo, so the Todos component should not re-render since neither the todos state nor the addTodo function are changing when the count is incremented.
This is because of something called "referential equality".
Every time a component re-renders, its functions get recreated. Because of this, the addTodo function has actually changed.
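Referential equality can be demonstrated in plain JavaScript, outside of React — a function re-created with an identical body is still a different object:

```javascript
// Simulate what happens on each render: the component body runs
// again and re-creates its inline functions.
const render = () => {
  const addTodo = () => console.log("New Todo");
  return addTodo;
};

const firstAddTodo = render();  // function from "render" 1
const secondAddTodo = render(); // function from "render" 2

console.log(firstAddTodo === secondAddTodo); // false — a new reference each render
console.log(firstAddTodo === firstAddTodo); // true — the same reference
```

Because memo compares props with this kind of reference equality, a freshly created addTodo looks like a changed prop on every render.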
To fix this, we can use the useCallback hook to prevent the function from being recreated unless necessary.
Use the useCallback Hook to prevent the Todos component from re-rendering needlessly:
index.js
import { useState, useCallback } from "react";
import ReactDOM from "react-dom/client";
import Todos from "./Todos";
const App = () => {
const [count, setCount] = useState(0);
const [todos, setTodos] = useState([]);
const increment = () => {
setCount((c) => c + 1);
};
const addTodo = useCallback(() => {
setTodos((t) => [...t, "New Todo"]);
}, [todos]);
return (
<>
<Todos todos={todos} addTodo={addTodo} />
<hr />
<div>
Count: {count}
<button onClick={increment}>+</button>
</div>
</>
);
};
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<App />);
Todos.js
import { memo } from "react";
const Todos = ({ todos, addTodo }) => {
console.log("child render");
return (
<>
<h2>My Todos</h2>
{todos.map((todo, index) => {
return <p key={index}>{todo}</p>;
})}
<button onClick={addTodo}>Add Todo</button>
</>
);
};
export default memo(Todos);
Now the Todos component will only re-render when the todos prop changes.
|
[
{
"code": null,
"e": 65,
"s": 0,
"text": "The React useCallback Hook returns a memoized callback function."
},
{
"code": null,
"e": 150,
"s": 65,
"text": "Think of memoization as caching a value so that it does not need to be recalculated."
},
{
"code": null,
"e": 262,
"s": 150,
"text": "This allows us to isolate resource intensive functions so that they will not automatically run on every render."
},
{
"code": null,
"e": 330,
"s": 262,
"text": "The useCallback Hook only runs when one of its dependencies update."
},
{
"code": null,
"e": 360,
"s": 330,
"text": "This can improve performance."
},
{
"code": null,
"e": 570,
"s": 360,
"text": "The useCallback and useMemo Hooks are similar.\nThe main difference is that useMemo returns a memoized value and useCallback returns a memoized function.\nYou can learn more about useMemo in the useMemo chapter."
},
{
"code": null,
"e": 675,
"s": 570,
"text": "One reason to use useCallback is to prevent a component from re-rendering unless its props have changed."
},
{
"code": null,
"e": 777,
"s": 675,
"text": "In this example, you might think that the Todos component will not re-render unless the todos change:"
},
{
"code": null,
"e": 841,
"s": 777,
"text": "This is a similar example to the one in the React.memo section."
},
{
"code": null,
"e": 850,
"s": 841,
"text": "index.js"
},
{
"code": null,
"e": 1474,
"s": 850,
"text": "import { useState } from \"react\";\nimport ReactDOM from \"react-dom/client\";\nimport Todos from \"./Todos\";\n\nconst App = () => {\n const [count, setCount] = useState(0);\n const [todos, setTodos] = useState([]);\n\n const increment = () => {\n setCount((c) => c + 1);\n };\n const addTodo = () => {\n setTodos((t) => [...t, \"New Todo\"]);\n };\n\n return (\n <>\n <Todos todos={todos} addTodo={addTodo} />\n <hr />\n <div>\n Count: {count}\n <button onClick={increment}>+</button>\n </div>\n </>\n );\n};\n\nconst root = ReactDOM.createRoot(document.getElementById('root'));\nroot.render(<App />);\n"
},
{
"code": null,
"e": 1483,
"s": 1474,
"text": "Todos.js"
},
{
"code": null,
"e": 1811,
"s": 1483,
"text": "import { memo } from \"react\";\n\nconst Todos = ({ todos, addTodo }) => {\n console.log(\"child render\");\n return (\n <>\n <h2>My Todos</h2>\n {todos.map((todo, index) => {\n return <p key={index}>{todo}</p>;\n })}\n <button onClick={addTodo}>Add Todo</button>\n </>\n );\n};\n\nexport default memo(Todos);\n"
},
{
"code": null,
"e": 1828,
"s": 1811,
"text": "\nRun \nExample »\n"
},
{
"code": null,
"e": 1883,
"s": 1828,
"text": "Try running this and click the count increment button."
},
{
"code": null,
"e": 1970,
"s": 1883,
"text": "You will notice that the Todos component re-renders even when the todos do not change."
},
{
"code": null,
"e": 2156,
"s": 1970,
"text": "Why does this not work? We are using memo, so the Todos component should not re-render since neither the todos state nor the addTodo function are changing when the count is incremented."
},
{
"code": null,
"e": 2216,
"s": 2156,
"text": "This is because of something called \"referential equality\"."
},
{
"code": null,
"e": 2340,
"s": 2216,
"text": "Every time a component re-renders, its functions get recreated. Because of this, the addTodo function has actually changed."
},
{
"code": null,
"e": 2448,
"s": 2340,
"text": "To fix this, we can use the useCallback hook to prevent the function from being recreated unless necessary."
},
{
"code": null,
"e": 2534,
"s": 2448,
"text": "Use the useCallback Hook to prevent the Todos component from re-rendering needlessly:"
},
{
"code": null,
"e": 2543,
"s": 2534,
"text": "index.js"
},
{
"code": null,
"e": 3202,
"s": 2543,
"text": "import { useState, useCallback } from \"react\";\nimport ReactDOM from \"react-dom/client\";\nimport Todos from \"./Todos\";\n\nconst App = () => {\n const [count, setCount] = useState(0);\n const [todos, setTodos] = useState([]);\n\n const increment = () => {\n setCount((c) => c + 1);\n };\n const addTodo = useCallback(() => {\n setTodos((t) => [...t, \"New Todo\"]);\n }, [todos]);\n\n return (\n <>\n <Todos todos={todos} addTodo={addTodo} />\n <hr />\n <div>\n Count: {count}\n <button onClick={increment}>+</button>\n </div>\n </>\n );\n};\n\nconst root = ReactDOM.createRoot(document.getElementById('root'));\nroot.render(<App />);\n"
},
{
"code": null,
"e": 3211,
"s": 3202,
"text": "Todos.js"
},
{
"code": null,
"e": 3539,
"s": 3211,
"text": "import { memo } from \"react\";\n\nconst Todos = ({ todos, addTodo }) => {\n console.log(\"child render\");\n return (\n <>\n <h2>My Todos</h2>\n {todos.map((todo, index) => {\n return <p key={index}>{todo}</p>;\n })}\n <button onClick={addTodo}>Add Todo</button>\n </>\n );\n};\n\nexport default memo(Todos);\n"
},
{
"code": null,
"e": 3556,
"s": 3539,
"text": "\nRun \nExample »\n"
},
{
"code": null,
"e": 3629,
"s": 3556,
"text": "Now the Todos component will only re-render when the todos prop changes."
}
] |
Creating Software Documentation in under 10 minutes with mkDocs | by Shreya Chaudhary | Towards Data Science
|
As you work on any project, documentation is extremely helpful, almost critical. Luckily, mkDocs provides a nice, efficient way of creating documentation that is both professional-looking and easy to use.
Software documentation is writing about your code and your project, explaining what it is about and how it works. This is important for open-source projects, team projects, and even personal projects.
By documenting your code you:
Make your code explainable for others you’re collaborating with (and for yourself when you later look back at your code).Keep track of all aspects of your software.Make debugging easier later on.Can have a universal space to keep your files.
Make your code explainable for others you’re collaborating with (and for yourself when you later look back at your code).
Keep track of all aspects of your software.
Make debugging easier later on.
Can have a universal space to keep your files.
It should only take at most 10 minutes to create the skeleton of your documentation with mkDocs. Spending 10 minutes to document now is worth having to spend countless minutes struggling to debug your code and explain your code to your coworkers and yourself later.
To use mkDocs, you’ll need pip. You can find instructions on how to install pip here. If you have pip, make sure to update it, then install mkDocs. While you install mkDocs, you should also pick a theme (check out the options here). In the example, we picked the material theme.
pip install --upgrade pip
pip install mkdocs
pip install mkdocs-material
Now you’re ready to create your documentation. Run the below command, but replace PROJECT_NAME with whatever your project name is.
mkdocs new PROJECT_NAME
cd PROJECT_NAME
You should see a file called mkdocs.yaml and a folder called docs. The folder will have a single markdown file, index.md.
To run the documentation, use mkdocs serve and then go to http://127.0.0.1:8000/ in your browser.
Open mkdocs.yaml and you should see the following:
site_name: My Docs
We’re going to edit this document. First, we’ll create a generic outline; feel free to fill in the placeholder variables. The theme is what we pip installed in the past.
site_name: NAME
nav:
  - Home: index.md
  - Page2: page2.md
  - Section1:
    - Subpage1: subpage1.md
    - Subpage2: subpage2.md
theme:
  name: THEME_DOWNLOADED
For example, imagine that I wanted to create documentation for an amusement park simulation. This is what I would write in mkdocs.yaml:
site_name: Amusement Park Simulation
nav:
  - Home: index.md
  - About: about.md
  - Games:
    - "Ping Pong": games/ping.md
    - Balloon: games/balloon.md
  - Rides:
    - "Scary Coaster": rides/scary.md
    - "Drop of Doom": rides/drop.md
theme:
  name: material
Note that all the folders and directories mentioned in mkdocs.yaml will be in the docs directory. This is how my structure would look:
PROJECT_NAME/
    docs/
        index.md
        about.md
        games/
            ping.md
            balloon.md
        rides/
            scary.md
            drop.md
    mkdocs.yaml
    (add the rest of your code here)
If you don’t want several documents, categories can also be created with markdown header syntax. If you’re unfamiliar with .md files, you can learn more about the syntax here.
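For instance, a single page could carry its own sections with headers instead of separate files (a sketch; the section names are placeholders):

```markdown
# Amusement Park Simulation

## Games

Write about the games here.

## Rides

Write about the rides here.
```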
To end, we’ll host our documentation on GitHub Pages. Simply run mkdocs gh-deploy. It should create a new branch in your repository that will host your site at USERNAME.github.io/REPOSITORY_NAME.
And that’s it! You’ve successfully created documentation for your project. If you want to learn how to further customise your documentation or other mkDocs options, visit their website here. Check out my GitHub repository for this tutorial here and the deployed website here.
Make sure that you choose to use either tabs or spaces. mkDocs doesn’t allow a combination of both.
If there are spaces in the name, add quotation marks.
If you’re getting a 404 error, that means that you’re probably missing a file. Check docs to make sure your file exists there.
Note that organisations do not support GitHub pages.
|
[
{
"code": null,
"e": 376,
"s": 171,
"text": "As you work on any project, documentation is extremely helpful, almost critical. Luckily, mkDocs created a nice, efficient method of creating documentation that looks both professional and is easy-to-use."
},
{
"code": null,
"e": 577,
"s": 376,
"text": "Software documentation is writing about your code and your project, explaining what it is about and how it works. This is important for open-source projects, team projects, and even personal projects."
},
{
"code": null,
"e": 607,
"s": 577,
"text": "By documenting your code you:"
},
{
"code": null,
"e": 849,
"s": 607,
"text": "Make your code explainable for others you’re collaborating with (and for yourself when you later look back at your code).Keep track of all aspects of your software.Make debugging easier later on.Can have a universal space to keep your files."
},
{
"code": null,
"e": 971,
"s": 849,
"text": "Make your code explainable for others you’re collaborating with (and for yourself when you later look back at your code)."
},
{
"code": null,
"e": 1015,
"s": 971,
"text": "Keep track of all aspects of your software."
},
{
"code": null,
"e": 1047,
"s": 1015,
"text": "Make debugging easier later on."
},
{
"code": null,
"e": 1094,
"s": 1047,
"text": "Can have a universal space to keep your files."
},
{
"code": null,
"e": 1360,
"s": 1094,
"text": "It should only take at most 10 minutes to create the skeleton of your documentation with mkDocs. Spending 10 minutes to document now is worth having to spend countless minutes struggling to debug your code and explain your code to your coworkers and yourself later."
},
{
"code": null,
"e": 1639,
"s": 1360,
"text": "To use mkDocs, you’ll need pip. You can find instructions on how to install pip here. If you have pip, make sure to update it, then install mkDocs. While you install mkDocs, you should also pick a theme (check out the options here). In the example, we picked the material theme."
},
{
"code": null,
"e": 1710,
"s": 1639,
"text": "pip install --upgrade pippip install mkdocspip install mkdocs-material"
},
{
"code": null,
"e": 1841,
"s": 1710,
"text": "Now you’re ready to create your documentation. Run the below command, but replace PROJECT_NAME with whatever your project name is."
},
{
"code": null,
"e": 1880,
"s": 1841,
"text": "mkdocs new PROJECT_NAMEcd PROJECT_NAME"
},
{
"code": null,
"e": 2002,
"s": 1880,
"text": "You should see a file called mkdocs.yaml and a folder called docs. The folder will have a single markdown file, index.md."
},
{
"code": null,
"e": 2100,
"s": 2002,
"text": "To run the documentation, use mkdocs serve and then go to http://127.0.0.1:8000/ in your browser."
},
{
"code": null,
"e": 2151,
"s": 2100,
"text": "Open mkdocs.yaml and you should see the following:"
},
{
"code": null,
"e": 2170,
"s": 2151,
"text": "site_name: My Docs"
},
{
"code": null,
"e": 2340,
"s": 2170,
"text": "We’re going to edit this document. First, we’ll create a generic outline; feel free to fill in the placeholder variables. The theme is what we pip installed in the past."
},
{
"code": null,
"e": 2494,
"s": 2340,
"text": "site_name: NAMEnav: - Home: index.md - Page2: page2.md - Section1: - Subpage1: subpage1.md - Subpage2: subpage2.mdtheme: name: THEME_DOWNLOADED"
},
{
"code": null,
"e": 2630,
"s": 2494,
"text": "For example, imagine that I wanted to create documentation for an amusement park simulation. This is what I would write in mkdocs.yaml:"
},
{
"code": null,
"e": 2885,
"s": 2630,
"text": "site_name: Amusement Park Simulationnav: - Home: index.md - About: about.md - Games: - \"Ping Pong\": games/ping.md - Balloon: games/balloon.md - Rides: - \"Scary Coaster\": rides/scary.md - \"Drop of Doom\": rides/drop.mdtheme: name: material"
},
{
"code": null,
"e": 3020,
"s": 2885,
"text": "Note that all the folders and directories mentioned in mkdocs.yaml will be in the docs directory. This is how my structure would look:"
},
{
"code": null,
"e": 3236,
"s": 3020,
"text": "PROJECT_NAME/ docs/ index.md about.md games/ ping.md balloon.md rides/ scary.md drop.md mkdocs.yaml Add the rest of your code here"
},
{
"code": null,
"e": 3412,
"s": 3236,
"text": "If you don’t want several documents, categories can also be created with markdown header syntax. If you’re unfamiliar with .md files, you can learn more about the syntax here."
},
{
"code": null,
"e": 3608,
"s": 3412,
"text": "To end, we’ll host our documentation on GitHub Pages. Simply run mkdocs gh-deploy. It should create a new branch in your repository that will host your site at USERNAME.github.io/REPOSITORY_NAME."
},
{
"code": null,
"e": 3884,
"s": 3608,
"text": "And that’s it! You’ve successfully created documentation for your project. If you want to learn how to further customise your documentation or other mkDocs options, visit their website here. Check out my GitHub repository for this tutorial here and the deployed website here."
},
{
"code": null,
"e": 3984,
"s": 3884,
"text": "Make sure that you choose to use either tabs or spaces. mkDocs doesn’t allow a combination of both."
},
{
"code": null,
"e": 4038,
"s": 3984,
"text": "If there are spaces in the name, add quotation marks."
},
{
"code": null,
"e": 4165,
"s": 4038,
"text": "If you’re getting a 404 error, that means that you’re probably missing a file. Check docs to make sure your file exists there."
}
] |
How to use week input type in HTML?
|
The week input type is used in HTML via <input type="week">, which allows users to select a week and year.
A date picker popup appears whenever the user interacts with the week input.
Note − The input type week is not supported in Firefox and Internet Explorer. It works on Google Chrome.
You can try to run the following code to learn how to use week input type in HTML. It will show both week and year.
<!DOCTYPE html>
<html>
<head>
<title>HTML input week</title>
</head>
<body>
<form action = "" method = "get">
Details:<br><br>
Student Name<br><input type = "name" name = "sname"><br>
Training week<br><input type = "week" name = "week"><br>
<input type = "submit" value = "Submit">
</form>
</body>
</html>
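When read from script or submitted with the form, the control's value is a string of the form "YYYY-Www". A small helper can split it into its parts — a sketch; parseWeekValue is not part of any standard API:

```javascript
// Split a week-input value such as "2021-W37"
// into a numeric year and ISO week number.
function parseWeekValue(value) {
  const [year, week] = value.split("-W");
  return { year: Number(year), week: Number(week) };
}

const parsed = parseWeekValue("2021-W37");
console.log(parsed.year); // 2021
console.log(parsed.week); // 37
```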
|
[
{
"code": null,
"e": 1180,
"s": 1062,
"text": "The week input type is used in HTML using the <input type=\"week\">. Using this, allow users to select a week and year."
},
{
"code": null,
"e": 1271,
"s": 1180,
"text": "A date picker popup is visible whenever you will give a user input to the week input type."
},
{
"code": null,
"e": 1376,
"s": 1271,
"text": "Note − The input type week is not supported in Firefox and Internet Explorer. It works on Google Chrome."
},
{
"code": null,
"e": 1492,
"s": 1376,
"text": "You can try to run the following code to learn how to use week input type in HTML. It will show both week and year."
},
{
"code": null,
"e": 1869,
"s": 1492,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>HTML input week</title>\n </head>\n\n <body>\n <form action = \"\" method = \"get\">\n Details:<br><br>\n Student Name<br><input type = \"name\" name = \"sname\"><br>\n Training week<br><input type = \"week\" name = \"week\"><br>\n <input type = \"submit\" value = \"Submit\">\n </form>\n </body>\n \n</html>"
}
] |
fgetcsv() function in PHP
|
The fgetcsv() function parses a line from an open file to check for CSV fields. It returns an array containing the fields read.
fgetcsv(file_pointer, length, delimiter, enclosure, escape)
file_pointer − A valid file pointer to a file successfully opened by fopen(), popen(), or fsockopen().
length − Maximum length of a line.
delimiter − Character that specifies the field separator. Default is comma ( , )
enclosure − Set the field enclosure character. Default is a double quotation mark.
escape − Set the escape character. Default is a backslash (\).
The fgetcsv() function returns an array containing the fields read.
Let’s say we have the following “products.csv” CSV file.
laptop, keyboard, mouse
The following is an example that displays the content of CSV, that includes the products.
<?php
$file_pointer = fopen("products.csv","r");
print_r(fgetcsv($file_pointer));
fclose($file_pointer);
?>
Array
(
   [0] => laptop
   [1] => keyboard
   [2] => mouse
)
Let us see another example.
We have the following “tutorials.csv” CSV file.
Java, C#, HTML5, CSS3, Bootstrap, Android
The following is an example that displays the content of CSV “tutorials.csv”.
<?php
$file_pointer = fopen("tutorials.csv","r");
while(! feof($file_pointer)) {
print_r(fgetcsv($file_pointer));
}
fclose($file_pointer);
?>
The following is the output, showing the fields Java, C#, HTML5, CSS3, Bootstrap and Android −
Array
(
[0] => Java
[1] => C#
[2] => HTML5
[3] => CSS3
[4] => Bootstrap
[5] => Android
)
|
[
{
"code": null,
"e": 1190,
"s": 1062,
"text": "The fgetcsv() function parses a line from an open file to check for CSV fields. It returns an array containing the fields read."
},
{
"code": null,
"e": 1250,
"s": 1190,
"text": "fgetcsv(file_pointer, length, delimiter, enclosure, escape)"
},
{
"code": null,
"e": 1353,
"s": 1250,
"text": "file_pointer − A valid file pointer to a file successfully opened by fopen(), popen(), or fsockopen()."
},
{
"code": null,
"e": 1456,
"s": 1353,
"text": "file_pointer − A valid file pointer to a file successfully opened by fopen(), popen(), or fsockopen()."
},
{
"code": null,
"e": 1491,
"s": 1456,
"text": "length − Maximum length of a line."
},
{
"code": null,
"e": 1526,
"s": 1491,
"text": "length − Maximum length of a line."
},
{
"code": null,
"e": 1607,
"s": 1526,
"text": "delimiter − Character that specifies the field separator. Default is comma ( , )"
},
{
"code": null,
"e": 1688,
"s": 1607,
"text": "delimiter − Character that specifies the field separator. Default is comma ( , )"
},
{
"code": null,
"e": 1772,
"s": 1688,
"text": "enclosure − Set the field enclosure character. Defaults as a double quotation mark."
},
{
"code": null,
"e": 1856,
"s": 1772,
"text": "enclosure − Set the field enclosure character. Defaults as a double quotation mark."
},
{
"code": null,
"e": 1920,
"s": 1856,
"text": "escape − Set the escape character. Defaults as a backslash (\\)."
},
{
"code": null,
"e": 1984,
"s": 1920,
"text": "escape − Set the escape character. Defaults as a backslash (\\)."
},
{
"code": null,
"e": 2052,
"s": 1984,
"text": "The fgetcsv() function returns an array containing the fields read."
},
{
"code": null,
"e": 2109,
"s": 2052,
"text": "Let’s say we have the following “products.csv” CSV file."
},
{
"code": null,
"e": 2133,
"s": 2109,
"text": "laptop, keyboard, mouse"
},
{
"code": null,
"e": 2223,
"s": 2133,
"text": "The following is an example that displays the content of CSV, that includes the products."
},
{
"code": null,
"e": 2340,
"s": 2223,
"text": "<?php\n $file_pointer = fopen(\"products.csv\",\"r\");\n print_r(fgetcsv($file_pointer));\n fclose($file_pointer);\n?>"
},
{
"code": null,
"e": 2402,
"s": 2340,
"text": "Array\n(\n [0] => Laptop\n [1] => Keyboard\n [2] => Mouse\n)"
},
{
"code": null,
"e": 2430,
"s": 2402,
"text": "Let us see another example."
},
{
"code": null,
"e": 2478,
"s": 2430,
"text": "We have the following “tutorials.csv” CSV file."
},
{
"code": null,
"e": 2520,
"s": 2478,
"text": "Java, C#, HTML5, CSS3, Bootstrap, Android"
},
{
"code": null,
"e": 2598,
"s": 2520,
"text": "The following is an example that displays the content of CSV “tutorials.csv”."
},
{
"code": null,
"e": 2758,
"s": 2598,
"text": "<?php\n $file_pointer = fopen(\"tutorials.csv\",\"r\");\n while(! feof($file_pointer)) {\n print_r(fgetcsv($file_pointer));\n }\n fclose($file_pointer);\n?>"
},
{
"code": null,
"e": 2829,
"s": 2758,
"text": "The following is the output: Java, C#, HTML5, CSS3, Bootstrap, Android"
},
{
"code": null,
"e": 2936,
"s": 2829,
"text": "Array\n(\n [0] => Java\n [1] => C#\n [2] => HTML5\n [3] => CSS3\n [4] => Bootstrap\n [5] => Android\n)"
}
] |
Delete Tuple Elements in Python
|
Removing individual tuple elements is not possible. There is, of course, nothing wrong with putting together another tuple with the undesired elements discarded.
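As a sketch of that approach (Python 3 syntax), you can build a new tuple that omits the unwanted element:

```python
tup = ('physics', 'chemistry', 1997, 2000)

# Tuples are immutable, so instead of removing 1997 in place
# we assemble a fresh tuple without it.
without_year = tuple(x for x in tup if x != 1997)

print(without_year)  # ('physics', 'chemistry', 2000)
```

The original tuple is left untouched; only the new tuple lacks the discarded element.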
To explicitly remove an entire tuple, just use the del statement, as in the following Python 2 example −
#!/usr/bin/python
tup = ('physics', 'chemistry', 1997, 2000);
print tup;
del tup;
print "After deleting tup : ";
print tup;
This produces the following result. Note an exception raised, this is because after del tup tuple does not exist any more −
('physics', 'chemistry', 1997, 2000)
After deleting tup :
Traceback (most recent call last):
File "test.py", line 9, in <module>
print tup;
NameError: name 'tup' is not defined
|
[
{
"code": null,
"e": 1224,
"s": 1062,
"text": "Removing individual tuple elements is not possible. There is, of course, nothing wrong with putting together another tuple with the undesired elements discarded."
},
{
"code": null,
"e": 1290,
"s": 1224,
"text": "To explicitly remove an entire tuple, just use the del statement."
},
{
"code": null,
"e": 1301,
"s": 1290,
"text": " Live Demo"
},
{
"code": null,
"e": 1425,
"s": 1301,
"text": "#!/usr/bin/python\ntup = ('physics', 'chemistry', 1997, 2000);\nprint tup;\ndel tup;\nprint \"After deleting tup : \";\nprint tup;"
},
{
"code": null,
"e": 1549,
"s": 1425,
"text": "This produces the following result. Note an exception raised, this is because after del tup tuple does not exist any more −"
},
{
"code": null,
"e": 1726,
"s": 1549,
"text": "('physics', 'chemistry', 1997, 2000)\nAfter deleting tup :\nTraceback (most recent call last):\nFile \"test.py\", line 9, in <module>\nprint tup;\nNameError: name 'tup' is not defined"
}
] |
LESS - Command Line Usage
|
Using the command line, we can compile the .less file to .css.
The following command is used to install lessc with npm(node package manager) to make the lessc available globally.
npm install less -g
You can also add a specific version after the package name. For example, npm install less@1.6.2 -g.
The following command is used to install the latest version of lessc in your project folder.
npm i less --save-dev
It is also added to the devDependencies in your project package.json.
It is tagged as beta when the lessc structure is published to npm, since new functionality is developed periodically. lessc -v is used to get the current version.
The commit-ish is to be specified when you install an unpublished version of lessc, and the instructions for identifying a git URL as a dependency need to be followed. This will ensure that you are using the correct version of lessc for your project.
The repository includes a binary at bin/lessc. It works with Windows, OS X and Node.js on *nix.
Input is read from stdin when the source is set to a dash or hyphen (-).
lessc [option option = parameter ...] [destination]
For instance, we can compile .less to .css by using the following command −
lessc stylesheet.less stylesheet.css
We can compile .less to .css by and minify the result using the following command.
lessc -x stylesheet.less stylesheet.css
Following table lists out options used in command line usage −
Help
Help message is displayed with the options available.
lessc --help
lessc -h
Include Paths
It includes the available paths to the library. These paths can be referenced simply and relatively in the Less files. The paths in windows are separated by colon(:) or semicolon(;).
lessc --include-path = PATH1;PATH2
Makefile
It generates a makefile import dependencies list to stdout as output.
lessc -M
lessc --depends
No Color
It disables colorized output.
lessc --no-color
No IE Compatibility
It disables IE compatibility checks.
lessc --no-ie-compat
Disable Javascript
It disables the javascript in less files.
lessc --no-js
Lint
It checks the syntax and reports error without any output.
lessc --lint
lessc -l
Silent
It forcibly stops the display of error messages.
lessc --silent
lessc -s
Strict Imports
It force evaluates imports.
lessc --strict-imports
Allow Imports from Insecure HTTPS Hosts
It imports from the insecure HTTPS hosts.
lessc --insecure
Version
It displays the version number and exits.
lessc --version
lessc -v
Compress
It helps in removing the whitespaces and compress the output.
lessc -x
lessc --compress
Source Map Output Filename
It generates the sourcemap in less. If sourcemap option is defined without filename then it will use the extension map with the Less file name as source.
lessc --source-map
lessc --source-map=file.map
Source Map Rootpath
Rootpath is specified and should be added to Less file paths inside the sourcemap and also to the map file which is specified in your output css.
lessc --source-map-rootpath = dev-files/
Source Map Basepath
A path is specified which has to be removed from the output paths. Basepath is opposite of the rootpath option.
lessc --source-map-basepath = less-files/
Source Map Less Inline
All the Less files should be included in the sourcemap.
lessc --source-map-less-inline
Source Map Map Inline
It specifies that in the output css the map file should be inline.
lessc --source-map-map-inline
Source Map URL
A URL is allowed to override the points in the map file in the css.
lessc --source-map-url = ../my-map.json
Rootpath
It sets paths for URL rewriting in relative imports and urls.
lessc -rp=resources/
lessc --rootpath=resources/
Relative URLs
In imported files, the URL are re-written so that the URL is always relative to the base file.
lessc -ru
lessc --relative-urls
Strict Math
With this on, math is processed only within parentheses. By default, it's off.
lessc -sm = on
lessc --strict-math = on
Strict Units
With strict units on, operations on mixed units are treated as errors; by default the check is off and mixed units are allowed.
lessc -su = on
lessc --strict-units = on
Global Variable
A variable is defined which can be referenced by the file.
lessc --global-var = "background = green"
Modify Variable
This is unlike global variable option; it moves the declaration at the end of your less file.
lessc --modify-var = "background = green"
URL Arguments
It allows an argument to be specified that is appended to every URL.
lessc --url-args = "arg736357"
Line Numbers
Inline source-mapping is generated.
lessc --line-numbers = comments
lessc --line-numbers = mediaquery
lessc --line-numbers = all
Plugin
It loads the plugin.
lessc --clean-css
lessc --plugin = clean-css = "advanced"
How to Create Decorators in Python That You Can Actually Use | Towards Data Science
One of the magical things you can do with Python and many other languages is decorating functions. Decorators can modify the function’s inputs, its output, and the very behavior of the function itself. And the best part is you can do all of those with only one line of code and without modifying the function syntax at all!
To learn how decorators work and how you can create one for yourself, you need to know some of the concepts of Python.
Therefore, before we get to decorators, we are gonna go knee-deep into learning some of the internals of Python such as scope and closures. If you are familiar with these concepts, feel free to skip them and go to section 5 where all the fun starts!
Introduction
Functions are objects
Scope
Closures
Decorators
Real-world Examples With Decorators
Decorators That Take Arguments
Preserving the Decorated Function’s Metadata
Conclusion
One of the many things you will love about Python is its ability to represent anything as objects and functions are no exception. For people who are first reading this, passing a function as an argument to another may seem strange but it is completely legal to do so:
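For example (a minimal sketch; the names my_func and caller are illustrative, not from the original snippet):

```python
def my_func():
    print("Printing the function's argument")

def caller(func):
    # 'func' is an ordinary object here -- we can call it like any function
    func()

# Pass my_func itself (no parentheses!) as an argument
caller(my_func)
```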
An important note that will be useful in later sections: using a function with parentheses, my_func(), is called 'calling' the function, while writing it without them, my_func, is called referencing it. As you have seen, print(my_func) prints the function's address in memory.
As objects go, functions are absolutely the same as:
str
int, float
pandas.DataFrame
list, tuple, dict
modules: os, datatime, numpy
You may assign functions to a new variable and use it to call the function:
>>> new_func = my_func
>>> new_func()
Printing the function's argument
Now this variable also contains the function’s attributes:
You can also store each function in other objects such as lists, dictionaries, and call them:
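A short sketch of what this can look like (square and cube are made-up helper functions):

```python
def square(x):
    return x * x

def cube(x):
    return x ** 3

# Functions stored in a list and a dictionary, like any other object
funcs = [square, cube]
ops = {"square": square, "cube": cube}

print(funcs[0](4))     # calls square through the list
print(ops["cube"](3))  # calls cube through the dictionary
```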
Consider this conversation between Bob and Jon:
Bob: ‘Jon, why did not you come to the lesson yesterday?’
Jon: ‘I had a flu...’
Not the best of stories but when Bob asks the reason for Jon’s absence in yesterday’s class, we know he is referring to the Jon standing next to him not some random Jon in another country. As humans, it is not difficult to notice this but programming languages use something called scope to tell which name we are referring to in our programs.
In Python, names can be variables, function names, module names, etc.
Consider these two variables:
>>> a = 24
>>> b = 42
>>> print(a)
24
Here, print had no trouble telling that we are referring to the a we just defined. Now consider this:
>>> def foo():
...     a = 100
...     print(a)
What do you think will happen if we run foo? Will it print 24 or 100?
>>> foo()
100
How did Python differentiate between the a we defined in the beginning or in the function? This is where scope gets interesting, because we are introducing layers of scope:
The above image shows the scope for this little script:
The global scope is the overall scope of your script/program. Variables, functions, etc. with the same indentation level as a and b defined in the beginning will be in the global scope. For example, foo function is in the global scope but its variable a is in the scope that is local to foo.
In one global scope, there can be many local scopes. For example, each temporary variables in for loops and list comprehensions, return values of context managers will be local inside their code block and cannot be accessed from the global scope.
So, a rule of thumb is that the Python interpreter will not be able to access a name defined in a smaller scope than the current one.
There is also a bigger level of scope outside global:
The built-in scope contains all the modules and packages you installed with Python, pip or conda.
Now, let’s explore another case. In our foo function, we want to modify the value of global a. We want it to be a string but if we write a = 'some text' inside the foo, Python will just create a new variable without modifying the global a.
Python provides us with a keyword that lets us specify we are referring to names in the global scope:
Writing global <name> will let us modify the values of names in the global scope.
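A minimal sketch of the global keyword in action, reusing the variable a from above:

```python
a = 24  # defined in the global scope

def foo():
    global a         # refer to the global 'a', not a new local one
    a = "some text"  # this now modifies the global variable

foo()
print(a)  # 'some text'
```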
BTW, bad news, I left out one level of scope in the above image. Between global and local, there is one level we did not cover:
nonlocal scope comes into play when we have, for example, nested functions:
In the nested function outer, we first create a variable called my_var and assign it to the string 'Python'. Then we decide to create a new inner function and want to assign my_var a new value, 'Data Science', and print it. But if we run it, we see that my_var is still assigned to 'Python'. We cannot use the global keyword since my_var is not in the global scope.
For such cases, you can use nonlocal keyword which gives access to all the names in the scope of the outer function (nonlocal) but not the global:
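A reconstruction of the outer/inner example using nonlocal (the return value is added here so the result can be checked):

```python
def outer():
    my_var = "Python"

    def inner():
        nonlocal my_var          # refer to my_var in outer's scope
        my_var = "Data Science"  # modifies outer's variable, not a new local
        print(my_var)

    inner()
    print(my_var)  # prints 'Data Science' now
    return my_var

result = outer()
```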
In conclusion, scope tells the Python interpreter where to look for names in our program. There can be four levels of scope in a single script/program:
Built-in: all the package names installed with Python, pip and conda
Global: general scope, all names that have no indentation in the script
Local: contains local variables in code blocks such as functions, loops, list comprehensions, etc.
Nonlocal: an extra level of scope between global and local in the case of nested functions
Before I explain how decorators work, we need to talk about closures too. Let’s start with an example:
We create a nested function bar inside foo and return it. bar tries to print the value of x:
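A minimal reconstruction of that snippet:

```python
def foo():
    x = 42  # local to foo

    def bar():
        print(x)  # uses x from the enclosing (nonlocal) scope

    return bar  # return the inner function itself, without calling it
```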
When we write var = foo(), we are assigning the bar function to var. Now var can be used to call bar. When we call it, it prints out 42.
>>> var()
42
But wait a minute, how does var know anything about x? x is defined in foo's scope, not bar's. You would not expect x to be accessible outside the scope of foo. That's where closures come in.
Closure is a built-in memory of a function that contains all the nonlocal names (in a tuple) the function needs to run!
So, when foo returned bar, it attached all the nonlocal variables bar needs to run outside of the foo's scope. The closure of a function can be accessed with .__closure__ attribute:
Once you access the closure of a function as a tuple, it will contain elements called cells with the value of a single nonlocal argument. There can be as many cells inside closure as the function needs:
In this example, the variables x, y, z are nonlocal variables to child so they get added to the function's closure. Any other names such as value and outside are not in the closure because they are not in the nonlocal scope.
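A sketch of such a function (the variable names follow the description above; the exact snippet is not from the original):

```python
outside = "global, so not stored in the closure"

def parent():
    x, y, z = 1, 2, 3  # nonlocal from child's point of view

    def child():
        value = "local to child, so not in the closure"
        print(x, y, z, value, outside)

    return child

func = parent()
# Exactly one cell per captured nonlocal name: x, y and z
print(len(func.__closure__))
print([cell.cell_contents for cell in func.__closure__])
```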
Now, consider this trickier example:
We create a parent function which takes a single argument, and a nested function child which prints whatever value is passed to parent. We call parent with var ('dummy') and assign the result to func. If we call it:
>>> func()
dummy
As expected, it prints out ‘dummy’. Now let’s delete var and call func again:
>>> # Delete 'var'
>>> del var
>>> # Call 'func' again
>>> func()
dummy
It still prints out ‘dummy’. Why?
You guessed it, it got added to the closure! So, when a value from outer levels of scope gets added to closure, it will stay there unchanged even if we delete the original value!
>>> func.__closure__[0].cell_contents
'dummy'
If we did not delete var and changed its value, the closure would still contain its old value:
This concept is going to be important when we talk about decorators in the next section.
Let’s go over some of the concepts to make sure you understand:
The closure is an internal memory of a nested function. It contains all the nonlocal variables stored in a tuple which are essential for the function to run.
Once a value is stored in a closure, it can still be accessed even if the original value gets deleted or modified
A nested function is a function defined in another and follows this general pattern:
>>> def parent(arg):
...     def child():
...         print(arg)
...     return child
Decorators are functions that modify another function. They can change the function’s inputs, its output or even its behavior.
You may have seen decorators when you were creating custom context managers or when you were first introduced to Flask (remember @app.route?)
Below, we created a function that squares whatever argument is passed and we are decorating it with add_one. add_one adds 1 to the argument of the passed function:
To use a function as a decorator, just put @ symbol followed by the decorating function's name right above the function definition. When we passed 5 to the decorated square function, instead of returning 25, it returns 36 because add_one takes the argument of square, which is 5, and adds one to it and inserts back into our function:
>>> square(10)
121
In this section, we will build add_one together.
First, let’s start with add_one that only returns whatever function passed to it:
For our decorator to return a modified function, it is usually helpful to define a nested function to return:
Our decorator is still doing nothing. Inside add_one we defined a nested child function. child only takes one argument and calls whatever function is passed to add_one. Then, add_one returns child.
In this case of nested child function, we are assuming func passed to add_one takes exactly the same number of arguments as child.
Now, we can make all the magic happen inside the child function. Instead of simply calling the func, we want to modify its arguments by adding 1 to them:
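Putting the steps together, a reconstruction of the finished decorator and the square function it will decorate:

```python
def add_one(func):
    def child(a):
        # call the wrapped function with 1 added to its argument
        return func(a + 1)
    return child

def square(a):
    return a * a
```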
Notice func(a + 1)? It is calling whatever passed to add_one with 1 added to the argument. This time, instead of creating a new variable to store child, we will override square:
>>> square = add_one(square)
>>> square(5)
36
Now it is returning 36 instead of 25 when we pass 5.
How can it use square function even when we override it? Good thing we learned closures because the old square is now inside the closure of child:
>>> square.__closure__[0].cell_contents
<function __main__.square(a)>
At this point, our add_one function is ready to be used as a decorator. We can just put @add_one right above the definition of square and see the magic happen:
I think it would be a shame if I did not show you how to create a timer decorator:
This time, notice how we are using *args and **kwargs. They are used when we don't know the exact number of positional and keyword arguments in the function which is perfect in this case since we may use timer on any kind of function.
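A sketch of the timer decorator (the message format is modeled on the output shown in this article; rounding to four decimals is an assumption):

```python
import time

def timer(func):
    # *args/**kwargs let us decorate a function with any signature
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        elapsed = round(time.time() - start, 4)
        print(f"{func.__name__} took {elapsed} seconds to run!")
        return result
    return wrapper

@timer
def sleep(n=5):
    """Sleep for n seconds."""
    time.sleep(n)

sleep(0.1)  # prints something like: sleep took 0.1002 seconds to run!
```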
Now, you can use this decorator before any function to find out how long it runs. No repeated code!
The next very useful decorator would be a caching decorator. Caching decorators are great for computation-heavy functions which may be called with the same arguments many times. Caching the results of each function call in a closure will let us immediately return the result if the decorated function gets called with known values:
In the main cache function, we want to create a dictionary which stores all arguments in tuples as keys along with their results. The caching dictionary would look like this:
cache = {
    (arg1, arg2, arg3): func(arg1, arg2, arg3)
}
We can use tuples of arguments as keys because tuples are immutable objects.
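A sketch of the caching decorator built on that idea (keyword arguments are ignored here for simplicity):

```python
def cache(func):
    results = {}  # maps tuples of arguments to computed results

    def wrapper(*args):
        if args not in results:
            # unseen arguments: compute once and store the result
            results[args] = func(*args)
        return results[args]  # known arguments: return instantly

    return wrapper

@cache
def square(x):
    return x * x

print(square(4))  # computed on the first call
print(square(4))  # served from the cache
```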
Now, let’s see what happens if we decorate our sleeping function with both cache and timer:
First, let’s try to sleep for 10 seconds:
>>> sleep(10)
sleep took 10.0001 seconds to run!
As expected, it took 10 seconds to run. Now, what do you think will happen if we run sleep with 10 as an argument again:
>>> sleep(10)
sleep took 0.0 seconds to run!
It took 0 seconds! Our caching decorator works!
So far, our knowledge of decorators is pretty solid. However, the real power of decorators comes when you enable them to take arguments.
Consider this decorator which checks if the function’s result is of type str:
We call it on a dummy function to check that it works:
It is working. However, wouldn’t be cool if we had a way to check the function’s return type for any data type? Here, check this out:
With this type of decorator, you could write data type checks for all your functions. Let’s build it together from scratch.
First, let’s just create a simple decorator that calls whatever function passed to it:
How do we tweak this piece of code so that it also accepts a custom data type and performs a check for func's results? We cannot add an extra argument to decorator because decorators should only take a function as an argument.
The way to get around this problem is to define an even bigger parent function that returns a decorator. That way, we can pass any argument to the parent function which, in turn, can be used in the decorator:
Note how we just wrapped our decorator in a bigger parent function? All it does is take a data type as an argument and pass it to our decorator and return it. In wrapper, we wrote type(result) == dtype which evaluates to True or False whether data types match or not. Now you can use this function to perform type checks for any function:
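A reconstruction of that pattern (the name check_dtype and raising AssertionError on a mismatch are assumptions based on the description):

```python
def check_dtype(dtype):
    """Parent function: takes a data type and returns a decorator."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            # type(result) == dtype evaluates to True or False
            assert type(result) == dtype, (
                f"{func.__name__} should return {dtype.__name__}, "
                f"got {type(result).__name__}"
            )
            return result
        return wrapper
    return decorator

@check_dtype(str)
def greet(name):
    return f"Hello, {name}!"

print(greet("Ada"))  # passes the check and prints the greeting
```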
Up until this point, we never checked one thing — is the decorated function preserved in all ways? For example, let’s go back to our sleep function which was decorated with timer:
Let’s call it and check its metadata:
We check 3 metadata attributes of the function. The first two returned None but they should have returned something. I mean, sleep had a long docstring and a default argument which was equal to 5. Where did they go? We got the answer when we called __name__ and got wrapper for the function name.
If we examine the definition of timer:
We can see that we are not actually returning the passed function but returning the inner wrapper instead. Obviously wrapper does not have a docstring or any default arguments; that was the reason we got None above.
To solve this problem, Python provides us with a helpful function from functools module:
Using wraps on the wrapper function lets us keep all the metadata attached to func. Notice how we are passing func to wraps above the function definition.
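A sketch of the fixed timer (same logic as a plain timer decorator, with the wraps line added):

```python
import time
from functools import wraps

def timer(func):
    @wraps(func)  # copy func's metadata (name, docstring, ...) onto wrapper
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {round(time.time() - start, 4)} seconds to run!")
        return result
    return wrapper

@timer
def sleep(n=5):
    """Sleep for n seconds."""
    time.sleep(n)

print(sleep.__name__)  # 'sleep', not 'wrapper'
print(sleep.__doc__)   # the docstring is preserved
```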
If we use this modified version of timer, we will see that it works as expected:
Using wraps(func) is a good practice for writing decorators, so go and add it to all the decorators we defined today!
After reading this post, you have a pretty strong knowledge of creating decorators. More importantly, you know how they work and the way they work.
As a final point, I suggest using decorators whenever you have repeated code that performs similar tasks on your functions. Decorators can be another step to making your code DRY (Don’t Repeat Yourself).
Read more articles related to the topic:
"text": "Global: general scope, all names that have no indentation in the script"
},
{
"code": null,
"e": 5589,
"s": 5490,
"text": "Local: contains local variables in code blocks such as functions, loops, list comprehensions, etc."
},
{
"code": null,
"e": 5680,
"s": 5589,
"text": "Nonlocal: an extra level of scope between global and local in the case of nested functions"
},
{
"code": null,
"e": 5783,
"s": 5680,
"text": "Before I explain how decorators work, we need to talk about closures too. Let’s start with an example:"
},
{
"code": null,
"e": 5876,
"s": 5783,
"text": "We create a nested function bar inside foo and return it. bar tries to print the value of x:"
},
{
"code": null,
"e": 6013,
"s": 5876,
"text": "When we write var = foo(), we are assigning the bar function to var. Now var can be used to call bar. When we call it, it prints out 42."
},
{
"code": null,
"e": 6025,
"s": 6013,
"text": ">>> var()42"
},
{
"code": null,
"e": 6219,
"s": 6025,
"text": "But wait a minute, how does var know anything about x? x is defined in foo's scope not bar's. You would think that x would be accessible outside the scope of foo. That's where closures come in."
},
{
"code": null,
"e": 6339,
"s": 6219,
"text": "Closure is a built-in memory of a function that contains all the nonlocal names (in a tuple) the function needs to run!"
},
{
"code": null,
"e": 6521,
"s": 6339,
"text": "So, when foo returned bar, it attached all the nonlocal variables bar needs to run outside of the foo's scope. The closure of a function can be accessed with .__closure__ attribute:"
},
{
"code": null,
"e": 6724,
"s": 6521,
"text": "Once you access the closure of a function as a tuple, it will contain elements called cells with the value of a single nonlocal argument. There can be as many cells inside closure as the function needs:"
},
{
"code": null,
"e": 6949,
"s": 6724,
"text": "In this example, the variables x, y, z are nonlocal variables to child so they get added to the function's closure. Any other names such as value and outside are not in the closure because they are not in the nonlocal scope."
},
{
"code": null,
"e": 6986,
"s": 6949,
"text": "Now, consider this trickier example:"
},
{
"code": null,
"e": 7198,
"s": 6986,
"text": "We create a parent function which takes a single argument and a nested function child which prints whatever value passed to parent. We call parent with var ('dummy') and assign the result to func. If we call it:"
},
{
"code": null,
"e": 7214,
"s": 7198,
"text": ">>> func()dummy"
},
{
"code": null,
"e": 7292,
"s": 7214,
"text": "As expected, it prints out ‘dummy’. Now let’s delete var and call func again:"
},
{
"code": null,
"e": 7358,
"s": 7292,
"text": ">>> # Delete 'var'>>> del var>>> # call func again>>> func()dummy"
},
{
"code": null,
"e": 7392,
"s": 7358,
"text": "It still prints out ‘dummy’. Why?"
},
{
"code": null,
"e": 7571,
"s": 7392,
"text": "You guessed it, it got added to the closure! So, when a value from outer levels of scope gets added to closure, it will stay there unchanged even if we delete the original value!"
},
{
"code": null,
"e": 7616,
"s": 7571,
"text": ">>> func.__closure__[0].cell_contents‘dummy’"
},
{
"code": null,
"e": 7711,
"s": 7616,
"text": "If we did not delete var and changed its value, the closure would still contain its old value:"
},
{
"code": null,
"e": 7800,
"s": 7711,
"text": "This concept is going to be important when we talk about decorators in the next section."
},
{
"code": null,
"e": 7864,
"s": 7800,
"text": "Let’s go over some of the concepts to make sure you understand:"
},
{
"code": null,
"e": 8022,
"s": 7864,
"text": "The closure is an internal memory of a nested function. It contains all the nonlocal variables stored in a tuple which are essential for the function to run."
},
{
"code": null,
"e": 8150,
"s": 8022,
"text": "Once a value is stored in a closure, it can be accessed but cannot be overridden if the original value gets deleted or modified"
},
{
"code": null,
"e": 8235,
"s": 8150,
"text": "A nested function is a function defined in another and follows this general pattern:"
},
{
"code": null,
"e": 8327,
"s": 8235,
"text": ">>> def parent(arg): ... def child():... print(arg)... ... return child"
},
{
"code": null,
"e": 8454,
"s": 8327,
"text": "Decorators are functions that modify another function. They can change the function’s inputs, its output or even its behavior."
},
{
"code": null,
"e": 8596,
"s": 8454,
"text": "You may have seen decorators when you were creating custom context managers or when you were first introduced to Flask (remember @app.route?)"
},
{
"code": null,
"e": 8760,
"s": 8596,
"text": "Below, we created a function that squares whatever argument is passed and we are decorating it with add_one. add_one adds 1 to the argument of the passed function:"
},
{
"code": null,
"e": 9095,
"s": 8760,
"text": "To use a function as a decorator, just put @ symbol followed by the decorating function's name right above the function definition. When we passed 5 to the decorated square function, instead of returning 25, it returns 36 because add_one takes the argument of square, which is 5, and adds one to it and inserts back into our function:"
},
{
"code": null,
"e": 9113,
"s": 9095,
"text": ">>> square(10)121"
},
{
"code": null,
"e": 9162,
"s": 9113,
"text": "In this section, we will build add_one together."
},
{
"code": null,
"e": 9244,
"s": 9162,
"text": "First, let’s start with add_one that only returns whatever function passed to it:"
},
{
"code": null,
"e": 9354,
"s": 9244,
"text": "For our decorator to return a modified function, it is usually helpful to define a nested function to return:"
},
{
"code": null,
"e": 9549,
"s": 9354,
"text": "Our decorator is still doing nothing. Inside add_one we defined a nested child function. child only takes one argument and calls whatever function passed to add_one. Then, add_one returns child."
},
{
"code": null,
"e": 9680,
"s": 9549,
"text": "In this case of nested child function, we are assuming func passed to add_one takes exactly the same number of arguments as child."
},
{
"code": null,
"e": 9834,
"s": 9680,
"text": "Now, we can make all the magic happen inside the child function. Instead of simply calling the func, we want to modify its arguments by adding 1 to them:"
},
{
"code": null,
"e": 10012,
"s": 9834,
"text": "Notice func(a + 1)? It is calling whatever passed to add_one with 1 added to the argument. This time, instead of creating a new variable to store child, we will override square:"
},
{
"code": null,
"e": 10056,
"s": 10012,
"text": ">>> square = add_one(square)>>> square(5)36"
},
{
"code": null,
"e": 10109,
"s": 10056,
"text": "Now it is returning 36 instead of 25 when we pass 5."
},
{
"code": null,
"e": 10256,
"s": 10109,
"text": "How can it use square function even when we override it? Good thing we learned closures because the old square is now inside the closure of child:"
},
{
"code": null,
"e": 10325,
"s": 10256,
"text": ">>> square.__closure__[0].cell_contents<function __main__.square(a)>"
},
{
"code": null,
"e": 10485,
"s": 10325,
"text": "At this point, our add_one function is ready to be used as a decorator. We can just put @add_one right above the definition of square and see the magic happen:"
},
{
"code": null,
"e": 10568,
"s": 10485,
"text": "I think it would be a shame if I did not show you how to create a timer decorator:"
},
{
"code": null,
"e": 10803,
"s": 10568,
"text": "This time, notice how we are using *args and **kwargs. They are used when we don't know the exact number of positional and keyword arguments in the function which is perfect in this case since we may use timer on any kind of function."
},
{
"code": null,
"e": 10903,
"s": 10803,
"text": "Now, you can use this decorator before any function to find out how long it runs. No repeated code!"
},
{
"code": null,
"e": 11235,
"s": 10903,
"text": "The next very useful decorator would be a caching decorator. Caching decorators are great for computation-heavy functions which may be called with the same arguments many times. Caching the results of each function call in a closure will let us immediately return the result if the decorated function gets called with known values:"
},
{
"code": null,
"e": 11404,
"s": 11235,
"text": "In the main, cache function, we want to create a dictionary which stores all arguments in tuples as keys and their results. The caching dictionary would look like this:"
},
{
"code": null,
"e": 11461,
"s": 11404,
"text": "cache = { (arg1, arg2, arg3): func(arg1, arg2, arg3)}"
},
{
"code": null,
"e": 11538,
"s": 11461,
"text": "We can use tuples of arguments as keys because tuples are immutable objects."
},
{
"code": null,
"e": 11630,
"s": 11538,
"text": "Now, let’s see what happens if we decorate our sleeping function with both cache and timer:"
},
{
"code": null,
"e": 11672,
"s": 11630,
"text": "First, let’s try to sleep for 10 seconds:"
},
{
"code": null,
"e": 11720,
"s": 11672,
"text": ">>> sleep(10)sleep took 10.0001 seconds to run!"
},
{
"code": null,
"e": 11841,
"s": 11720,
"text": "As expected, it took 10 seconds to run. Now, what do you think will happen if we run sleep with 10 as an argument again:"
},
{
"code": null,
"e": 11885,
"s": 11841,
"text": ">>> sleep(10)sleep took 0.0 seconds to run!"
},
{
"code": null,
"e": 11933,
"s": 11885,
"text": "It took 0 seconds! Our caching decorator works!"
},
{
"code": null,
"e": 12070,
"s": 11933,
"text": "So far, our knowledge of decorators is pretty solid. However, the real power of decorators comes when you enable them to take arguments."
},
{
"code": null,
"e": 12148,
"s": 12070,
"text": "Consider this decorator which checks if the function’s result is of type str:"
},
{
"code": null,
"e": 12203,
"s": 12148,
"text": "We call it on a dummy function to check that it works:"
},
{
"code": null,
"e": 12337,
"s": 12203,
"text": "It is working. However, wouldn’t be cool if we had a way to check the function’s return type for any data type? Here, check this out:"
},
{
"code": null,
"e": 12461,
"s": 12337,
"text": "With this type of decorator, you could write data type checks for all your functions. Let’s build it together from scratch."
},
{
"code": null,
"e": 12548,
"s": 12461,
"text": "First, let’s just create a simple decorator that calls whatever function passed to it:"
},
{
"code": null,
"e": 12775,
"s": 12548,
"text": "How do we tweak this piece of code so that it also accepts a custom data type and performs a check for func's results? We cannot add an extra argument to decorator because decorators should only take a function as an argument."
},
{
"code": null,
"e": 12984,
"s": 12775,
"text": "The way to get around this problem is to define an even bigger parent function that returns a decorator. That way, we can pass any argument to the parent function which, in turn, can be used in the decorator:"
},
{
"code": null,
"e": 13323,
"s": 12984,
"text": "Note how we just wrapped our decorator in a bigger parent function? All it does is take a data type as an argument and pass it to our decorator and return it. In wrapper, we wrote type(result) == dtype which evaluates to True or False whether data types match or not. Now you can use this function to perform type checks for any function:"
},
{
"code": null,
"e": 13503,
"s": 13323,
"text": "Up until this point, we never checked one thing — is the decorated function preserved in all ways? For example, let’s go back to our sleep function which was decorated with timer:"
},
{
"code": null,
"e": 13541,
"s": 13503,
"text": "Let’s call it and check its metadata:"
},
{
"code": null,
"e": 13838,
"s": 13541,
"text": "We check 3 metadata attributes of the function. The first two returned None but they should have returned something. I mean, sleep had a long docstring and a default argument which was equal to 5. Where did they go? We got the answer when we called __name__ and got wrapper for the function name."
},
{
"code": null,
"e": 13877,
"s": 13838,
"text": "If we examine the definition of timer:"
},
{
"code": null,
"e": 14082,
"s": 13877,
"text": "We can see that we are not actually returning the passed function but returning inside wrapper. Obviously wrapper does not have a docstring or any default arguments, that was the reason we got None above."
},
{
"code": null,
"e": 14171,
"s": 14082,
"text": "To solve this problem, Python provides us with a helpful function from functools module:"
},
{
"code": null,
"e": 14326,
"s": 14171,
"text": "Using wraps on the wrapper function lets us keep all the metadata attached to func. Notice how we are passing func to wraps above the function definition."
},
{
"code": null,
"e": 14407,
"s": 14326,
"text": "If we use this modified version of timer, we will see that it works as expected:"
},
{
"code": null,
"e": 14525,
"s": 14407,
"text": "Using wraps(func) is a good practice for writing decorators, so go and add it to all the decorators we defined today!"
},
{
"code": null,
"e": 14673,
"s": 14525,
"text": "After reading this post, you have a pretty strong knowledge of creating decorators. More importantly, you know how they work and the way they work."
},
{
"code": null,
"e": 14877,
"s": 14673,
"text": "As a final point, I suggest using decorators whenever you have repeated code that performs similar tasks on your functions. Decorators can be another step to making your code DRY (Don’t Repeat Yourself)."
}
] |
Goldman Sachs Interview Experience for Java Developer (3+ Years Experienced) - GeeksforGeeks
|
14 Dec, 2020
Online coding round (Hackerrank): 2 easy coding questions; Time: 120 minutes
Game of Book Cricket.
Simple string encoding-decoding problem. https://leetcode.com/discuss/interview-question/334671/goldman-sacks-july-2019-hackerrank-2
After clearing the online test, resume shortlisting takes place and if your resume is shortlisted, further rounds will be conducted.
Round 1 (Coderpad + voice call): Two easy-medium level questions were discussed and you need to write the complete runnable code and pass all the test cases.
Given a String and two words (which occur in the given string), find the minimum distance between two words. Distance between two words is defined as the number of characters between the given two words’ middle characters. The brute-force approach was already implemented but it had some logical bugs, and because of which sample test cases were failing. The objective was to find and fix those bugs and then to add some new test cases and write a code for those test cases as well.
Simple DFS + DP in a 2D matrix to find the minimum cost path.
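The second Round 1 problem, minimum cost path in a 2D matrix, can be sketched with a small bottom-up DP. This is a generic sketch; the grid values and the right/down movement rule are assumptions, since the exact constraints from the round are not given:

```javascript
// Minimum cost to walk from the top-left to the bottom-right of a grid,
// moving only right or down, summing the cell costs along the way.
// A generic DP sketch; the actual interview constraints may have differed.
function minCostPath(grid) {
  const rows = grid.length, cols = grid[0].length;
  const dp = grid.map(row => row.slice()); // dp[r][c] = cheapest cost to reach (r, c)
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (r === 0 && c === 0) continue;
      const fromTop = r > 0 ? dp[r - 1][c] : Infinity;
      const fromLeft = c > 0 ? dp[r][c - 1] : Infinity;
      dp[r][c] += Math.min(fromTop, fromLeft);
    }
  }
  return dp[rows - 1][cols - 1];
}
```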
Note: Round 2 – 6 (Each round took around 60-70 minutes, all rounds were on the same day, Zoom Video Call + Coderpad):
Round 2 (DSA):
Quick introduction
Find the difference between two arrays: Two unsorted arrays are given, and you need to find (arr1 – arr2) and (arr2 – arr1). The difference between the two arrays is defined as all the elements from the first array which are not present in the second array, taking the number of occurrences into consideration.
Example:
arr1: [3, 5, 2, 7, 4, 2, 7] arr2: [1, 7, 5, 2, 2, 9]
arr1 – arr2 = [3, 7, 4]
arr2 – arr1 = [1, 9]
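The occurrence-aware difference can be computed with a hash map of counts. Below is a sketch; the output order is simply the order in which the unmatched elements appear. Note that, by the stated definition, arr2 – arr1 for this example works out to contain 1 and 9, because arr2's single 7 is cancelled by a 7 in arr1:

```javascript
// Multiset difference a - b: elements of a left over after cancelling
// each occurrence found in b. Runs in O(n + m) using a count map.
function arrayDiff(a, b) {
  const remaining = new Map();
  for (const x of b) remaining.set(x, (remaining.get(x) || 0) + 1);
  const result = [];
  for (const x of a) {
    const count = remaining.get(x) || 0;
    if (count > 0) remaining.set(x, count - 1); // matched, cancel one occurrence
    else result.push(x);                        // unmatched, keep it
  }
  return result;
}
```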
Given an array of citations, calculate the researcher’s h-index. https://leetcode.com/problems/h-index/
The Follow-up question was: https://leetcode.com/problems/h-index-ii/
Next follow-up question: What if we are getting a continuous stream of citations, and we need to calculate the h-index after each input?
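A short sketch of the h-index computation (the largest h such that at least h papers have at least h citations each). For the streaming follow-up, a common approach is to keep the citation counts in a structure that supports incremental updates; only the batch version is shown here:

```javascript
// h-index: sort citations in descending order; h is the largest 1-based
// position whose paper still has at least that many citations.
function hIndex(citations) {
  const sorted = [...citations].sort((a, b) => b - a);
  let h = 0;
  while (h < sorted.length && sorted[h] >= h + 1) h++;
  return h;
}
```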
Round 3 (DSA, Projects mentioned in the resume):
A detailed discussion about the projects I have worked on and technologies and design patterns I have used.
Given an imbalanced BST, return the balanced BST.
Given the start and end times of the meetings, find out the maximum number of meetings one can attend. https://leetcode.com/problems/maximum-number-of-events-that-can-be-attended/
Puzzle: Given a 4-digit number ABCD, ABCD * 4 = DCBA (reversed number), find the values of A and D.
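For the meetings question, the classic single-room variant (choose the maximum set of non-overlapping meetings) has a short greedy solution based on end times, sketched below. Note that the linked LeetCode problem is a harder variant where one event is attended per day:

```javascript
// Greedy: sort meetings by end time, then always take the next meeting
// that starts after the previously chosen one ends.
function maxMeetings(meetings) {
  const byEnd = [...meetings].sort((a, b) => a[1] - b[1]);
  let count = 0;
  let lastEnd = -Infinity;
  for (const [start, end] of byEnd) {
    if (start > lastEnd) { // strictly after; use >= if back-to-back is allowed
      count++;
      lastEnd = end;
    }
  }
  return count;
}
```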
Round 4 (Java, Design):
Introduction and technical discussion about my recent project
OOPS questions
HashMap internal working
JVM architecture.
How is Java different than other object-oriented programming languages?
Detailed discussion on Garbage collector
You need to design a relational database; how will you design it? Which data structures will you use?
Find the top 3 horses puzzle.
Round 5 (Hiring Manager):
Quick introduction
If you are to design a garbage collector, how will you design it?
What is wrapper class and why do we need it?
What is type erasure and why do we need it?
Why do you want to leave the current organization?
Why GS?
He explained my role in the team
Hired!
The interview experience was smooth and it was very well arranged. On average, the whole procedure takes about 2-2.5 months to complete.
Tips:
Make sure you solve a few puzzles before you appear for the interview.
Be confident.
Goldman Sachs
Marketing
Experienced
Interview Experiences
Goldman Sachs
|
[
{
"code": null,
"e": 25501,
"s": 25473,
"text": "\n14 Dec, 2020"
},
{
"code": null,
"e": 25578,
"s": 25501,
"text": "Online coding round (Hackerrank): 2 easy coding questions; Time: 120 minutes"
},
{
"code": null,
"e": 25732,
"s": 25578,
"text": "Game of Book Cricket.Simple string encoding-decoding problem. https://leetcode.com/discuss/interview-question/334671/goldman-sacks-july-2019-hackerrank-2"
},
{
"code": null,
"e": 25754,
"s": 25732,
"text": "Game of Book Cricket."
},
{
"code": null,
"e": 25887,
"s": 25754,
"text": "Simple string encoding-decoding problem. https://leetcode.com/discuss/interview-question/334671/goldman-sacks-july-2019-hackerrank-2"
},
{
"code": null,
"e": 26020,
"s": 25887,
"text": "After clearing the online test, resume shortlisting takes place and if your resume is shortlisted, further rounds will be conducted."
},
{
"code": null,
"e": 26178,
"s": 26020,
"text": "Round 1 (Coderpad + voice call): Two easy-medium level questions were discussed and you need to write the complete runnable call and pass all the test cases."
},
{
"code": null,
"e": 26722,
"s": 26178,
"text": "Given a String and two words (which occur in the given string), find the minimum distance between two words. Distance between two words is defined as the number of characters between the given two words’ middle characters. The brute-force approach was already implemented but it had some logical bugs, and because of which sample test cases were failing. The objective was to find and fix those bugs and then to add some new test cases and write a code for those test cases as well.Simple DFS + DP in a 2D matrix to find the minimum cost path."
},
{
"code": null,
"e": 27205,
"s": 26722,
"text": "Given a String and two words (which occur in the given string), find the minimum distance between two words. Distance between two words is defined as the number of characters between the given two words’ middle characters. The brute-force approach was already implemented but it had some logical bugs, and because of which sample test cases were failing. The objective was to find and fix those bugs and then to add some new test cases and write a code for those test cases as well."
},
{
"code": null,
"e": 27267,
"s": 27205,
"text": "Simple DFS + DP in a 2D matrix to find the minimum cost path."
},
{
"code": null,
"e": 27386,
"s": 27267,
"text": "Note: Round 2 – 6 (Each round took around 60-70 minutes, all rounds were on the same day, Zoom Video Call + Coderpad):"
},
{
"code": null,
"e": 27402,
"s": 27386,
"text": "Round 2(DSA): "
},
{
"code": null,
"e": 28144,
"s": 27402,
"text": "Quick introductionFind the difference between two arrays: Two unsorted arrays are given, and you need to find (arr1 – arr2) and (arr2 – arr1). The difference between the two arrays is defined as all the elements from the first array which are not present in the second array, taking the number of occurrences into consideration.Example:arr1: [3, 5, 2, 7, 4, 2, 7] arr2: [1, 7, 5, 2, 2, 9]\narr1 – arr2 = [3, 7, 4]\narr2 – arr1 = [1, 7]Given an array of citations, calculate the researcher’s h-index. https://leetcode.com/problems/h-index/The Follow-up question was: https://leetcode.com/problems/h-index-ii/Next follow-up question: What if we are getting a continuous stream of citations, and we need to calculate the h-index after each input?"
},
{
"code": null,
"e": 28163,
"s": 28144,
"text": "Quick introduction"
},
{
"code": null,
"e": 28579,
"s": 28163,
"text": "Find the difference between two arrays: Two unsorted arrays are given, and you need to find (arr1 – arr2) and (arr2 – arr1). The difference between the two arrays is defined as all the elements from the first array which are not present in the second array, taking the number of occurrences into consideration.Example:arr1: [3, 5, 2, 7, 4, 2, 7] arr2: [1, 7, 5, 2, 2, 9]\narr1 – arr2 = [3, 7, 4]\narr2 – arr1 = [1, 7]"
},
{
"code": null,
"e": 28588,
"s": 28579,
"text": "Example:"
},
{
"code": null,
"e": 28686,
"s": 28588,
"text": "arr1: [3, 5, 2, 7, 4, 2, 7] arr2: [1, 7, 5, 2, 2, 9]\narr1 – arr2 = [3, 7, 4]\narr2 – arr1 = [1, 7]"
},
{
"code": null,
"e": 28790,
"s": 28686,
"text": "Given an array of citations, calculate the researcher’s h-index. https://leetcode.com/problems/h-index/"
},
{
"code": null,
"e": 28894,
"s": 28790,
"text": "Given an array of citations, calculate the researcher’s h-index. https://leetcode.com/problems/h-index/"
},
{
"code": null,
"e": 28964,
"s": 28894,
"text": "The Follow-up question was: https://leetcode.com/problems/h-index-ii/"
},
{
"code": null,
"e": 29034,
"s": 28964,
"text": "The Follow-up question was: https://leetcode.com/problems/h-index-ii/"
},
{
"code": null,
"e": 29171,
"s": 29034,
"text": "Next follow-up question: What if we are getting a continuous stream of citations, and we need to calculate the h-index after each input?"
},
{
"code": null,
"e": 29308,
"s": 29171,
"text": "Next follow-up question: What if we are getting a continuous stream of citations, and we need to calculate the h-index after each input?"
},
{
"code": null,
"e": 29359,
"s": 29308,
"text": "Round 3 (DSA, Projects mentioned in the resume): "
},
{
"code": null,
"e": 29794,
"s": 29359,
"text": "A detailed discussion about the projects I have worked on and technologies and design patterns I have used.Given an imbalanced BST, return the balanced BST.Given the start and end times of the meetings, find out the maximum number of meetings one can attend. https://leetcode.com/problems/maximum-number-of-events-that-can-be-attended/Puzzle: Given a 4-digit number ABCD, ABCD * 4 = DCBA (reversed number), find the values of A and D."
},
{
"code": null,
"e": 29902,
"s": 29794,
"text": "A detailed discussion about the projects I have worked on and technologies and design patterns I have used."
},
{
"code": null,
"e": 29952,
"s": 29902,
"text": "Given an imbalanced BST, return the balanced BST."
},
{
"code": null,
"e": 30132,
"s": 29952,
"text": "Given the start and end times of the meetings, find out the maximum number of meetings one can attend. https://leetcode.com/problems/maximum-number-of-events-that-can-be-attended/"
},
{
"code": null,
"e": 30232,
"s": 30132,
"text": "Puzzle: Given a 4-digit number ABCD, ABCD * 4 = DCBA (reversed number), find the values of A and D."
},
{
"code": null,
"e": 30256,
"s": 30232,
"text": "Round 4 (Java, Design):"
},
{
"code": null,
"e": 30614,
"s": 30256,
"text": "Introduction and technical discussion about my recent projectOOPS questionsHashMap internal workingJVM architecture.How is Java different than other object-oriented programming languages?Detailed discussion on Garbage collectorYou need to design a relational database; how will you design it? Which data structures will you use?Find the top 3 horses puzzle."
},
{
"code": null,
"e": 30676,
"s": 30614,
"text": "Introduction and technical discussion about my recent project"
},
{
"code": null,
"e": 30691,
"s": 30676,
"text": "OOPS questions"
},
{
"code": null,
"e": 30716,
"s": 30691,
"text": "HashMap internal working"
},
{
"code": null,
"e": 30734,
"s": 30716,
"text": "JVM architecture."
},
{
"code": null,
"e": 30806,
"s": 30734,
"text": "How is Java different than other object-oriented programming languages?"
},
{
"code": null,
"e": 30847,
"s": 30806,
"text": "Detailed discussion on Garbage collector"
},
{
"code": null,
"e": 30949,
"s": 30847,
"text": "You need to design a relational database; how will you design it? Which data structures will you use?"
},
{
"code": null,
"e": 30979,
"s": 30949,
"text": "Find the top 3 horses puzzle."
},
{
"code": null,
"e": 31005,
"s": 30979,
"text": "Round 5 (Hiring Manager):"
},
{
"code": null,
"e": 31265,
"s": 31005,
"text": "Quick introductionIf you are to design a garbage collector, how will you design it?What is wrapper class and why do we need it?What is type erasure and why do we need it?Why do you want to leave the current organization?Why GS?He explained my role in the team"
},
{
"code": null,
"e": 31284,
"s": 31265,
"text": "Quick introduction"
},
{
"code": null,
"e": 31350,
"s": 31284,
"text": "If you are to design a garbage collector, how will you design it?"
},
{
"code": null,
"e": 31395,
"s": 31350,
"text": "What is wrapper class and why do we need it?"
},
{
"code": null,
"e": 31439,
"s": 31395,
"text": "What is type erasure and why do we need it?"
},
{
"code": null,
"e": 31490,
"s": 31439,
"text": "Why do you want to leave the current organization?"
},
{
"code": null,
"e": 31498,
"s": 31490,
"text": "Why GS?"
},
{
"code": null,
"e": 31531,
"s": 31498,
"text": "He explained my role in the team"
},
{
"code": null,
"e": 31538,
"s": 31531,
"text": "Hired!"
},
{
"code": null,
"e": 31676,
"s": 31538,
"text": "The interview experience was smooth and it was very well arranged. On average, the whole procedure takes about 2-2.5 months to complete. "
},
{
"code": null,
"e": 31683,
"s": 31676,
"text": "Tips: "
},
{
"code": null,
"e": 31767,
"s": 31683,
"text": "Make sure you solve a few puzzles before you appear for the interview.Be confident."
},
{
"code": null,
"e": 31838,
"s": 31767,
"text": "Make sure you solve a few puzzles before you appear for the interview."
},
{
"code": null,
"e": 31852,
"s": 31838,
"text": "Be confident."
},
{
"code": null,
"e": 31866,
"s": 31852,
"text": "Goldman Sachs"
},
{
"code": null,
"e": 31876,
"s": 31866,
"text": "Marketing"
},
{
"code": null,
"e": 31888,
"s": 31876,
"text": "Experienced"
},
{
"code": null,
"e": 31910,
"s": 31888,
"text": "Interview Experiences"
},
{
"code": null,
"e": 31924,
"s": 31910,
"text": "Goldman Sachs"
},
{
"code": null,
"e": 32022,
"s": 31924,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 32031,
"s": 32022,
"text": "Comments"
},
{
"code": null,
"e": 32044,
"s": 32031,
"text": "Old Comments"
},
{
"code": null,
"e": 32109,
"s": 32044,
"text": "Amazon Interview Experience for SDE1 (8 Months Experienced) 2022"
},
{
"code": null,
"e": 32145,
"s": 32109,
"text": "Paypal Interview Experience for SSE"
},
{
"code": null,
"e": 32222,
"s": 32145,
"text": "Amazon Interview Experience for System Development Engineer (Exp - 6 months)"
},
{
"code": null,
"e": 32302,
"s": 32222,
"text": "Infosys Interview Experience for Java Backend Developer (3-5 Years Experienced)"
},
{
"code": null,
"e": 32343,
"s": 32302,
"text": "Walmart Interview Experience for SDE-III"
},
{
"code": null,
"e": 32370,
"s": 32343,
"text": "Amazon Interview Questions"
},
{
"code": null,
"e": 32429,
"s": 32370,
"text": "Microsoft Interview Experience for Internship (Via Engage)"
},
{
"code": null,
"e": 32489,
"s": 32429,
"text": "Commonly Asked Java Programming Interview Questions | Set 2"
},
{
"code": null,
"e": 32539,
"s": 32489,
"text": "Amazon Interview Experience for SDE-1 (On-Campus)"
}
] |
How to use express router
|
In earlier examples, we wrote all the routing code in a single file, App.js. In real-world scenarios, however, we have to split the code into multiple files.
We could create separate files and import them manually, but Express provides a router mechanism that is easy to use.
Create a separate file called route.js (the name can be anything).
Create a router using Express −
const express = require('express');
const router = express.Router();
Export the router −
module.exports = router;
Add routing functions −
router.get('/add-username', (req, res,next)=>{
res.send('<form action="/post-username" method="POST"> <input type="text" name="username"> <button type="submit"> Send </button> </form>');
});
router.post('/post-username', (req, res, next)=>{
console.log('data: ', req.body.username);
res.redirect('/');
});
Similar to the route-handling functions we defined on app in App.js for creating paths, we define them on the router here.
Import the router in App.js file −
const route = require('./routes');
Add a middleware in App.js to use the router −
app.use(route);
With these changes, the complete App.js file is −
const http = require('http');
const express = require('express');
const bodyParser = require('body-parser');
const route = require('./routes');
const app = express();
app.use(bodyParser.urlencoded({extended: false}));
app.use(route);
app.use('/', (req, res, next)=>{
   res.send('<h1> first middleware: Hello Tutorials Point </h1>');
});
const server = http.createServer(app);
server.listen(3000);
route.js
const express = require('express');
const router = express.Router();
router.get('/add-username', (req, res,next)=>{
res.send('<form action="/post-username" method="POST"> <input type="text" name="username"> <button type="submit"> Send </button> </form>');
});
router.post('/post-username', (req, res, next)=>{
console.log('data: ', req.body.username);
res.redirect('/');
});
module.exports = router;
The router middleware should be placed before any other URL handling in App.js, because middleware executes from top to bottom in the App.js file.
If any additional routers are required, we can create separate files similar to routes.js and import them into App.js, each with another app.use() call.
|
[
{
"code": null,
"e": 1210,
"s": 1062,
"text": "In earlier examples, we wrote all routing code in a single file App.js. But in real world scenarios, we have to split the code into multiple files."
},
{
"code": null,
"e": 1314,
"s": 1210,
"text": "We can create separate files and import them but express gives a router mechanism which is easy to use."
},
{
"code": null,
"e": 1376,
"s": 1314,
"text": "Create a separate file called route.js (name can be anything)"
},
{
"code": null,
"e": 1406,
"s": 1376,
"text": "Create router using express −"
},
{
"code": null,
"e": 1475,
"s": 1406,
"text": "const express = require('express');\nconst router = express.Router();"
},
{
"code": null,
"e": 1494,
"s": 1475,
"text": "exporting router −"
},
{
"code": null,
"e": 1519,
"s": 1494,
"text": "module.exports = router;"
},
{
"code": null,
"e": 1546,
"s": 1519,
"text": "Adding routing functions −"
},
{
"code": null,
"e": 1865,
"s": 1546,
"text": "router.get('/add-username', (req, res,next)=>{\n res.send('<form action=\"/post-username\" method=\"POST\"> <input type=\"text\" name=\"username\"> <button type=\"submit\"> Send </button> </form>');\n});\n router.post('/post-username', (req, res, next)=>{\n console.log('data: ', req.body.username);\n res.redirect('/');\n});"
},
{
"code": null,
"e": 1942,
"s": 1865,
"text": "Similar to functions we used in App.js for creating paths , we used router ."
},
{
"code": null,
"e": 1977,
"s": 1942,
"text": "Import the router in App.js file −"
},
{
"code": null,
"e": 2012,
"s": 1977,
"text": "const route = require('./routes');"
},
{
"code": null,
"e": 2062,
"s": 2012,
"text": "add a middleware for using router in App.js file."
},
{
"code": null,
"e": 2078,
"s": 2062,
"text": "app.use(route);"
},
{
"code": null,
"e": 2127,
"s": 2078,
"text": "with these changes the complete App.js file is −"
},
{
"code": null,
"e": 2523,
"s": 2127,
"text": "const http = require('http');\nconst express = require('express');\nconst bodyParser = require('body-parser');\nconst route = require('./routes');\nconst app = express();\napp.use(bodyParser.urlencoded({extended: false}));\napp.use(route); app.use('/', (req, res,next)=>{\n res.send('<h1> first midleware: Hello Tutorials Point </h1>');\n});\nconst server = http.createServer(app);\nserver.listen(3000);"
},
{
"code": null,
"e": 2532,
"s": 2523,
"text": "route.js"
},
{
"code": null,
"e": 2944,
"s": 2532,
"text": "const express = require('express');\nconst router = express.Router();\nrouter.get('/add-username', (req, res,next)=>{\n res.send('<form action=\"/post-username\" method=\"POST\"> <input type=\"text\" name=\"username\"> <button type=\"submit\"> Send </button> </form>');\n});\nrouter.post('/post-username', (req, res, next)=>{\n console.log('data: ', req.body.username);\n res.redirect('/');\n});\nmodule.exports = router;"
},
{
"code": null,
"e": 3093,
"s": 2944,
"text": "The router middleware should be placed before any url handling if present in App.js. Because code execution works from top to bottom in App.js file."
},
{
"code": null,
"e": 3260,
"s": 3093,
"text": "If any additional routers required , we can create separate files similar to routes.js and import it in App.js with another middleware in order for using that router."
}
] |
Creating a Browse Button with Tkinter
|
In order to create buttons in a Tkinter application, we can use the Button widget. Buttons can be used to trigger the execution of an event at runtime. We can create a button by calling the Button(parent, text, **options) constructor.
Let us suppose we want to create a Browse button which, when clicked, asks the user to select a file from the system explorer. To create a dialog box for selecting a file, we can use the filedialog package in the tkinter library. We can import filedialog using the following command,
from tkinter import filedialog
Once the package is imported into the program, we can use it to create a dialog box for opening and selecting Python files; the program then prints the number of characters present in the selected file.
# Import the required Libraries
from tkinter import *
from tkinter import ttk, filedialog
from tkinter.filedialog import askopenfile
# Create an instance of tkinter frame
win = Tk()
# Set the geometry of tkinter frame
win.geometry("700x350")
def open_file():
   file = filedialog.askopenfile(mode='r', filetypes=[('Python Files', '*.py')])
   if file:
      content = file.read()
      file.close()
      print("%d characters in this file" % len(content))
# Add a Label widget
label = Label(win, text="Click the Button to browse the Files", font=('Georgia 13'))
label.pack(pady=10)
# Create a Button
ttk.Button(win, text="Browse", command=open_file).pack(pady=20)
win.mainloop()
Now, run the above code to browse and select the files from the system explorer.
|
[
{
"code": null,
"e": 1320,
"s": 1062,
"text": "In order to create buttons in a Tkinter application, we can use the Button widget. Buttons can be used to process the execution of an event in the runtime of an application. We can create a button by defining the Button(parent, text, **options) constructor."
},
{
"code": null,
"e": 1619,
"s": 1320,
"text": "Let us suppose we want to create a Browse Button which when clicked, will ask the user to select a file from the system explorer. To create a dialog box for selecting a file, we can use filedialog package in tkinter library. We can import the filedialog in the notebook using the following command,"
},
{
"code": null,
"e": 1650,
"s": 1619,
"text": "from tkinter import filedialog"
},
{
"code": null,
"e": 1856,
"s": 1650,
"text": "Once the package is imported in the program, we can use it to create a dialog box for opening and selecting all the Python files and it will return the number of characters present in that particular file."
},
{
"code": null,
"e": 2541,
"s": 1856,
"text": "# Import the required Libraries\nfrom tkinter import *\nfrom tkinter import ttk, filedialog\nfrom tkinter.filedialog import askopenfile\n\n# Create an instance of tkinter frame\nwin = Tk()\n\n# Set the geometry of tkinter frame\nwin.geometry(\"700x350\")\n\ndef open_file():\n file = filedialog.askopenfile(mode='r', filetypes=[('Python Files', '*.py')])\n if file:\n content = file.read()\n file.close()\n print(\"%d characters in this file\" % len(content))\n\n# Add a Label widget\nlabel = Label(win, text=\"Click the Button to browse the Files\", font=('Georgia 13'))\nlabel.pack(pady=10)\n\n# Create a Button\nttk.Button(win, text=\"Browse\", command=open_file).pack(pady=20)\n\nwin.mainloop()"
},
{
"code": null,
"e": 2622,
"s": 2541,
"text": "Now, run the above code to browse and select the files from the system explorer."
}
] |
How to use min and max attributes in HTML?
|
The min and max attributes in HTML are used to set the minimum and maximum value of an element. The min and max attributes can be used on input elements of type number, range, date, datetime-local, month, time, and week.
You can try to run the following code to learn how to use the min and max attribute in HTML.
<!DOCTYPE html>
<html>
<head>
<title>HTML min and max attribute</title>
</head>
<body>
<form action = "" method = "get">
Mention any number between 1 to 20
<input type = "number" name="num" min = "1" max = "20"><br>
<input type = "submit" value = "Submit">
</form>
</body>
</html>
|
[
{
"code": null,
"e": 1223,
"s": 1062,
"text": "The min and max attributes in HTML are used to set the minimum and maximum value of an element. The min and max attribute can be used on the following elements:"
},
{
"code": null,
"e": 1316,
"s": 1223,
"text": "You can try to run the following code to learn how to use the min and max attribute in HTML."
},
{
"code": null,
"e": 1654,
"s": 1316,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>HTML min and max attribute</title>\n </head>\n <body>\n <form action = \"\" method = \"get\">\n Mention any number between 1 to 20\n <input type = \"number\" name=\"num\" min = \"1\" max = \"20\"><br>\n <input type = \"submit\" value = \"Submit\">\n </form>\n </body>\n</html>"
}
] |
Different Colorspaces as Inputs to CNNs | by Vardan Agarwal | Towards Data Science
|
I had just finished a rant in another article about forgetting that OpenCV uses the BGR format while a CNN's output was being checked in RGB, and how much time that had wasted, when I got an idea: what if we passed images in different colorspaces to CNNs, and how would that affect the model? That is what we are going to find out in this article.
1. Requirements
2. Colorspaces Used
3. Code and Results
Imports and loading the dataset
Creating an image data generator
Results with normal CNN
Results with transfer learning
Creating an Ensemble model
If you are not interested in the process and just want to know the results you can jump straight down to the results with the normal CNN section.
If you want to code along, you will require Tensorflow and OpenCV. You can pip install them using the commands shown below, or you can use Google Colab like me, where no setup is required and a free GPU is available.
pip install tensorflow
pip install opencv-python
Deciding which colorspaces to use was a pretty easy task. I just opened the documentation for OpenCV and chose all the unique ones to which it was possible to convert. I will just list them and provide additional reading links for anyone interested in knowing more about them.
RGB or BGR — Additional reading
HSV — Additional Reading
YCbCr — Additional Reading
LAB — Additional Reading
LUV — Additional Reading
XYZ — Additional Reading
Before beginning, I want to be clear that my aim was not to create a very high-tech state of the art CNN architecture to get the best accuracy but to compare all the colorspaces.
The dataset chosen was the cats vs dogs one. First, we start by importing all the required libraries.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Activation, BatchNormalization
import os
import numpy as np
import matplotlib.pyplot as plt
import re
import random
import cv2
The next part is to load the dataset and store all the filenames which will then be passed to the image data generator and define all the constants which will be used.
A normal custom data generator is created with an extra argument, colorspace, which defines which colorspace to convert to. cv2.cvtColor is used for that purpose. The images are resized to their required sizes and normalized by dividing by 255 for all channels except the Hue matrix of HSV, which is divided by 180. NumPy arrays are created for the images and labels, and they are yielded.
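As a dependency-free illustration of the normalization step just described (the real generator operates on NumPy arrays produced by cv2.cvtColor; the helper name here is hypothetical):

```python
# Normalize one pixel depending on the target colorspace.
# In OpenCV's 8-bit HSV representation, Hue lies in [0, 180];
# every other channel lies in [0, 255].
def normalize_pixel(pixel, colorspace):
    if colorspace == "hsv":
        h, s, v = pixel
        return (h / 180.0, s / 255.0, v / 255.0)
    return tuple(c / 255.0 for c in pixel)

print(normalize_pixel((90, 255, 128), "hsv"))   # hue divided by 180
print(normalize_pixel((0, 128, 255), "bgr"))    # all channels divided by 255
```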
A basic CNN is created using Conv2D, max pooling, batch normalization, and dropout layers, which are eventually flattened; a dense layer gives the output with a sigmoid activation function. The model is compiled using Adam. If you want to create advanced CNNs which may give better results, you can refer to my previous article. Just fit the model with the required values, and we are done.
If we observe the training accuracy and loss, all the colorspaces except HSV give better and fairly similar results. However, it can also be observed that our model is overfitting, so HSV cannot just be discredited straightaway. Also, on the validation set, the model trained on HSV images learns much faster than its counterparts, but over a large number of epochs the results even out and choosing a particular one is very difficult.
Again as stated above the transfer learning model used is a pretty simple one. A pre-trained MobileNetV2 is used whose layers are set to non-trainable and a global average pooling layer is followed by a dense layer for output.
Only the BGR and XYZ colorspaces give good results, while HSV gives the poorest results. But why did this happen? Well, we used pre-trained layers that were accustomed to seeing RGB-style inputs, and XYZ is pretty similar to RGB, so only those two give good results; HSV, by contrast, is a cylindrical system and the farthest from RGB in terms of similarity, hence gave the worst results.
So we make the whole pre-trained network trainable by changing base_model.trainable = True and let’s see what happens now.
Even now, the BGR and XYZ colorspaces perform great from the start; however, they are caught up to by all the other colorspaces except HSV, which again performed the worst. There is hardly much to choose from among all the colorspaces except HSV, so let's create an ensemble model and see whether that improves performance.
If we can achieve an improvement in performance then we will also know that different colorspaces were getting different images classified as right or wrong, which would mean that changing the colorspace had some impact on the models. We will create a pretty simple ensemble model by taking the mean of the predicted probabilities of all the models, converting it to an integer, and evaluating it against the true labels.
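The averaging step can be sketched without any ML libraries (the probabilities below are made-up placeholders, not the article's actual model outputs):

```python
# Each inner list holds one model's predicted probabilities for the same samples.
predictions = [
    [0.9, 0.2, 0.6],   # e.g. BGR model
    [0.8, 0.4, 0.3],   # e.g. LAB model
    [0.7, 0.3, 0.7],   # e.g. XYZ model
]

# Mean probability per sample, then threshold at 0.5 to get a class label.
means = [sum(p) / len(p) for p in zip(*predictions)]
labels = [int(m >= 0.5) for m in means]
print(labels)  # [1, 0, 1]
```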
accuracy for bgr : 0.9596773982048035
accuracy for ycrcb : 0.9536290168762207
accuracy for lab : 0.9415322542190552
accuracy for luv : 0.9546371102333069
accuracy for xyz : 0.9546371102333069
Ensemble accuracy: 0.966
So, to conclude, it can be said that changing the colorspace may or may not improve accuracy, especially if you are checking randomly and have not set a random seed to make the results repeatable, because there were a lot of ups and downs. However, if you want just a little more accuracy, you can try other colorspaces or even go for an ensemble model.
You can find the gist links to the Colab files here and here if you want to play with them.
|
[
{
"code": null,
"e": 505,
"s": 171,
"text": "I had just finished a rant in another article where I forgot that OpenCV used BGR format and the output was checked in RGB for a CNN and how it had wasted a lot of time when I got an idea, what if passed images in different colorspaces to CNN’s and how will it affect the model? That is what we are going to find out in this article."
},
{
"code": null,
"e": 518,
"s": 505,
"text": "Requirements"
},
{
"code": null,
"e": 531,
"s": 518,
"text": "Requirements"
},
{
"code": null,
"e": 551,
"s": 531,
"text": "2. Colorspaces Used"
},
{
"code": null,
"e": 571,
"s": 551,
"text": "3. Code and Results"
},
{
"code": null,
"e": 603,
"s": 571,
"text": "Imports and loading the dataset"
},
{
"code": null,
"e": 636,
"s": 603,
"text": "Creating an image data generator"
},
{
"code": null,
"e": 660,
"s": 636,
"text": "Results with normal CNN"
},
{
"code": null,
"e": 691,
"s": 660,
"text": "Results with transfer learning"
},
{
"code": null,
"e": 718,
"s": 691,
"text": "Creating an Ensemble model"
},
{
"code": null,
"e": 864,
"s": 718,
"text": "If you are not interested in the process and just want to know the results you can jump straight down to the results with the normal CNN section."
},
{
"code": null,
"e": 1077,
"s": 864,
"text": "If you want to code along, then you would require Tensorflow and OpenCV. You can pip install them using the code shown below, or you use Google Colab like me, where no set up will be required along with free GPU."
},
{
"code": null,
"e": 1125,
"s": 1077,
"text": "pip install tensorflowpip install opencv-python"
},
{
"code": null,
"e": 1405,
"s": 1125,
"text": "Deciding which colorspaces to use was a pretty easy task. I just opened the documentation for OpenCV and choose all the unique ones in which it was possible to convert. I will just list them and provide additional reading links if anyone is interested in knowing more about them."
},
{
"code": null,
"e": 1437,
"s": 1405,
"text": "RGB or BGR — Additional reading"
},
{
"code": null,
"e": 1462,
"s": 1437,
"text": "HSV — Additional Reading"
},
{
"code": null,
"e": 1489,
"s": 1462,
"text": "YCbCr — Additional Reading"
},
{
"code": null,
"e": 1514,
"s": 1489,
"text": "LAB — Additional Reading"
},
{
"code": null,
"e": 1539,
"s": 1514,
"text": "LUV — Additional Reading"
},
{
"code": null,
"e": 1564,
"s": 1539,
"text": "XYZ — Additional Reading"
},
{
"code": null,
"e": 1743,
"s": 1564,
"text": "Before beginning, I want to be clear that my aim was not to create a very high-tech state of the art CNN architecture to get the best accuracy but to compare all the colorspaces."
},
{
"code": null,
"e": 1845,
"s": 1743,
"text": "The dataset chosen was the cats vs dogs one. First, we start we importing all the required libraries."
},
{
"code": null,
"e": 2118,
"s": 1845,
"text": "import tensorflow as tffrom tensorflow.keras.models import Sequentialfrom tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Activation, BatchNormalizationimport osimport numpy as npimport matplotlib.pyplot as pltimport reimport randomimport cv2"
},
{
"code": null,
"e": 2286,
"s": 2118,
"text": "The next part is to load the dataset and store all the filenames which will then be passed to the image data generator and define all the constants which will be used."
},
{
"code": null,
"e": 2672,
"s": 2286,
"text": "A normal custom data generator is created with an extra argument of colorspace, which defines which colorspace to convert to. cv2.cvtColor is used for that purpose. The images are resized to their required sizes and normalized by dividing by 255 for all except for the Hue matrice of HSV which is divided by 180. NumPy arrays are created for the images and labels and they are yielded."
},
{
"code": null,
"e": 3059,
"s": 2672,
"text": "A basic CNN is created using Conv2D, max pooling, batch normalization, and dropout layers which are eventually flattened, a dense layer gives output with a sigmoid activation function. The model is compiled using Adam. If you want to create advanced CNN’s which may give better results, you can refer to my previous article. Just fit the model with the required values, and we are done."
},
{
"code": null,
"e": 3491,
"s": 3059,
"text": "If we observe the training accuracy and loss all the colorspaces except HSV give better results which are pretty similar. However, it can also be observed that our model is overfitting so HSV can not just be discredited straightaway. Also, for the validation set, the images in HSV can learn much faster than their counterparts, but for a large number of epochs the results even out and choosing a particular one is very difficult."
},
{
"code": null,
"e": 3718,
"s": 3491,
"text": "Again as stated above the transfer learning model used is a pretty simple one. A pre-trained MobileNetV2 is used whose layers are set to non-trainable and a global average pooling layer is followed by a dense layer for output."
},
{
"code": null,
"e": 4089,
"s": 3718,
"text": "Only BGR and XYZ colorspaces give good results while HSV gives the poorest results. But why did this happen? Well we use pre-trained layers that were accustomed to seeing RGB style inputs and XYZ is pretty similar to RGB so only they give good results, whereas HSV is a cylindrical system and is the farthest off RGB in terms of similarity, hence gave the worst results."
},
{
"code": null,
"e": 4212,
"s": 4089,
"text": "So we make the whole pre-trained network trainable by changing base_model.trainable = True and let’s see what happens now."
},
{
"code": null,
"e": 4525,
"s": 4212,
"text": "Even now BGR and XYZ colorspaces perform great from the start however, they are caught up by all the other colorspaces except HSV which again performed the worst. There is hardly much to choose from among all the colorspaces except HSV so let’s create an ensemble model and see whether that improves performance."
},
{
"code": null,
"e": 4947,
"s": 4525,
"text": "If we can achieve an improvement in performance then we will also know that different colorspaces were getting different images classified as right or wrong, which would mean that changing the colorspace had some impact on the models. We will create a pretty simple ensemble model by taking the mean of the predicted probabilities of all the models, converting it to an integer, and evaluating it against the true labels."
},
{
"code": null,
"e": 5164,
"s": 4947,
"text": "accuracy for bgr : 0.9596773982048035accuracy for ycrcb : 0.9536290168762207accuracy for lab : 0.9415322542190552accuracy for luv : 0.9546371102333069accuracy for xyz : 0.9546371102333069Ensemble accuracy: 0.966"
},
{
"code": null,
"e": 5503,
"s": 5164,
"text": "So to conclude it can be said that changing the colorspace may or may not improve accuracy especially if you are checking randomly and have not assigned a random seed to repeat results because there were a lots of ups and downs. However, if you want just a little more accuracy you can try other colorspaces or even go for ensemble model."
}
] |
Usage of Bootstrap navbar-fixed-bottom class
|
To fix navbar to the bottom, use the navbar-fixed-bottom class.
You can try to run the following code to implement navbar-fixed-bottom class −
<!DOCTYPE html>
<html>
<head>
<title>Bootstrap Example</title>
<link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
<script src = "/scripts/jquery.min.js"></script>
<script src = "/bootstrap/js/bootstrap.min.js"></script>
</head>
<body>
<nav class = "navbar navbar-default navbar-fixed-bottom" role = "navigation" style="background: orange;">
<div class = "navbar-header">
<a class = "navbar-brand" href = "#">Java Topics</a>
</div>
<div>
<ul class = "nav navbar-nav">
<li class = "active"><a href = "#">Basics</a></li>
<li><a href = "#">Interface</a></li>
<li><a href = "#">Polymorphism</a></li>
<li><a href = "#">Encapsulation</a></li>
</ul>
</div>
</nav>
</body>
</html>
|
[
{
"code": null,
"e": 1126,
"s": 1062,
"text": "To fix navbar to the bottom, use the navbar-fixed-bottom class."
},
{
"code": null,
"e": 1205,
"s": 1126,
"text": "You can try to run the following code to implement navbar-fixed-bottom class −"
},
{
"code": null,
"e": 1215,
"s": 1205,
"text": "Live Demo"
},
{
"code": null,
"e": 2084,
"s": 1215,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link href = \"/bootstrap/css/bootstrap.min.css\" rel = \"stylesheet\">\n <script src = \"/scripts/jquery.min.js\"></script>\n <script src = \"/bootstrap/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <nav class = \"navbar navbar-default navbar-fixed-bottom\" role = \"navigation\" style=\"background: orange;\">\n <div class = \"navbar-header\">\n <a class = \"navbar-brand\" href = \"#\">Java Topics</a>\n </div>\n <div>\n <ul class = \"nav navbar-nav\">\n <li class = \"active\"><a href = \"#\">Basics</a></li>\n <li><a href = \"#\">Interface</a></li>\n <li><a href = \"#\">Polymorphism</a></li>\n <li><a href = \"#\">Encapsulation</a></li>\n </ul>\n </div>\n </nav>\n </body>\n</html>"
}
] |
IDE | GeeksforGeeks | A computer science portal for geeks
|
// A C/C++ Program to generate OTP (One Time Password)
#include<bits/stdc++.h>
using namespace std;

// A Function to generate a unique OTP everytime
string generateOTP(int len)
{
    // All possible characters of my OTP
    string str = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    int n = str.length();

    // String to hold my OTP
    string OTP;
    for (int i=1; i<=len; i++)
        OTP.push_back(str[rand() % n]);
    return(OTP);
}

// Driver Program to test above functions
int main()
{
    // For different values each time we run the code
    srand(time(NULL));

    // Declare the length of OTP
    int len = 6;
    printf("Your OTP is - %s", generateOTP(len).c_str());
    return(0);
}
https://ide.geeksforgeeks.org/Ks84Ck
Your OTP is - 8qOtzy
|
[
{
"code": null,
"e": 164,
"s": 117,
"text": "Please enter your email address or userHandle."
},
{
"code": null,
"e": 1452,
"s": 164,
"text": "12345678910111213141516171819202122232425262728293031323334// A C/C++ Program to generate OTP (One Time Password)#include<bits/stdc++.h>using namespace std;// A Function to generate a unique OTP everytimestring generateOTP(int len){ // All possible characters of my OTP string str = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\"; int n = str.length(); // String to hold my OTP string OTP; for (int i=1; i<=len; i++) OTP.push_back(str[rand() % n]); return(OTP);}// Driver Program to test above functionsint main(){ // For different values each time we run the code srand(time(NULL)); // Delare the length of OTP int len = 6; printf(\"Your OTP is - %s\", generateOTP(len).c_str()); return(0);}ההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההההXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
},
{
"code": null,
"e": 1489,
"s": 1452,
"text": "https://ide.geeksforgeeks.org/Ks84Ck"
}
] |
Java Concurrency - BlockingQueue Interface
|
A java.util.concurrent.BlockingQueue interface is a subinterface of the Queue interface and additionally supports operations such as waiting for the queue to become non-empty before retrieving an element, and waiting for space to become available in the queue before storing an element.
boolean add(E e)
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions, returning true upon success and throwing an IllegalStateException if no space is currently available.
boolean contains(Object o)
Returns true if this queue contains the specified element.
int drainTo(Collection<? super E> c)
Removes all available elements from this queue and adds them to the given collection.
int drainTo(Collection<? super E> c, int maxElements)
Removes at most the given number of available elements from this queue and adds them to the given collection.
boolean offer(E e)
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions, returning true upon success and false if no space is currently available.
boolean offer(E e, long timeout, TimeUnit unit)
Inserts the specified element into this queue, waiting up to the specified wait time if necessary for space to become available.
E poll(long timeout, TimeUnit unit)
Retrieves and removes the head of this queue, waiting up to the specified wait time if necessary for an element to become available.
void put(E e)
Inserts the specified element into this queue, waiting if necessary for space to become available.
int remainingCapacity()
Returns the number of additional elements that this queue can ideally (in the absence of memory or resource constraints) accept without blocking, or Integer.MAX_VALUE if there is no intrinsic limit.
boolean remove(Object o)
Removes a single instance of the specified element from this queue, if it is present.
E take()
Retrieves and removes the head of this queue, waiting if necessary until an element becomes available.
The following TestThread program shows usage of BlockingQueue interface in thread based environment.
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class TestThread {
public static void main(final String[] arguments) throws InterruptedException {
BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(10);
Producer producer = new Producer(queue);
Consumer consumer = new Consumer(queue);
new Thread(producer).start();
new Thread(consumer).start();
Thread.sleep(4000);
}
static class Producer implements Runnable {
private BlockingQueue<Integer> queue;
public Producer(BlockingQueue queue) {
this.queue = queue;
}
@Override
public void run() {
Random random = new Random();
try {
int result = random.nextInt(100);
Thread.sleep(1000);
queue.put(result);
System.out.println("Added: " + result);
result = random.nextInt(100);
Thread.sleep(1000);
queue.put(result);
System.out.println("Added: " + result);
result = random.nextInt(100);
Thread.sleep(1000);
queue.put(result);
System.out.println("Added: " + result);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
static class Consumer implements Runnable {
private BlockingQueue<Integer> queue;
public Consumer(BlockingQueue queue) {
this.queue = queue;
}
@Override
public void run() {
try {
System.out.println("Removed: " + queue.take());
System.out.println("Removed: " + queue.take());
System.out.println("Removed: " + queue.take());
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
This will produce the following result.
Added: 52
Removed: 52
Added: 70
Removed: 70
Added: 27
Removed: 27
|
[
{
"code": null,
"e": 2938,
"s": 2657,
"text": "A java.util.concurrent.BlockingQueue interface is a subinterface of Queue interface, and additionally supports operations such as waiting for the queue to become non-empty before retrieving an element, and wait for space to become available in the queue before storing an element."
},
{
"code": null,
"e": 2955,
"s": 2938,
"text": "boolean add(E e)"
},
{
"code": null,
"e": 3183,
"s": 2955,
"text": "Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions, returning true upon success and throwing an IllegalStateException if no space is currently available."
},
{
"code": null,
"e": 3210,
"s": 3183,
"text": "boolean contains(Object o)"
},
{
"code": null,
"e": 3269,
"s": 3210,
"text": "Returns true if this queue contains the specified element."
},
{
"code": null,
"e": 3306,
"s": 3269,
"text": "int drainTo(Collection<? super E> c)"
},
{
"code": null,
"e": 3392,
"s": 3306,
"text": "Removes all available elements from this queue and adds them to the given collection."
},
{
"code": null,
"e": 3446,
"s": 3392,
"text": "int drainTo(Collection<? super E> c, int maxElements)"
},
{
"code": null,
"e": 3556,
"s": 3446,
"text": "Removes at most the given number of available elements from this queue and adds them to the given collection."
},
{
"code": null,
"e": 3575,
"s": 3556,
"text": "boolean offer(E e)"
},
{
"code": null,
"e": 3775,
"s": 3575,
"text": "Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions, returning true upon success and false if no space is currently available."
},
{
"code": null,
"e": 3823,
"s": 3775,
"text": "boolean offer(E e, long timeout, TimeUnit unit)"
},
{
"code": null,
"e": 3953,
"s": 3823,
"text": "Inserts the specified element into this queue, waiting up to the specified wait time if necessary for space to become available.\n"
},
{
"code": null,
"e": 3989,
"s": 3953,
"text": "E poll(long timeout, TimeUnit unit)"
},
{
"code": null,
"e": 4122,
"s": 3989,
"text": "Retrieves and removes the head of this queue, waiting up to the specified wait time if necessary for an element to become available."
},
{
"code": null,
"e": 4136,
"s": 4122,
"text": "void put(E e)"
},
{
"code": null,
"e": 4235,
"s": 4136,
"text": "Inserts the specified element into this queue, waiting if necessary for space to become available."
},
{
"code": null,
"e": 4259,
"s": 4235,
"text": "int remainingCapacity()"
},
{
"code": null,
"e": 4458,
"s": 4259,
"text": "Returns the number of additional elements that this queue can ideally (in the absence of memory or resource constraints) accept without blocking, or Integer.MAX_VALUE if there is no intrinsic limit."
},
{
"code": null,
"e": 4483,
"s": 4458,
"text": "boolean remove(Object o)"
},
{
"code": null,
"e": 4569,
"s": 4483,
"text": "Removes a single instance of the specified element from this queue, if it is present."
},
{
"code": null,
"e": 4578,
"s": 4569,
"text": "E take()"
},
{
"code": null,
"e": 4681,
"s": 4578,
"text": "Retrieves and removes the head of this queue, waiting if necessary until an element becomes available."
},
{
"code": null,
"e": 4782,
"s": 4681,
"text": "The following TestThread program shows usage of BlockingQueue interface in thread based environment."
},
{
"code": null,
"e": 6697,
"s": 4782,
"text": "import java.util.Random;\nimport java.util.concurrent.ArrayBlockingQueue;\nimport java.util.concurrent.BlockingQueue;\n\npublic class TestThread {\n\n public static void main(final String[] arguments) throws InterruptedException {\n BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(10);\n\n Producer producer = new Producer(queue);\n Consumer consumer = new Consumer(queue);\n\n new Thread(producer).start();\n new Thread(consumer).start();\n\n Thread.sleep(4000);\n } \n\n\n static class Producer implements Runnable {\n private BlockingQueue<Integer> queue;\n\n public Producer(BlockingQueue queue) {\n this.queue = queue;\n }\n\n @Override\n public void run() {\n Random random = new Random();\n\n try {\n int result = random.nextInt(100);\n Thread.sleep(1000);\n queue.put(result);\n System.out.println(\"Added: \" + result);\n \n result = random.nextInt(100);\n Thread.sleep(1000);\n queue.put(result);\n System.out.println(\"Added: \" + result);\n \n result = random.nextInt(100);\n Thread.sleep(1000);\n queue.put(result);\n System.out.println(\"Added: \" + result);\n } catch (InterruptedException e) {\n e.printStackTrace();\n }\n }\t \n }\n\n static class Consumer implements Runnable {\n private BlockingQueue<Integer> queue;\n\n public Consumer(BlockingQueue queue) {\n this.queue = queue;\n }\n \n @Override\n public void run() {\n \n try {\n System.out.println(\"Removed: \" + queue.take());\n System.out.println(\"Removed: \" + queue.take());\n System.out.println(\"Removed: \" + queue.take());\n } catch (InterruptedException e) {\n e.printStackTrace();\n }\n }\n }\n}"
},
{
"code": null,
"e": 6737,
"s": 6697,
"text": "This will produce the following result."
},
{
"code": null,
"e": 6804,
"s": 6737,
"text": "Added: 52\nRemoved: 52\nAdded: 70\nRemoved: 70\nAdded: 27\nRemoved: 27\n"
},
{
"code": null,
"e": 6837,
"s": 6804,
"text": "\n 16 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 6853,
"s": 6837,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 6886,
"s": 6853,
"text": "\n 19 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6902,
"s": 6886,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 6937,
"s": 6902,
"text": "\n 25 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 6951,
"s": 6937,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 6985,
"s": 6951,
"text": "\n 126 Lectures \n 7 hours \n"
},
{
"code": null,
"e": 6999,
"s": 6985,
"text": " Tushar Kale"
},
{
"code": null,
"e": 7036,
"s": 6999,
"text": "\n 119 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 7051,
"s": 7036,
"text": " Monica Mittal"
},
{
"code": null,
"e": 7084,
"s": 7051,
"text": "\n 76 Lectures \n 7 hours \n"
},
{
"code": null,
"e": 7103,
"s": 7084,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 7110,
"s": 7103,
"text": " Print"
},
{
"code": null,
"e": 7121,
"s": 7110,
"text": " Add Notes"
}
] |
Is there any way to embed a PDF file into an HTML5 page?
|
To embed a PDF file in an HTML5 page, use the <iframe> element.
<!DOCTYPE html>
<html>
<head>
<title>HTML iframe Tag</title>
</head>
<body>
<h1>HTML5 Tutorial</h1>
<iframe src = " https://www.tutorialspoint.com/html5/html5_tutorial.pdf" style="width:500px; height:300px;"></iframe>
</body>
</html>
|
[
{
"code": null,
"e": 1126,
"s": 1062,
"text": "To embed a PDF file in an HTML5 page, use the <iframe> element."
},
{
"code": null,
"e": 1390,
"s": 1126,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>HTML iframe Tag</title>\n </head>\n <body>\n <h1>HTML5 Tutorial</h1>\n <iframe src = \" https://www.tutorialspoint.com/html5/html5_tutorial.pdf\" style=\"width:500px; height:300px;\"></iframe>\n </body>\n</html>"
}
] |
Print longest palindrome word in a sentence in C Program
|
Given a sentence, the challenge is to find the longest palindrome word in it.
A palindrome is a word or sequence that reads the same even after reversing the string.
Example − Nitin; after reversing the string, its meaning remains the same.
Challenge is to find the longest palindrome from the given sentence.
For example, consider the sentence: malayalam liemadameil iji
It contains three palindrome words but the longest is − liemadameil
START
STEP 1 -> Declare variables i, j, k, l, max to 0, index to -1, check to 0, count to 0
Step 2 -> Loop For i to 0 and i<strlen(str) and i++
Set max =0, k =i and j=i+1
Loop While str[j]!=' ' and str[j]!='\0'
Increment j by 1
End While
Set l=j-1
IF str[k]!=' ' and str[k]!='\0'
Loop While k<=l
If str[k]==str[l]
Increment max by 1
If count<=max
Set index=i and count = max
End If
End IF
Else
Set max = 0, count = -1
Break
End Else
Increment k by 1 and decrement l by 1
End Loop While
End If
Set i=j
Step 3 -> End Loop For
Step 4 -> Loop For i = index and i!=-1 && str[i]!=' ' && str[i]!='\0' and i++
Print str[i]
Step 5 -> End Loop For
STOP
#include <stdio.h>
#include <string.h>
int main(int argc, char const *argv[]) {
char str[] = {"malayalam liemadameil iji"};
int i, k, l, j, max =0, index = -1, check = 0, count = 0;
for(i=0; i<strlen(str); i++) {
max = 0;
k = i;
j = i+1;
while(str[j]!=' ' && str[j]!='\0'){
j++;
}
l = j-1;
if(str[k]!=' ' && str[k]!='\0') {
while(k<=l) {
if (str[k]==str[l]) {
max++;
if(count<=max) {
index = i;
count = max;
}
} else {
max = 0;
count = -1;
break;
}
k++;
l--;
}
}
i = j;
}
for (i = index; i!=-1 && str[i]!=' ' && str[i]!='\0'; i++) {
printf("%c", str[i]);
}
return 0;
}
If we run the above program, it will generate the following output.
liemadameil
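The word-by-word scan that the C program spells out with manual index bookkeeping reduces to a few lines in a higher-level language. A Python sketch of the same algorithm (an illustration, not part of the original program):

```python
def longest_palindrome_word(sentence):
    # Scan every word; keep the longest one that reads the same reversed.
    best = ""
    for word in sentence.split():
        if word == word[::-1] and len(word) > len(best):
            best = word
    return best

print(longest_palindrome_word("malayalam liemadameil iji"))  # liemadameil
```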
|
[
{
"code": null,
"e": 1155,
"s": 1062,
"text": "Given a sentence and the challenge is to find the longest palindrome from the given sentence"
},
{
"code": null,
"e": 1247,
"s": 1155,
"text": "Palindrome is a word or sequence whose meaning remains same even after reversing the string"
},
{
"code": null,
"e": 1321,
"s": 1247,
"text": "Example − Nitin, after reversing the string its meaning remains the same."
},
{
"code": null,
"e": 1390,
"s": 1321,
"text": "Challenge is to find the longest palindrome from the given sentence."
},
{
"code": null,
"e": 1434,
"s": 1390,
"text": "Like sentence is: malayalam liemadameil iji"
},
{
"code": null,
"e": 1502,
"s": 1434,
"text": "It contains three palindrome words but the longest is − liemadameil"
},
{
"code": null,
"e": 2256,
"s": 1502,
"text": "START\nSTEP 1 -> Declare start variables I, j, k, l, max to 0, index to -1, check to 0, count to 0\nStep 2 -> Loop For i to 0 and i<strlen(str) and i++\n Set max =0, k =i and j=i+1\n Loop While str[j]!=' ' and str[j]!='\\0'\n Increment j by 1\n End While\n Set l=j-1\n IF str[k]!=' ' and str[k]!='\\0'\n Loop While k<=1\n If str[k]==str[l]\n Increment max by 1\n If count<=max\n Set index=i and count = max\n End If\n End IF\n Else\n Set max = 0, count = -1\n Break\n End Else\n Increment k and I by 1\n End Loop While\nEnd If\nSet i=j\nStep 3 -> End Loop For\nStep 4 -> Loop For i = index and i!=-1 && str[i]!=' ' && str[i]!='\\0' and i++\n Print str[i]\nStep 5 -> End Loop For\nSTOP"
},
{
"code": null,
"e": 3119,
"s": 2256,
"text": "#include <stdio.h>\n#include <string.h>\nint main(int argc, char const *argv[]) {\n char str[] = {\"malayalam liemadameil iji\"};\n int i, k, l, j, max =0, index = -1, check = 0, count = 0;\n for(i=0; i<strlen(str); i++) {\n max = 0;\n k = i;\n j = i+1;\n while(str[j]!=' ' && str[j]!='\\0'){\n j++;\n }\n l = j-1;\n if(str[k]!=' ' && str[k]!='\\0') {\n while(k<=l) {\n if (str[k]==str[l]) {\n max++;\n if(count<=max) {\n index = i;\n count = max;\n }\n } else {\n max = 0;\n count = -1;\n break;\n }\n k++;\n l--;\n }\n }\n i = j;\n }\n for (i = index; i!=-1 && str[i]!=' ' && str[i]!='\\0'; i++) {\n printf(\"%c\", str[i]);\n }\n return 0;\n}"
},
{
"code": null,
"e": 3183,
"s": 3119,
"text": "If we run above program then it will generate following output."
},
{
"code": null,
"e": 3195,
"s": 3183,
"text": "liemadameil"
}
] |
How to set justification on Tkinter Text box?
|
The Text widget supports multiline user input from the user. We can configure the Text widget properties such as its font properties, text color, background, etc., by using the configure() method.
To set the justification of our text inside the Text widget, we can use tag_add() and tag_configure() properties. We will specify the value of "justify" as CENTER.
# Import the required libraries
from tkinter import *
# Create an instance of tkinter frame or window
win=Tk()
# Set the size of the tkinter window
win.geometry("700x350")
# Create a text widget
text=Text(win, width=40, height=10)
# justify the text alignment to the center
text.tag_configure("center", justify='center')
text.insert(INSERT, "Welcome to Tutorialspoint...")
# Add the tag from start to end text
text.tag_add("center", 1.0, "end")
text.pack()
win.mainloop()
If you run the above code, you will observe that the cursor of the text window will have justification set to its center.
|
[
{
"code": null,
"e": 1259,
"s": 1062,
"text": "The Text widget supports multiline user input from the user. We can configure the Text widget properties such as its font properties, text color, background, etc., by using the configure() method."
},
{
"code": null,
"e": 1423,
"s": 1259,
"text": "To set the justification of our text inside the Text widget, we can use tag_add() and tag_configure() properties. We will specify the value of \"justify\" as CENTER."
},
{
"code": null,
"e": 1901,
"s": 1423,
"text": "# Import the required libraries\nfrom tkinter import *\n\n# Create an instance of tkinter frame or window\nwin=Tk()\n\n# Set the size of the tkinter window\nwin.geometry(\"700x350\")\n\n# Create a text widget\ntext=Text(win, width=40, height=10)\n\n# justify the text alignment to the center\ntext.tag_configure(\"center\", justify='center')\ntext.insert(INSERT, \"Welcome to Tutorialspoint...\")\n\n# Add the tag from start to end text\ntext.tag_add(\"center\", 1.0, \"end\")\ntext.pack()\n\nwin.mainloop()"
},
{
"code": null,
"e": 2023,
"s": 1901,
"text": "If you run the above code, you will observe that the cursor of the text window will have justification set to its center."
}
] |
How to format currencies in JSP?
|
The <fmt:formatNumber> tag is used to format numbers, percentages, and currencies.
The <fmt:formatNumber> tag is used as shown in the following example −
<%@ taglib prefix = "c" uri = "http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix = "fmt" uri = "http://java.sun.com/jsp/jstl/fmt" %>
<html>
<head>
<title>JSTL fmt:formatNumber Tag</title>
</head>
<body>
<h3>Number Format:</h3>
<c:set var = "balance" value = "120000.2309" />
<p>Currency in USA :
<fmt:setLocale value = "en_US"/>
<fmt:formatNumber value = "${balance}" type = "currency"/>
</p>
</body>
</html>
The above code will generate the following result −
Number Format:
Currency in USA : $120,000.23
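Under the hood the JSP tag delegates to locale-aware number formatting (java.text.NumberFormat). The grouping and rounding it applies for en_US can be sketched in Python with an f-string — a simplified illustration that hard-codes the $ symbol instead of consulting a locale:

```python
def format_usd(amount):
    # Two decimal places, comma as thousands separator — the same shape
    # NumberFormat's currency instance produces for en_US.
    return f"${amount:,.2f}"

print(format_usd(120000.2309))  # $120,000.23
```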
|
[
{
"code": null,
"e": 1145,
"s": 1062,
"text": "The <fmt:formatNumber> tag is used to format numbers, percentages, and currencies."
},
{
"code": null,
"e": 1203,
"s": 1145,
"text": "The <fmt:formatNumber> tag has the following attributes −"
},
{
"code": null,
"e": 1678,
"s": 1203,
"text": "<%@ taglib prefix = \"c\" uri = \"http://java.sun.com/jsp/jstl/core\" %>\n<%@ taglib prefix = \"fmt\" uri = \"http://java.sun.com/jsp/jstl/fmt\" %>\n<html>\n <head>\n <title>JSTL fmt:formatNumber Tag</title>\n </head>\n <body>\n <h3>Number Format:</h3>\n <c:set var = \"balance\" value = \"120000.2309\" />\n <p>Currency in USA :\n <fmt:setLocale value = \"en_US\"/>\n <fmt:formatNumber value = \"${balance}\" type = \"currency\"/>\n </p>\n </body>\n</html>"
},
{
"code": null,
"e": 1730,
"s": 1678,
"text": "The above code will generate the following result −"
},
{
"code": null,
"e": 1775,
"s": 1730,
"text": "Number Format:\nCurrency in USA : $120,000.23"
}
] |
Prediction on Customer Churn with Mobile App Behavior Data | by Luke Sun | Towards Data Science
|
In the previous article, we created a logistic regression model to predict user enrollment using app behavior data. Hopefully, you had good learning there. This post aims to improve your model-building skills with new techniques and tricks, based on a larger mobile app behavior dataset. It is split into 7 parts.
1. Business challenge
2. Data processing
3. Model building
4. Model validation
5. Feature analysis
6. Feature selection
7. Conclusion
Now let’s begin the journey 🏃♀️🏃♂️.
Business challenge
Business challenge
We are tasked by a Fintech firm to analyze mobile app behavior data to identify potential churn customers. The goal is to predict which users are likely to churn, so the firm can focus on re-engaging these users with better products.
2. Data processing
EDA should be performed before data processing. Detailed steps are introduced in this article. The video below shows the final data after EDA.
2.1 One-hot encoding
One-hot encoding is a technique to convert categorical variables into numerical variables. It is needed as the model we are to build cannot read categorical data. One-hot encoding simply creates additional features based on the number of unique categories. Here, specifically,
dataset = pd.get_dummies(dataset)
The above automatically converts all categorical variables to numerical variables. But one drawback of one-hot encoding is the dummy variable trap: a scenario in which variables are highly correlated to each other. To avoid the trap, one of the dummy variables has to be dropped. Specifically,
dataset = dataset.drop(columns = [‘housing_na’, ‘zodiac_sign_na’, ‘payment_type_na’])
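The two steps above can be seen end to end on a toy frame (the column and category names here are illustrative, not the article's actual dataset): get_dummies expands one categorical column into one indicator column per category, and dropping one of them removes the perfect collinearity.

```python
import pandas as pd

# Toy frame with one categorical column (names are made up for illustration).
df = pd.DataFrame({"housing": ["own", "rent", "na", "own"]})

encoded = pd.get_dummies(df)        # one indicator column per category
print(list(encoded.columns))        # ['housing_na', 'housing_own', 'housing_rent']

# Drop one dummy to avoid the dummy variable trap (perfect collinearity).
encoded = encoded.drop(columns=["housing_na"])
print(list(encoded.columns))        # ['housing_own', 'housing_rent']
```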
2.2 Data split
This is to split the data into train and test sets. Specifically,
X_train, X_test, y_train, y_test = train_test_split(dataset.drop(columns = ‘churn’), dataset[‘churn’], test_size = 0.2,random_state = 0)
2.3 Data balancing
There are many ways to combat imbalanced classes, such as changing performance metrics, collecting more data, over-sampling or down-sampling data, etc. Here we use the down-sampling method.
First, let’s investigate the imbalance level of the dependent variable in y_train.
As shown in Fig.1, the dependent variable is slightly imbalanced. To down-sample the data, we take the index of each class, and randomly choose the index of the majority class at a number of minority class in y_train. Then concatenate the index of both classes and down-sample x_train and y_train.
pos_index = y_train[y_train.values == 1].indexneg_index = y_train[y_train.values == 0].indexif len(pos_index) > len(neg_index): higher = pos_index lower = neg_indexelse: higher = neg_index lower = pos_indexrandom.seed(0)higher = np.random.choice(higher, size=len(lower))lower = np.asarray(lower)new_indexes = np.concatenate((lower, higher))X_train = X_train.loc[new_indexes,]y_train = y_train[new_indexes]
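Stripped of the pandas indexing, the down-sampling step is just: randomly keep only as many majority-class rows as there are minority-class rows. A pure-Python sketch with toy indices (note it samples without replacement via random.sample, whereas the article's np.random.choice samples with replacement by default):

```python
import random

random.seed(0)
pos = list(range(60))        # toy majority-class row indices (60 rows)
neg = list(range(60, 100))   # toy minority-class row indices (40 rows)

higher, lower = (pos, neg) if len(pos) > len(neg) else (neg, pos)
higher = random.sample(higher, k=len(lower))  # down-sample the majority class
balanced = lower + higher

print(len(balanced))  # 80 — each class now contributes 40 rows
```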
2.4. Feature scaling
Fundamentally, feature scaling is to normalize the range of the variables. This is to avoid any variable having a dominant impact on the model. For a neural network, feature scaling helps gradient descent converge faster than without it.
Here we use standardization to normalize the variables. Specifically,
from sklearn.preprocessing import StandardScalersc_X = StandardScaler()X_train2 = pd.DataFrame(sc_X.fit_transform(X_train))X_test2 = pd.DataFrame(sc_X.transform(X_test))
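What StandardScaler's fit_transform does per column is subtract the training mean and divide by the training standard deviation (the population standard deviation, ddof=0). The arithmetic can be verified by hand — a pure-Python sketch with made-up numbers:

```python
from statistics import mean, pstdev  # population std, matching StandardScaler

xs = [2.0, 4.0, 6.0, 8.0]
mu, sigma = mean(xs), pstdev(xs)
scaled = [(x - mu) / sigma for x in xs]

print(round(mean(scaled), 10))    # 0.0 — centred
print(round(pstdev(scaled), 10))  # 1.0 — unit variance
```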
3. Model building
Specifically,
from sklearn.linear_model import LogisticRegressionclassifier = LogisticRegression(random_state = 0)classifier.fit(X_train, y_train)
Now, let’s test and evaluate the model. Specifically,
y_pred = classifier.predict(X_test)from sklearn.metrics import confusion_matrix, accuracy_score, f1_scorecm = confusion_matrix(y_test, y_pred)accuracy_score(y_test, y_pred)f1_score(y_test, y_pred)
Finally, we got an accuracy of 0.61 and an F1 of 0.61. Not too bad a performance.
4. Model validation
With the model trained and tested, one question is how good the model is to generalize to an unknown dataset. We use cross-validation to measure the size of the performance difference between known datasets and unknown datasets. Specifically,
from sklearn.model_selection import cross_val_scoreaccuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
With the above, we found 10-fold cross-validation produces an average accuracy of 0.645 with a standard deviation of 0.023. This indicates the model can generalize well on an unknown dataset ✨✨.
5. Feature analysis
With 41 features, we built a logistic regression model. But how to know which feature is more important in predicting the dependent variable? Specifically,
pd.concat([pd.DataFrame(X_train.columns, columns = [“features”]), pd.DataFrame(np.transpose(classifier.coef_), columns = [“coef”])],axis = 1)
As shown in Figure 2, we found two features that are very important: purchase_partners and purchase. This indicates a user’s purchase history plays a great role when deciding churn or not. Meanwhile, this indicates that not all variables are important for prediction.
6. Feature selection
Feature selection is a technique to select a subset of the most relevant features for modeling training.
In this application, x_train contains 41 features, but as seen in Figure 2, not all features play important roles. Using feature selection helps to reduce the number of unimportant features and achieve similar performance with less training data. A more detailed explanation of feature selection can be found here.
Here, we use the Recursive Feature Elimination (RFE). It works by fitting the given algorithm, ranking the feature by importance, discarding the least important features, and refitting until a specified number of features is achieved. Specifically,
from sklearn.feature_selection import RFEfrom sklearn.linear_model import LogisticRegressionclassifier = LogisticRegression()rfe = RFE(classifier, 20)rfe = rfe.fit(X_train, y_train)
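RFE's control flow, stripped of the estimator, is simply "repeatedly drop the feature with the smallest importance until n remain". A pure-Python sketch with made-up coefficient magnitudes (the feature names and values are illustrative, not the article's fitted coefficients):

```python
def recursive_eliminate(importances, n_keep):
    """importances: {feature_name: |coefficient|}; drop the weakest
    feature one at a time until n_keep remain (like RFE with step=1)."""
    kept = dict(importances)
    while len(kept) > n_keep:
        weakest = min(kept, key=kept.get)
        del kept[weakest]
        # A real RFE refits the model here and recomputes importances.
    return set(kept)

coefs = {"purchase": 0.9, "purchase_partners": 0.8, "age": 0.1, "zodiac_sign": 0.02}
print(recursive_eliminate(coefs, 2))  # keeps 'purchase' and 'purchase_partners'
```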
Note above, we set to select 20 features. Figure 3 shows all selected features.
Great! With the RFE-selected features, let’s retrain and test the model.
classifier.fit(X_train[X_train.columns[rfe.support_]], y_train)y_pred = classifier.predict(X_test[X_train.columns[rfe.support_]])
In the end, we got an accuracy of 0.61 and F1 of 0.61. The same performance as the model trained on 41 features 😇😇!
If we apply cross-validation again, we got an average accuracy of 0.647 with a standard deviation of 0.014. Again, very much the same as the previous model.
7. Conclusion
Initially, we trained a logistic regression model with 41 features, achieving an accuracy of 0.645. But using feature selection, we created a light version of the model with only 20 features, with an accuracy of 0.647. Half of the features are of no relevance in deciding the customer’s churn. Well done!
Great! That’s all of this journey! If you need the source code, feel free to visit my Github page 🤞🤞 (FYI, the repos is actively maintained).
|
[
{
"code": null,
"e": 482,
"s": 172,
"text": "In the previous article, we created a logistic regression model to predict user enrollment using app behavior data. Hopefully, you had good learning there. This post aims to improve your model building skills with new techniques and tricks based on a larger mobile app behavior data. It is split into 7 parts."
},
{
"code": null,
"e": 504,
"s": 482,
"text": "1. Business challenge"
},
{
"code": null,
"e": 523,
"s": 504,
"text": "2. Data processing"
},
{
"code": null,
"e": 541,
"s": 523,
"text": "3. Model building"
},
{
"code": null,
"e": 561,
"s": 541,
"text": "4. Model validation"
},
{
"code": null,
"e": 581,
"s": 561,
"text": "5. Feature analysis"
},
{
"code": null,
"e": 602,
"s": 581,
"text": "6. Feature selection"
},
{
"code": null,
"e": 616,
"s": 602,
"text": "7. Conclusion"
},
{
"code": null,
"e": 654,
"s": 616,
"text": "Now let’s begin the journey 🏃♀️🏃♂️."
},
{
"code": null,
"e": 673,
"s": 654,
"text": "Business challenge"
},
{
"code": null,
"e": 692,
"s": 673,
"text": "Business challenge"
},
{
"code": null,
"e": 926,
"s": 692,
"text": "We are tasked by a Fintech firm to analyze mobile app behavior data to identify potential churn customers. The goal is to predict which users are likely to churn, so the firm can focus on re-engaging these users with better products."
},
{
"code": null,
"e": 945,
"s": 926,
"text": "2. Data processing"
},
{
"code": null,
"e": 1088,
"s": 945,
"text": "EDA should be performed before data processing. Detailed steps are introduced in this article. The video below shows the final data after EDA."
},
{
"code": null,
"e": 1109,
"s": 1088,
"text": "2.1 One-hot encoding"
},
{
"code": null,
"e": 1386,
"s": 1109,
"text": "One-hot encoding is a technique to convert categorical variables into numerical variables. It is needed as the model we are to build cannot read categorical data. One-hot encoding simply creates additional features based on the number of unique categories. Here, specifically,"
},
{
"code": null,
"e": 1420,
"s": 1386,
"text": "dataset = pd.get_dummies(dataset)"
},
{
"code": null,
"e": 1719,
"s": 1420,
"text": "The above automatically convert all categorical variables to numerical variables. But one drawback of one-hot encoding is the dummy variable trap. It is a scenario in which variables are highly correlated to each other. To avoid the trap, one of the dummy variables has to be dropped. Specifically,"
},
{
"code": null,
"e": 1805,
"s": 1719,
"text": "dataset = dataset.drop(columns = [‘housing_na’, ‘zodiac_sign_na’, ‘payment_type_na’])"
},
{
"code": null,
"e": 1820,
"s": 1805,
"text": "2.2 Data split"
},
{
"code": null,
"e": 1886,
"s": 1820,
"text": "This is to split the data into train and test sets. Specifically,"
},
{
"code": null,
"e": 2023,
"s": 1886,
"text": "X_train, X_test, y_train, y_test = train_test_split(dataset.drop(columns = ‘churn’), dataset[‘churn’], test_size = 0.2,random_state = 0)"
},
{
"code": null,
"e": 2042,
"s": 2023,
"text": "2.3 Data balancing"
},
{
"code": null,
"e": 2232,
"s": 2042,
"text": "There are many ways to combat imbalanced classes, such as changing performance metrics, collecting more data, over-sampling or down-sampling data, etc. Here we use the down-sampling method."
},
{
"code": null,
"e": 2315,
"s": 2232,
"text": "First, let’s investigate the imbalance level of the dependent variable in y_train."
},
{
"code": null,
"e": 2613,
"s": 2315,
"text": "As shown in Fig.1, the dependent variable is slightly imbalanced. To down-sample the data, we take the index of each class, and randomly choose the index of the majority class at a number of minority class in y_train. Then concatenate the index of both classes and down-sample x_train and y_train."
},
{
"code": null,
"e": 3031,
"s": 2613,
"text": "pos_index = y_train[y_train.values == 1].indexneg_index = y_train[y_train.values == 0].indexif len(pos_index) > len(neg_index): higher = pos_index lower = neg_indexelse: higher = neg_index lower = pos_indexrandom.seed(0)higher = np.random.choice(higher, size=len(lower))lower = np.asarray(lower)new_indexes = np.concatenate((lower, higher))X_train = X_train.loc[new_indexes,]y_train = y_train[new_indexes]"
},
{
"code": null,
"e": 3052,
"s": 3031,
"text": "2.4. Feature scaling"
},
{
"code": null,
"e": 3290,
"s": 3052,
"text": "Fundamentally, feature scaling is to normalize the range of the variables. This is to avoid any variable having a dominant impact on the model. For a neural network, feature scaling helps gradient descent converge faster than without it."
},
{
"code": null,
"e": 3360,
"s": 3290,
"text": "Here we use standardization to normalize the variables. Specifically,"
},
{
"code": null,
"e": 3530,
"s": 3360,
"text": "from sklearn.preprocessing import StandardScalersc_X = StandardScaler()X_train2 = pd.DataFrame(sc_X.fit_transform(X_train))X_test2 = pd.DataFrame(sc_X.transform(X_test))"
},
{
"code": null,
"e": 3548,
"s": 3530,
"text": "3. Model building"
},
{
"code": null,
"e": 3562,
"s": 3548,
"text": "Specifically,"
},
{
"code": null,
"e": 3695,
"s": 3562,
"text": "from sklearn.linear_model import LogisticRegressionclassifier = LogisticRegression(random_state = 0)classifier.fit(X_train, y_train)"
},
{
"code": null,
"e": 3749,
"s": 3695,
"text": "Now, let’s test and evaluate the model. Specifically,"
},
{
"code": null,
"e": 3946,
"s": 3749,
"text": "y_pred = classifier.predict(X_test)from sklearn.metrics import confusion_matrix, accuracy_score, f1_scorecm = confusion_matrix(y_test, y_pred)accuracy_score(y_test, y_pred)f1_score(y_test, y_pred)"
},
{
"code": null,
"e": 4024,
"s": 3946,
"text": "Finally, we got an accuracy of 0.61 and F1 of 0. 61. Not too bad performance."
},
{
"code": null,
"e": 4044,
"s": 4024,
"text": "4. Model validation"
},
{
"code": null,
"e": 4287,
"s": 4044,
"text": "With the model trained and tested, one question is how good the model is to generalize to an unknown dataset. We use cross-validation to measure the size of the performance difference between known datasets and unknown datasets. Specifically,"
},
{
"code": null,
"e": 4426,
"s": 4287,
"text": "from sklearn.model_selection import cross_val_scoreaccuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)"
},
{
"code": null,
"e": 4622,
"s": 4426,
"text": "With the above, we found 10-fold cross-validation produces an average accuracy of 0.64.5 with a standard deviation of 0.023. This indicates the model can generalize well on an unknown dataset ✨✨."
},
{
"code": null,
"e": 4642,
"s": 4622,
"text": "5. Feature analysis"
},
{
"code": null,
"e": 4798,
"s": 4642,
"text": "With 41 features, we built a logistic regression model. But how to know which feature is more important in predicting the dependent variable? Specifically,"
},
{
"code": null,
"e": 4940,
"s": 4798,
"text": "pd.concat([pd.DataFrame(X_train.columns, columns = [“features”]), pd.DataFrame(np.transpose(classifier.coef_), columns = [“coef”])],axis = 1)"
},
{
"code": null,
"e": 5208,
"s": 4940,
"text": "As shown in Figure 2, we found two features that are very important: purchase_partners and purchase. This indicates a user’s purchase history plays a great role when deciding churn or not. Meanwhile, this indicates that not all variables are important for prediction."
},
{
"code": null,
"e": 5229,
"s": 5208,
"text": "6. Feature selection"
},
{
"code": null,
"e": 5334,
"s": 5229,
"text": "Feature selection is a technique to select a subset of the most relevant features for modeling training."
},
{
"code": null,
"e": 5649,
"s": 5334,
"text": "In this application, x_train contains 41 features, but as seen in Figure 2, not all features play important roles. Using feature selection helps to reduce the number of unimportant features and achieve similar performance with less training data. A more detailed explanation of feature selection can be found here."
},
{
"code": null,
"e": 5898,
"s": 5649,
"text": "Here, we use the Recursive Feature Elimination (RFE). It works by fitting the given algorithm, ranking the feature by importance, discarding the least important features, and refitting until a specified number of features is achieved. Specifically,"
},
{
"code": null,
"e": 6080,
"s": 5898,
"text": "from sklearn.feature_selection import RFEfrom sklearn.linear_model import LogisticRegressionclassifier = LogisticRegression()rfe = RFE(classifier, 20)rfe = rfe.fit(X_train, y_train)"
},
{
"code": null,
"e": 6160,
"s": 6080,
"text": "Note above, we set to select 20 features. Figure 3 shows all selected features."
},
{
"code": null,
"e": 6233,
"s": 6160,
"text": "Great. with the RFE selected features, let’s retrain and test the model."
},
{
"code": null,
"e": 6363,
"s": 6233,
"text": "classifier.fit(X_train[X_train.columns[rfe.support_]], y_train)y_pred = classifier.predict(X_test[X_train.columns[rfe.support_]])"
},
{
"code": null,
"e": 6479,
"s": 6363,
"text": "In the end, we got an accuracy of 0.61 and F1 of 0.61. The same performance as the model trained on 41 features 😇😇!"
},
{
"code": null,
"e": 6636,
"s": 6479,
"text": "If we apply cross-validation again, we got an average accuracy of 0.647 with a standard deviation of 0.014. Again, very much the same as the previous model."
},
{
"code": null,
"e": 6650,
"s": 6636,
"text": "7. Conclusion"
},
{
"code": null,
"e": 6955,
"s": 6650,
"text": "Initially, we trained a logistic regression model with 41 features, achieving an accuracy of 0.645. But using feature selection, we created a light version of the model with only 20 features, with an accuracy of 0.647. Half of the features are of no relevance in deciding the customer’s churn. Well done!"
}
] |