Python - Extract string between two substrings - GeeksforGeeks | 09 May, 2021
Given a string and two substrings, write a Python program to extract the string between the found two substrings.
Input : test_str = “Gfg is best for geeks and CS”, sub1 = “is”, sub2 = “and”
Output : best for geeks
Explanation : "best for geeks" lies between "is" and "and".
Input : test_str = “Gfg is best for geeks and CS”, sub1 = “for”, sub2 = “and”
Output : geeks
Explanation : "geeks" lies between "for" and "and".
Method #1 : Using index() + loop
In this method, we get the indices of both substrings using index(), then use a loop to iterate over the range between them and collect the required string.
Python3
# Python3 code to demonstrate working
# of Extract string between 2 substrings
# Using loop + index()

# initializing string
test_str = "Gfg is best for geeks and CS"

# printing original string
print("The original string is : " + str(test_str))

# initializing substrings
sub1 = "is"
sub2 = "and"

# getting index of substrings
idx1 = test_str.index(sub1)
idx2 = test_str.index(sub2)

res = ''
# getting elements in between
for idx in range(idx1 + len(sub1) + 1, idx2):
    res = res + test_str[idx]

# printing result
print("The extracted string : " + res)
Output:
The original string is : Gfg is best for geeks and CS
The extracted string : best for geeks
Method #2 : Using index() + string slicing
Similar to the above method, but the extraction is performed with string slicing, giving a more compact solution.
Python3
# Python3 code to demonstrate working
# of Extract string between 2 substrings
# Using index() + string slicing

# initializing string
test_str = "Gfg is best for geeks and CS"

# printing original string
print("The original string is : " + str(test_str))

# initializing substrings
sub1 = "is"
sub2 = "and"

# getting index of substrings
idx1 = test_str.index(sub1)
idx2 = test_str.index(sub2)

# length of substring 1 is added to
# get string from next character
res = test_str[idx1 + len(sub1) + 1: idx2]

# printing result
print("The extracted string : " + res)
Output:
The original string is : Gfg is best for geeks and CS
The extracted string : best for geeks
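The same extraction can also be written with str.partition, which avoids manual index arithmetic. This is a compact alternative sketch, not part of the original article:

```python
test_str = "Gfg is best for geeks and CS"
sub1 = "is"
sub2 = "and"

# take the text after the first occurrence of sub1,
# then the text before the first occurrence of sub2,
# and strip the surrounding spaces
between = test_str.partition(sub1)[2].partition(sub2)[0].strip()
print("The extracted string : " + between)  # best for geeks
```

partition() returns a 3-tuple (before, separator, after), so chaining two calls isolates the text between the two substrings in one expression.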
Reading and Writing to text files in Python Program
In this tutorial, we are going to learn about file handling in Python. We can easily edit files in Python using the built-in functions.
There are two types of files that can be edited in Python. Let's see what they are.
Text Files
Text files are ordinary files that contain readable characters. The content present in such files is called text.
Binary Files
Binary files contain data as 0s and 1s, which is not directly human-readable.
Whenever we work with files in Python, we have to mention the access mode of the file. For example, opening a file to write something to it requires one mode; similarly, there are different access modes for different operations.
Read Only - r
In this mode, we can only read the contents of the file. If the file doesn't exist, then we will get an error.
Read and Write - r+
In this mode, we can read the contents of the file, and we can also write data to the file. If the file doesn't exist, then we will get an error.
Write Only - w
In this mode, we can write content to the file. Data already present in the file will be overwritten. If the file doesn't exist, then it will create a new file.
Append Only - a
In this mode, we can append data to the file at the end. If the file doesn't exist, then it will create a new file.
Append and Write - a+
In this mode, we can append and write data to the file. If the file doesn't exist, then it will create a new file.
Let's see how to write data to a file.
Open a file using the open() in w mode. If you have to read and write data using a file, then open it in an r+ mode.
Write the data to the file using the write() or writelines() method.
Close the file.
We have the following code to achieve our goal.
# opening a file in 'w' mode
file = open('sample.txt', 'w')

# write() - used to write a string directly to the file
# writelines() - used to write multiple lines/strings at a time;
#                it takes an iterable as an argument

# writing data using the write() method
file.write("I am a Python programmer.\nI am happy.")

# closing the file
file.close()
Go to the directory of the program, and you will find a file named sample.txt. Open it and see its content.
We have seen a method to write data to a file. Let's examine how to read the data which we have written to the file.
Open a file using the open() in r mode. If you have to read and write data using a file, then open it in an r+ mode.
Read data from the file using the read(), readline() or readlines() methods. Store the data in a variable.
Display the data.
Close the file.
We have the following code to achieve our goal.
# opening a file in 'r' mode
file = open('sample.txt', 'r')

# read() - reads the entire content of the file
# readline() - reads a single line from the file
# readlines() - reads all the lines from the file and returns them as a list

# reading data from the file using the read() method
data = file.read()

# printing the data
print(data)

# closing the file
file.close()
If you run the above program, you will get the following results.
I am a Python programmer.
I am happy.
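The open/close pattern above can also be written with a context manager, which closes the file automatically even if an error occurs. A minimal sketch, reusing the sample.txt file name from the example above:

```python
# writing with a context manager; the file is closed
# automatically when the with-block exits
with open('sample.txt', 'w') as f:
    f.write("I am a Python programmer.\nI am happy.")

# reading the data back the same way
with open('sample.txt', 'r') as f:
    data = f.read()

print(data)
```

This is generally preferred over calling close() by hand, because the file is released even when an exception is raised inside the block.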
I hope you understood the tutorial. If you have any doubts, mention them in the comment section.
Optional equals() method in Java with Examples - GeeksforGeeks | 30 Jul, 2019
The equals() method of java.util.Optional class in Java is used to check for equality of this Optional with the specified Optional. This method takes an Optional instance and compares it with this Optional and returns a boolean value representing the same.
Syntax:
public boolean equals(Object obj)
Parameter: This method accepts a parameter obj which is the Optional to be checked for equality with this Optional.
Return Value: This method returns a boolean which tells if this Optional is equal to the specified Object.
Exception: This method does not throw any exception.
Program 1:
// Java program to demonstrate
// the above method

import java.util.*;

public class OptionalDemo {

    public static void main(String[] args)
    {
        Optional<Integer> op1 = Optional.of(456);
        System.out.println("Optional 1: " + op1);

        Optional<Integer> op2 = Optional.of(456);
        System.out.println("Optional 2: " + op2);

        System.out.println("Comparing Optional 1"
                           + " and Optional 2: "
                           + op1.equals(op2));
    }
}
Optional 1: Optional[456]
Optional 2: Optional[456]
Comparing Optional 1 and Optional 2: true
Program 2:
// Java program to demonstrate
// the above method

import java.util.*;

public class OptionalDemo {

    public static void main(String[] args)
    {
        Optional<Integer> op1 = Optional.of(456);
        System.out.println("Optional 1: " + op1);

        Optional<Integer> op2 = Optional.empty();
        System.out.println("Optional 2: " + op2);

        System.out.println("Comparing Optional 1"
                           + " and Optional 2: "
                           + op1.equals(op2));
    }
}
Optional 1: Optional[456]
Optional 2: Optional.empty
Comparing Optional 1 and Optional 2: false
Reference: https://docs.oracle.com/javase/9/docs/api/java/util/Optional.html#equals-java.lang.Object-
Application of Syntax Directed Translation - GeeksforGeeks | 21 Aug, 2020
In this article, we cover the applications of Syntax Directed Translation (SDT), along with a real example showing how a problem is solved with these applications. Let's discuss them one by one.
Pre-requisite : Introduction to Syntax Directed Translation
Syntax Directed Translation: It is used for semantic analysis. SDT constructs the parse tree from a grammar augmented with semantic actions. The grammar decides which construct has the highest priority and is evaluated first, while the semantic actions decide what is done when a grammar rule is applied.
Example :
SDT = Grammar+Semantic Action
Grammar = E -> E1+E2
Semantic action= if (E1.type != E2.type) then print "type mismatching"
Application of Syntax Directed Translation :
SDT is used for Executing Arithmetic Expression.
In the conversion from infix to postfix expression.
In the conversion from infix to prefix expression.
It is also used for Binary to decimal conversion.
In counting number of Reduction.
In creating a Syntax tree.
SDT is used to generate intermediate code.
In storing information into symbol table.
SDT is commonly used for type checking also.
Example: Here we walk through an example application of SDT for a better understanding of its uses. Let's consider an arithmetic expression and see how the SDT is constructed.
Let’s consider Arithmetic Expression is given.
Input : 2+3*4
output: 14
SDT for the above example.
SDT for 2+3*4
Semantic Action is given as following.
E -> E+T { E.val = E.val + T.val then print (E.val)}
|T { E.val = T.val}
T -> T*F { T.val = T.val * F.val}
|F { T.val = F.val}
F -> Id {F.val = id}
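As an illustration of the semantic actions above, here is a minimal sketch (in Python, not part of the original article) of a recursive-descent evaluator whose rule functions carry out the `val` computations, evaluating 2+3*4 to 14:

```python
# Minimal recursive-descent evaluator mirroring the SDT rules:
#   E -> E + T   { E.val = E.val + T.val }
#   T -> T * F   { T.val = T.val * F.val }
#   F -> id      { F.val = id }
def evaluate(expr):
    tokens = list(expr.replace(" ", ""))
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1

    def parse_F():
        # F -> id : the semantic action returns the digit's value
        val = int(peek())
        advance()
        return val

    def parse_T():
        # T -> T * F : multiply as each factor is parsed
        val = parse_F()
        while peek() == '*':
            advance()
            val = val * parse_F()
        return val

    def parse_E():
        # E -> E + T : add as each term is parsed
        val = parse_T()
        while peek() == '+':
            advance()
            val = val + parse_T()
        return val

    return parse_E()

print(evaluate("2+3*4"))  # 14
```

Because parse_E calls parse_T, which calls parse_F, multiplication binds tighter than addition, exactly as the grammar's precedence requires (2+3*4 gives 14, not 20).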
Convert a tree to forest of even nodes - GeeksforGeeks | 28 Jun, 2021
Given a tree with n nodes, where n is even. The task is to find the maximum number of edges that can be removed from the given tree so that the resulting forest consists of trees that each have an even number of nodes. The problem is always solvable because the given tree itself has an even number of nodes. Examples:
Input : n = 10
Edge 1: 1 3
Edge 2: 1 6
Edge 3: 1 2
Edge 4: 3 4
Edge 5: 6 8
Edge 6: 2 7
Edge 7: 2 5
Edge 8: 4 9
Edge 9: 4 10
Output : 2
By removing 2 edges we can obtain the forest with even node tree.
Dotted line shows removed edges. Any further removal of edge will not satisfy
the even nodes condition.
Find a subtree with an even number of nodes and detach it from the rest of the tree by removing its connecting edge. After removal, we are left with a tree that still has an even number of nodes, because the original tree had an even count and the removed subtree also has an even count. Repeat the same procedure until the remaining tree cannot be decomposed further in this manner. To do this, the idea is to use Depth First Search to traverse the tree. Implement the DFS function so that it returns the number of nodes in the subtree rooted at the node on which DFS is performed. If that number is even, remove the edge; otherwise, ignore it. Below is an implementation of this approach:
C++
Java
Python3
C#
Javascript
// C++ program to find maximum number to be removed
// to convert a tree into forest containing trees of
// even number of nodes
#include <bits/stdc++.h>
#define N 12
using namespace std;

// Return the number of nodes of subtree having
// node as a root.
int dfs(vector<int> tree[N], int visit[N],
        int* ans, int node)
{
    int num = 0, temp = 0;

    // Mark node as visited.
    visit[node] = 1;

    // Traverse the adjacency list to find non-
    // visited node.
    for (int i = 0; i < tree[node].size(); i++) {
        if (visit[tree[node][i]] == 0) {

            // Finding number of nodes of the subtree
            // of a subtree.
            temp = dfs(tree, visit, ans, tree[node][i]);

            // If nodes are even, increment number of
            // edges to removed.
            // Else leave the node as child of subtree.
            (temp % 2) ? (num += temp) : ((*ans)++);
        }
    }
    return num + 1;
}

// Return the maximum number of edge to remove
// to make forest.
int minEdge(vector<int> tree[N], int n)
{
    int visit[n + 2];
    int ans = 0;
    memset(visit, 0, sizeof visit);
    dfs(tree, visit, &ans, 1);
    return ans;
}

// Driven Program
int main()
{
    int n = 10;
    vector<int> tree[n + 2];
    tree[1].push_back(3);
    tree[3].push_back(1);
    tree[1].push_back(6);
    tree[6].push_back(1);
    tree[1].push_back(2);
    tree[2].push_back(1);
    tree[3].push_back(4);
    tree[4].push_back(3);
    tree[6].push_back(8);
    tree[8].push_back(6);
    tree[2].push_back(7);
    tree[7].push_back(2);
    tree[2].push_back(5);
    tree[5].push_back(2);
    tree[4].push_back(9);
    tree[9].push_back(4);
    tree[4].push_back(10);
    tree[10].push_back(4);
    cout << minEdge(tree, n) << endl;
    return 0;
}
// Java program to find maximum number to be removed
// to convert a tree into forest containing trees of
// even number of nodes
import java.util.*;

class GFG {
    static int N = 12, ans;
    static Vector<Vector<Integer>> tree = new Vector<Vector<Integer>>();

    // Return the number of nodes of subtree having
    // node as a root.
    static int dfs(int visit[], int node)
    {
        int num = 0, temp = 0;

        // Mark node as visited.
        visit[node] = 1;

        // Traverse the adjacency list to find non-
        // visited node.
        for (int i = 0; i < tree.get(node).size(); i++) {
            if (visit[tree.get(node).get(i)] == 0) {

                // Finding number of nodes of the subtree
                // of a subtree.
                temp = dfs(visit, tree.get(node).get(i));

                // If nodes are even, increment number of
                // edges to removed.
                // Else leave the node as child of subtree.
                if (temp % 2 != 0)
                    num += temp;
                else
                    ans++;
            }
        }
        return num + 1;
    }

    // Return the maximum number of edge to remove
    // to make forest.
    static int minEdge(int n)
    {
        int visit[] = new int[n + 2];
        ans = 0;
        dfs(visit, 1);
        return ans;
    }

    // Driven Program
    public static void main(String args[])
    {
        int n = 10;

        // set the size of vector
        for (int i = 0; i < n + 2; i++)
            tree.add(new Vector<Integer>());

        tree.get(1).add(3);
        tree.get(3).add(1);
        tree.get(1).add(6);
        tree.get(6).add(1);
        tree.get(1).add(2);
        tree.get(2).add(1);
        tree.get(3).add(4);
        tree.get(4).add(3);
        tree.get(6).add(8);
        tree.get(8).add(6);
        tree.get(2).add(7);
        tree.get(7).add(2);
        tree.get(2).add(5);
        tree.get(5).add(2);
        tree.get(4).add(9);
        tree.get(9).add(4);
        tree.get(4).add(10);
        tree.get(10).add(4);

        System.out.println(minEdge(n));
    }
}

// This code is contributed by Arnab Kundu
# Python3 program to find maximum
# number to be removed to convert
# a tree into forest containing trees
# of even number of nodes

# Return the number of nodes of
# subtree having node as a root.
def dfs(tree, visit, ans, node):
    num = 0
    temp = 0

    # Mark node as visited.
    visit[node] = 1

    # Traverse the adjacency list
    # to find non-visited node.
    for i in range(len(tree[node])):
        if (visit[tree[node][i]] == 0):

            # Finding number of nodes of
            # the subtree of a subtree.
            temp = dfs(tree, visit, ans, tree[node][i])

            # If nodes are even, increment
            # number of edges to removed.
            # Else leave the node as child
            # of subtree.
            if(temp % 2):
                num += temp
            else:
                ans[0] += 1
    return num + 1

# Return the maximum number of
# edge to remove to make forest.
def minEdge(tree, n):
    visit = [0] * (n + 2)
    ans = [0]
    dfs(tree, visit, ans, 1)
    return ans[0]

# Driver Code
N = 12
n = 10

tree = [[] for i in range(n + 2)]
tree[1].append(3)
tree[3].append(1)
tree[1].append(6)
tree[6].append(1)
tree[1].append(2)
tree[2].append(1)
tree[3].append(4)
tree[4].append(3)
tree[6].append(8)
tree[8].append(6)
tree[2].append(7)
tree[7].append(2)
tree[2].append(5)
tree[5].append(2)
tree[4].append(9)
tree[9].append(4)
tree[4].append(10)
tree[10].append(4)

print(minEdge(tree, n))

# This code is contributed by pranchalK
// C# program to find maximum number
// to be removed to convert a tree into
// forest containing trees of even number of nodes
using System;
using System.Collections.Generic;

class GFG {
    static int N = 12, ans;
    static List<List<int>> tree = new List<List<int>>();

    // Return the number of nodes of
    // subtree having node as a root.
    static int dfs(int[] visit, int node)
    {
        int num = 0, temp = 0;

        // Mark node as visited.
        visit[node] = 1;

        // Traverse the adjacency list to
        // find non-visited node.
        for (int i = 0; i < tree[node].Count; i++) {
            if (visit[tree[node][i]] == 0) {

                // Finding number of nodes of the
                // subtree of a subtree.
                temp = dfs(visit, tree[node][i]);

                // If nodes are even, increment number of
                // edges to removed.
                // Else leave the node as child of subtree.
                if (temp % 2 != 0)
                    num += temp;
                else
                    ans++;
            }
        }
        return num + 1;
    }

    // Return the maximum number of edge
    // to remove to make forest.
    static int minEdge(int n)
    {
        int[] visit = new int[n + 2];
        ans = 0;
        dfs(visit, 1);
        return ans;
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int n = 10;

        // set the size of the adjacency list
        for (int i = 0; i < n + 2; i++)
            tree.Add(new List<int>());

        tree[1].Add(3);
        tree[3].Add(1);
        tree[1].Add(6);
        tree[6].Add(1);
        tree[1].Add(2);
        tree[2].Add(1);
        tree[3].Add(4);
        tree[4].Add(3);
        tree[6].Add(8);
        tree[8].Add(6);
        tree[2].Add(7);
        tree[7].Add(2);
        tree[2].Add(5);
        tree[5].Add(2);
        tree[4].Add(9);
        tree[9].Add(4);
        tree[4].Add(10);
        tree[10].Add(4);

        Console.WriteLine(minEdge(n));
    }
}

// This code is contributed by Rajput-Ji
<script>
// JavaScript program to find maximum number
// to be removed to convert a tree into
// forest containing trees of even number of nodes
var N = 12, ans;
var tree = Array();

// Return the number of nodes of
// subtree having node as a root.
function dfs(visit, node)
{
    var num = 0, temp = 0;

    // Mark node as visited.
    visit[node] = 1;

    // Traverse the adjacency list to
    // find non-visited node.
    for (var i = 0; i < tree[node].length; i++)
    {
        if (visit[tree[node][i]] == 0)
        {
            // Finding number of nodes of the
            // subtree of a subtree.
            temp = dfs(visit, tree[node][i]);

            // If nodes are even, increment number of
            // edges to removed.
            // Else leave the node as child of subtree.
            if (temp % 2 != 0)
                num += temp;
            else
                ans++;
        }
    }
    return num + 1;
}

// Return the maximum number of edge
// to remove to make forest.
function minEdge(n)
{
    var visit = Array(n + 2).fill(0);
    ans = 0;
    dfs(visit, 1);
    return ans;
}

// Driver Code
var n = 10;

// set the size of the adjacency list
for (var i = 0; i < n + 2; i++)
    tree.push(new Array());

tree[1].push(3);
tree[3].push(1);
tree[1].push(6);
tree[6].push(1);
tree[1].push(2);
tree[2].push(1);
tree[3].push(4);
tree[4].push(3);
tree[6].push(8);
tree[8].push(6);
tree[2].push(7);
tree[7].push(2);
tree[2].push(5);
tree[5].push(2);
tree[4].push(9);
tree[9].push(4);
tree[4].push(10);
tree[10].push(4);

document.write(minEdge(n));
</script>
Output:
2
Time Complexity: O(n).
Reference: http://stackoverflow.com/questions/12043252/obtain-forest-out-of-tree-with-even-number-of-nodes
This article is contributed by Anuj Chauhan. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
How to Install Rust on Termux? - GeeksforGeeks | 06 Dec, 2021
Rust is a general-purpose, open-source systems programming language. It can be installed in Termux. Rust is known for its speed, memory safety, and parallelism.
In this article, we will look into the process of installing rust on termux.
Step 1: Open Termux in mobile.
Step 2: Use the below command to install rust on termux:
pkg install rust
Step 3: Press Y to continue & wait for some time.
Step 4: The download and installation progress will start; wait till it finishes.
Step 5: After that installation will be completed.
Step 6: Now use the below command to verify the installation:
rustc --version
Hence installation is successful.
Random number generator in Java
To generate random numbers in Java, use:
import java.util.Random;
Now, take Random class and create an object.
Random num = new Random();
Now, in a loop, use the nextInt() method to get the next random integer value. You can also bound the result; for example, nextInt(20) returns a value from 0 to 19 (the bound is exclusive):
nextInt( 20 );
Let us see the complete example wherein the range is 1 to 10.
import java.util.Random;
public class Demo {
public static void main( String args[] ) {
Random num = new Random();
int res;
for ( int i = 1; i <= 5; i++ ) {
res = 1 + num.nextInt( 10 );
System.out.printf( "%d ", res );
}
}
}
4 5 9 6 9
Conversion of Struct data type to Hex String and vice versa - GeeksforGeeks | 18 Apr, 2022
Most log files produced by a system are in either binary (0,1) or hex (0x) format. Sometimes you might need to map this data into a readable format. Converting hex information into built-in data types such as int/string/float is comparatively easy. On the other hand, for user-defined data types such as a struct, the process can be more involved. The following basic program demonstrates the operation described above; a real-world implementation would need more error handling.
CPP
// C++ Program to convert a 'struct' in 'hex string'
// and vice versa
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>

using namespace std;

struct Student_data {
    int student_id;
    char name[16];
};

void convert_to_hex_string(ostringstream& op,
                           const unsigned char* data, int size)
{
    // Format flags
    ostream::fmtflags old_flags = op.flags();

    // Fill characters
    char old_fill = op.fill();
    op << hex << setfill('0');

    for (int i = 0; i < size; i++) {

        // Give space between two hex values
        if (i > 0)
            op << ' ';

        // force output to use hex version of ascii code
        op << "0x" << setw(2) << static_cast<int>(data[i]);
    }
    op.flags(old_flags);
    op.fill(old_fill);
}

void convert_to_struct(istream& ip, unsigned char* data, int size)
{
    // Get the line we want to process
    string line;
    getline(ip, line);

    istringstream ip_convert(line);
    ip_convert >> hex;

    // Read in unsigned ints, as wrote out hex version
    // of ascii code
    unsigned int u = 0;
    int i = 0;
    while ((ip_convert >> u) && (i < size))
        if ((0x00 <= u) && (0xff >= u))
            data[i++] = static_cast<unsigned char>(u);
}

// Driver code
int main()
{
    Student_data student1 = { 1, "Rohit" };
    ostringstream op;

    // Function call to convert 'struct' into 'hex string'
    convert_to_hex_string(op,
        reinterpret_cast<const unsigned char*>(&student1),
        sizeof(Student_data));
    string output = op.str();
    cout << "After conversion from struct to hex string:\n"
         << output << endl;

    // Get the hex string
    istringstream ip(output);
    Student_data student2 = { 0 };

    // Function call to convert 'hex string' to 'struct'
    convert_to_struct(ip,
        reinterpret_cast<unsigned char*>(&student2),
        sizeof(Student_data));

    cout << "\nAfter Conversion from hex to struct: \n";
    cout << "Id \t: " << student2.student_id << endl;
    cout << "Name \t: " << student2.name << endl;

    return 0;
}
Output:
After conversion from struct to hex string:
0x01 0x00 0x00 0x00 0x52 0x6f 0x68 0x69 0x74 0x00 0x00 0x00 0x00 0x00
0x00 0x00 0x00 0x00 0x00 0x00
After Conversion from hex to struct:
Id : 1
Name : Rohit
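For comparison, a similar round trip can be sketched in Python with the standard struct module. This is an illustrative analogue, not part of the original C++ program; the format string "<i16s" assumes a 4-byte little-endian int followed by a 16-byte name field, matching the layout above:

```python
import struct

# pack an (id, name) record the way the C++ struct lays it out:
# a 4-byte little-endian int followed by a 16-byte char array
packed = struct.pack("<i16s", 1, b"Rohit")

# render as space-separated hex bytes
hex_string = " ".join("0x%02x" % b for b in packed)
print(hex_string)

# parse the hex string back into the struct fields
data = bytes(int(tok, 16) for tok in hex_string.split())
student_id, name = struct.unpack("<i16s", data)
print(student_id, name.rstrip(b"\x00").decode())  # 1 Rohit
```

Note that field padding and byte order are platform details in C++; the explicit "<" prefix pins the byte order in the Python sketch, which the C++ code leaves implicit.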
This article is contributed by Rohit Kasle. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
C Program to print pyramid pattern - GeeksforGeeks | 25 Oct, 2021
Given a positive integer n, the task is to print the pyramid pattern as described in the examples below. Examples:
Input : 2
Output :
1
22
1
Input : 3
Output :
1
22
333
22
1
Input : 5
Output :
1
22
333
4444
55555
4444
333
22
1
Algorithm:
//Print All Pattern
printPattern(int N)
// Print upper pattern
/* 1
22
333
...
N...(N)times */
for i->1 to N
for j->1 to i
print i
print next line
// Print lower triangle
/* N-1....(N-1)times
...
333
22
1 */
for i->N-1 to 0
for j->i to 0
print i
print next line
C++
Java
Python3
PHP
JavaScript
// C++ program to print the pyramid pattern
#include <bits/stdc++.h>
using namespace std;

// Print the pattern upto n
void printPattern(int n)
{
    // Printing upper part
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= i; j++)
            cout << i;
        cout << "\n";
    }

    // printing lower part
    for (int i = n - 1; i > 0; i--) {
        for (int j = i; j > 0; j--)
            cout << i;
        cout << "\n";
    }
}

// Driver Code
int main()
{
    int n = 8;
    printPattern(n);
    return 0;
}
// Java program to print the pyramid pattern

class GFG {

    // Print the pattern upto n
    static void printPattern(int n)
    {
        // Printing upper part
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= i; j++)
                System.out.print(i);
            System.out.print("\n");
        }

        // printing lower part
        for (int i = n - 1; i > 0; i--) {
            for (int j = i; j > 0; j--)
                System.out.print(i);
            System.out.print("\n");
        }
    }

    // Driver code
    public static void main(String arg[])
    {
        int n = 8;
        printPattern(n);
    }
}

// This code is contributed by Anant Agarwal.
# Python program to print
# the pyramid pattern

# Print the pattern upto n
def printPattern(n):

    # Printing upper part
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            print(i, end="")
        print("")

    # printing lower part
    for i in range(n - 1, 0, -1):
        for j in range(i, 0, -1):
            print(i, end="")
        print("")

# driver code
n = 8
printPattern(n)

# This code is contributed
# by Anant Agarwal.
<?php
// php program to print the
// pyramid pattern
// Print the pattern upto n
function printPattern($n)
{
// Printing upper part
for ($i = 1; $i <= $n; $i++)
{
for ($j = 1; $j <= $i; $j++)
echo $i;
echo "\n";
}
// printing lower part
for ($i = $n - 1; $i > 0; $i--)
{
for ($j = $i; $j > 0; $j--)
echo $i;
echo "\n";
}
}
// Driver Code
$n = 8;
printPattern($n);
// This code is contributed by mits
?>
<script>
// javascript program to print the pyramid pattern

// Print the pattern upto n
function printPattern(n)
{
    // Printing upper part
    for (var i = 1; i <= n; i++)
    {
        for (var j = 1; j <= i; j++)
            document.write(i);
        document.write("<br>");
    }

    // printing lower part
    for (var i = n - 1; i > 0; i--)
    {
        for (var j = i; j > 0; j--)
            document.write(i);
        document.write("<br>");
    }
}

// Driver code
var n = 8;
printPattern(n);

// This code is contributed by 29AjayKumar
</script>
Output:
1
22
333
4444
55555
666666
7777777
88888888
7777777
666666
55555
4444
333
22
1
This article is contributed by Sahil Rajput. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
Java Program to Get the Basic File Attributes - GeeksforGeeks | 05 Feb, 2021
Basic file attributes are attributes associated with a file in a file system and are common to many file systems. In order to get the basic file attributes, we use the BasicFileAttributes interface, which was introduced with Java 7 as part of the NIO.2 file API in the java.nio.file package.
The basic file attributes include information such as the creation time, last access time, last modified time, and size of the file (in bytes). These attributes also tell us whether the file is a regular file, a directory, a symbolic link, or something other than those three.
Methods that are used to get the basic file attributes are: creationTime(), lastAccessTime(), lastModifiedTime(), size(), isRegularFile(), isDirectory(), isSymbolicLink(), and isOther().
Below is the Java Program to get the basic file attributes:
Java
// Java Program to get the basic file attributes of the file
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class GFG {
    public static void main(String args[]) throws IOException
    {
        // path of the file
        String path = "C:/Users/elavi/Desktop/GFG_File.txt";

        // creating an object of Path class
        Path file = Paths.get(path);

        // reading the basic attributes of the file
        BasicFileAttributes attr = Files.readAttributes(
            file, BasicFileAttributes.class);

        System.out.println("creationTime Of File Is = "
                           + attr.creationTime());
        System.out.println("lastAccessTime Of File Is = "
                           + attr.lastAccessTime());
        System.out.println("lastModifiedTime Of File Is = "
                           + attr.lastModifiedTime());
        System.out.println("size Of File Is = " + attr.size());
        System.out.println("isRegularFile Of File Is = "
                           + attr.isRegularFile());
        System.out.println("isDirectory Of File Is = "
                           + attr.isDirectory());
        System.out.println("isOther Of File Is = " + attr.isOther());
        System.out.println("isSymbolicLink Of File Is = "
                           + attr.isSymbolicLink());
    }
}
Note: Above Program will run only on system IDE, it will not run on an online IDE.
Sort and separate odd and even numbers in an Array using custom comparator - GeeksforGeeks | 12 Jan, 2022
Given an array arr[], containing N elements, the task is to sort and separate odd and even numbers in an Array using a custom comparator.
Example:
Input: arr[] = { 5, 3, 2, 8, 7, 4, 6, 9, 1 }Output: 2 4 6 8 1 3 5 7 9
Input: arr[] = { 12, 15, 6, 2, 7, 13, 9, 4 }Output: 2 4 6 12 7 9 13 15
Approach: As we know std::sort() is used for sorting in increasing order but we can manipulate sort() using a custom comparator to some specific sorting.Now, to separate them, the property that can be used is that the last bit of an even number is 0 and in an odd number, it is 1. So, make the custom comparator to sort the elements based on the last bit of that number.
Below is the implementation of the above approach:
C++
Java
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Creating custom comparator
bool compare(int a, int b)
{
    // If both are odd or even
    // then sort in increasing order
    if ((a & 1) == (b & 1)) {
        return a < b;
    }

    // Sort on the basis of the last bit if
    // one is odd and the other one is even
    return (a & 1) < (b & 1);
}

// Function to sort and separate odd and even numbers
void separateOddEven(int* arr, int N)
{
    // Separating them using the sort comparator
    sort(arr, arr + N, compare);

    for (int i = 0; i < N; ++i) {
        cout << arr[i] << ' ';
    }
}

// Driver Code
int main()
{
    int arr[] = { 12, 15, 6, 2, 7, 13, 9, 4 };
    int N = sizeof(arr) / sizeof(int);

    separateOddEven(arr, N);
}
// Java program for the above approach
import java.util.*;

class GFG {

    // Function to sort and separate odd and even numbers
    static void separateOddEven(Integer[] arr, int N)
    {
        // Separating them using a custom sort comparator
        Arrays.sort(arr, new Comparator<Integer>() {
            @Override
            public int compare(Integer a, Integer b)
            {
                // If both are odd or even
                // then sort in increasing order
                if ((a & 1) == (b & 1)) {
                    return a < b ? -1 : 1;
                }

                // Sort on the basis of the last bit if
                // one is odd and the other one is even
                return ((a & 1) < (b & 1)) ? -1 : 1;
            }
        });

        for (int i = 0; i < N; ++i) {
            System.out.print(arr[i] + " ");
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        Integer arr[] = { 12, 15, 6, 2, 7, 13, 9, 4 };
        int N = arr.length;

        separateOddEven(arr, N);
    }
}

// This code is contributed by 29AjayKumar
// C# program for the above approach
using System;
using System.Collections;

class compare : IComparer {

    // Compares two integers by parity first, then by value
    public int Compare(Object x, Object y)
    {
        int a = (int)x;
        int b = (int)y;

        // If both are odd or even
        // then sort in increasing order
        if ((a & 1) == (b & 1)) {
            return a < b ? -1 : 1;
        }

        // Sort on the basis of the last bit if
        // one is odd and the other one is even
        return ((a & 1) < (b & 1)) ? -1 : 1;
    }
}

class GFG {

    // Function to sort and separate odd and even numbers
    static void separateOddEven(int[] arr, int N)
    {
        // Instantiate the IComparer object
        IComparer cmp = new compare();

        // Separating them using the sort comparator
        Array.Sort(arr, cmp);

        for (int i = 0; i < N; ++i) {
            Console.Write(arr[i] + " ");
        }
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int[] arr = { 12, 15, 6, 2, 7, 13, 9, 4 };
        int N = arr.Length;

        separateOddEven(arr, N);
    }
}

// This code is contributed by shikhasingrajput
<script>
// Javascript program for the above approach

// Creating custom comparator
// (a JavaScript comparator must return a negative,
// zero, or positive number, not a boolean)
function compare(a, b)
{
    // If both are odd or even
    // then sort in increasing order
    if ((a & 1) == (b & 1)) {
        return a - b;
    }

    // Sort on the basis of the last bit if
    // one is odd and the other one is even
    return (a & 1) - (b & 1);
}

// Function to sort and separate odd and even numbers
function separateOddEven(arr, N)
{
    // Separating them using the sort comparator
    arr.sort(compare);

    for (let i = 0; i < N; ++i) {
        document.write(arr[i] + " ");
    }
}

// Driver Code
let arr = [ 12, 15, 6, 2, 7, 13, 9, 4 ];
let N = arr.length;
separateOddEven(arr, N);

// This code is contributed by Samim Hossain Mondal.
</script>
2 4 6 12 7 9 13 15
Time Complexity: O(N * log N)
Auxiliary Space: O(1)
MySQL Database Connection
PHP provides the mysql_connect function to open a database connection. This function takes five parameters and returns a MySQL link identifier on success, or FALSE on failure. (Note that the legacy mysql_* extension described here was removed in PHP 7; current PHP code should use mysqli or PDO instead.)
connection mysql_connect(server,user,passwd,new_link,client_flag);
server − Optional. The host name running the database server. If not specified, the default value is localhost:3306.
user − Optional. The username accessing the database. If not specified, the default is the name of the user that owns the server process.
passwd − Optional. The password of the user accessing the database. If not specified, the default is an empty password.
new_link − Optional. If a second call is made to mysql_connect() with the same arguments, no new connection will be established; instead, the identifier of the already opened connection will be returned.
client_flags − Optional. A combination of the following constants −
MYSQL_CLIENT_SSL − Use SSL encryption
MYSQL_CLIENT_COMPRESS − Use compression protocol
MYSQL_CLIENT_IGNORE_SPACE − Allow space after function names
MYSQL_CLIENT_INTERACTIVE − Allow interactive_timeout seconds of inactivity before closing the connection
NOTE − You can specify server, user, passwd in php.ini file instead of using them again and again in your every PHP scripts. Check php.ini file configuration.
PHP provides the mysql_close function to close a database connection. It takes the connection resource returned by the mysql_connect function, and returns TRUE on success or FALSE on failure.
bool mysql_close ( resource $link_identifier );
If the resource is not specified, the last opened connection is closed.
Try out following example to open and close a database connection −
<?php
$dbhost = 'localhost:3036';
$dbuser = 'guest';
$dbpass = 'guest123';
$conn = mysql_connect($dbhost, $dbuser, $dbpass);
if(! $conn ) {
die('Could not connect: ' . mysql_error());
}
echo 'Connected successfully';
mysql_close($conn);
?>
JavaScript DOM EventListener
Add an event listener that fires when a user clicks a button:
The addEventListener() method attaches an event handler to the specified element.
The addEventListener() method attaches an event handler to an element without overwriting existing event handlers.
You can add many event handlers to one element.
You can add many event handlers of the same type to one element, i.e two "click" events.
You can add event listeners to any DOM object not only HTML elements. i.e the window object.
The addEventListener() method makes it easier to control how the event reacts to bubbling.
When using the addEventListener() method, the JavaScript is separated from the HTML markup, for better readability; it also allows you to add event listeners even when you do not control the HTML markup.
You can easily remove an event listener by using the removeEventListener() method.
The syntax is: element.addEventListener(event, function, useCapture);
The first parameter is the type of the event (like "click" or "mousedown" or any other HTML DOM event).
The second parameter is the function we want to call when the event occurs.
The third parameter is a boolean value specifying whether to use event bubbling or event capturing. This parameter is optional.
Note that you don't use the "on" prefix for the event; use "click" instead of "onclick".
Alert "Hello World!" when the user clicks on an element:
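A minimal runnable sketch of this pattern. Since there is no browser here, a plain EventTarget stands in for a DOM element (real elements expose the same addEventListener API), and `alert` is stubbed; both stand-ins are illustrative assumptions, not part of the original page.

```javascript
// Stand-ins so the sketch runs outside a browser; in a real page the
// element would come from document.getElementById(...) and alert() is global.
const element = new EventTarget();
let alerted = null;
const alert = (msg) => { alerted = msg; };

// Attach a "click" handler with an inline (anonymous) function:
element.addEventListener("click", function () {
  alert("Hello World!");
});

// Simulate the user clicking the element:
element.dispatchEvent(new Event("click"));
console.log(alerted); // → Hello World!
```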
You can also refer to an external "named" function:
Alert "Hello World!" when the user clicks on an element:
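The same idea with a separately defined ("named") handler function; again a plain EventTarget stands in for a DOM element so the sketch is runnable outside a browser, and the handler name is illustrative.

```javascript
const element = new EventTarget(); // stand-in for a DOM element
let message = null;

// A named function used as the event handler:
function myFunction() {
  message = "Hello World!";
}

// Pass the function reference itself -- no parentheses:
element.addEventListener("click", myFunction);

element.dispatchEvent(new Event("click")); // simulate a click
console.log(message); // → Hello World!
```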
The addEventListener() method allows you to add many events to the same element, without overwriting existing events:
You can add events of different types to the same element:
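A sketch showing that multiple handlers coexist on one target without overwriting each other: two "click" handlers plus a "mouseover" handler. The EventTarget again stands in for a DOM element; handler labels are illustrative.

```javascript
const element = new EventTarget(); // stand-in for a DOM element
const calls = [];

// Two handlers of the SAME type -- the second does not replace the first:
element.addEventListener("click", () => calls.push("first click handler"));
element.addEventListener("click", () => calls.push("second click handler"));

// A handler of a different type on the same element:
element.addEventListener("mouseover", () => calls.push("mouseover handler"));

element.dispatchEvent(new Event("click"));
element.dispatchEvent(new Event("mouseover"));
console.log(calls.length); // → 3
```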
The addEventListener() method allows you to add event listeners on any HTML DOM object, such as HTML elements, the HTML document, the window object, or other objects that support events, like the xmlHttpRequest object.
Add an event listener that fires when a user resizes the window:
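A sketch of a resize listener. Here a fresh EventTarget stands in for the browser's window object (which also implements the EventTarget interface), and the resize event is dispatched manually since there is no real window to resize; both are assumptions for illustration.

```javascript
const win = new EventTarget(); // stand-in for the browser window object
let resizeCount = 0;

win.addEventListener("resize", function () {
  resizeCount += 1; // in a page this might update some element's text instead
});

// Simulate the window being resized twice:
win.dispatchEvent(new Event("resize"));
win.dispatchEvent(new Event("resize"));
console.log(resizeCount); // → 2
```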
When passing parameter values, use an "anonymous function" that calls the specified function with the parameters:
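A sketch of the anonymous-function wrapper: addEventListener does not let you pass extra arguments directly, so the wrapper closes over them and forwards the call. The EventTarget stand-in, `myFunction`, and its parameters are all illustrative.

```javascript
const element = new EventTarget(); // stand-in for a DOM element
let result = null;

// The function we actually want to run needs parameters:
function myFunction(p1, p2) {
  result = p1 * p2;
}

// Wrong: addEventListener("click", myFunction(6, 7)) would CALL it immediately.
// Right: wrap the call in an anonymous function:
element.addEventListener("click", function () {
  myFunction(6, 7);
});

element.dispatchEvent(new Event("click"));
console.log(result); // → 42
```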
There are two ways of event propagation in the HTML DOM, bubbling and capturing.
Event propagation is a way of defining the element order when an event occurs.
If you have a <p> element inside a <div> element, and the user clicks on the <p> element, which element's "click" event should be handled first?
In bubbling the innermost element's event is handled first and then the outer: the <p> element's click event is handled first, then the <div> element's click event.
In capturing the outermost element's event is handled first and then the inner: the <div> element's click event will be handled first, then the <p> element's click event.
With the addEventListener() method you can specify the propagation type by using the "useCapture" parameter:
The default value is false, which will use the bubbling propagation; when the value is set to true, the event uses the capturing propagation.
The removeEventListener() method removes event handlers that have been attached with the addEventListener() method:
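A sketch of the removal semantics: removeEventListener only works when given the same function reference that was passed to addEventListener (so an inline anonymous function can never be removed). The EventTarget stands in for a DOM element.

```javascript
const element = new EventTarget(); // stand-in for a DOM element
let count = 0;

function handler() {
  count += 1;
}

element.addEventListener("click", handler);
element.dispatchEvent(new Event("click")); // count becomes 1

// Detach using the SAME reference that was attached:
element.removeEventListener("click", handler);
element.dispatchEvent(new Event("click")); // handler no longer fires

console.log(count); // → 1
```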
For a list of all HTML DOM events, look at our complete HTML DOM Event Object Reference.
Out of memory exception in Java
Whenever you create an object in Java, it is stored in the heap area of the JVM. If the JVM is not able to allocate memory for newly created objects, an error named OutOfMemoryError is thrown.
This usually occurs when we keep references to objects alive for a long time, or try to process a huge amount of data at once.
There are three common types of OutOfMemoryError −
Java heap space.
GC Overhead limit exceeded.
Permgen space.
public class SpaceErrorExample {
public static void main(String args[]) throws Exception {
Float[] array = new Float[10000 * 100000];
}
}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at sample.SpaceErrorExample.main(SpaceErrorExample.java:7)
import java.util.ArrayList;
import java.util.ListIterator;
public class OutOfMemoryExample{
public static void main(String args[]) {
//Instantiating an ArrayList object
ArrayList<String> list = new ArrayList<String>();
//populating the ArrayList
list.add("apples");
list.add("mangoes");
list.add("oranges");
//Getting the Iterator object of the ArrayList
ListIterator<String> it = list.listIterator();
while(it.hasNext()) {
it.add("");
}
}
}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at sample.SpaceErrorExample.main(SpaceErrorExample.java:7)
VB.Net - PictureBox Control
The PictureBox control is used for displaying images on the form. The Image property of the control allows you to set an image both at design time and at run time.
Let's create a picture box by dragging a PictureBox control from the Toolbox and dropping it on the form.
The following are some of the commonly used properties of the PictureBox control −
AllowDrop
Specifies whether the picture box accepts data that a user drags on it.
ErrorImage
Gets or specifies an image to be displayed when an error occurs during the image-loading process or if the image load is cancelled.
Image
Gets or sets the image that is displayed in the control.
ImageLocation
Gets or sets the path or the URL for the image displayed in the control.
InitialImage
Gets or sets the image displayed in the control when the main image is loaded.
SizeMode
Determines the size of the image to be displayed in the control. This property takes its value from the PictureBoxSizeMode enumeration, which has values −
Normal − the upper left corner of the image is placed at the upper left part of the picture box
StrechImage − allows stretching of the image
AutoSize − allows resizing the picture box to the size of the image
CenterImage − allows centering the image in the picture box
Zoom − allows increasing or decreasing the image size to maintain the size ratio.
TabIndex
Gets or sets the tab index value.
TabStop
Specifies whether the user will be able to focus on the picture box by using the TAB key.
Text
Gets or sets the text for the picture box.
WaitOnLoad
Specifies whether or not an image is loaded synchronously.
The following are some of the commonly used methods of the PictureBox control −
CancelAsync
Cancels an asynchronous image load.
Load
Displays an image in the picture box
LoadAsync
Loads image asynchronously.
ToString
Returns the string that represents the current picture box.
The following are some of the commonly used events of the PictureBox control −
CausesValidationChanged
Overrides the Control.CausesValidationChanged property.
Click
Occurs when the control is clicked.
Enter
Overrides the Control.Enter property.
FontChanged
Occurs when the value of the Font property changes.
ForeColorChanged
Occurs when the value of the ForeColor property changes.
KeyDown
Occurs when a key is pressed when the control has focus.
KeyPress
Occurs when a key is pressed when the control has focus.
KeyUp
Occurs when a key is released when the control has focus.
Leave
Occurs when input focus leaves the PictureBox.
LoadCompleted
Occurs when the asynchronous image-load operation is completed, been canceled, or raised an exception.
LoadProgressChanged
Occurs when the progress of an asynchronous image-loading operation has changed.
Resize
Occurs when the control is resized.
RightToLeftChanged
Occurs when the value of the RightToLeft property changes.
SizeChanged
Occurs when the Size property value changes.
SizeModeChanged
Occurs when SizeMode changes.
TabIndexChanged
Occurs when the value of the TabIndex property changes.
TabStopChanged
Occurs when the value of the TabStop property changes.
TextChanged
Occurs when the value of the Text property changes.
In this example, let us put a picture box and a button control on the form. We set the image property of the picture box to logo.png, as we used before. The Click event of the button named Button1 is coded to stretch the image to a specified size −
Public Class Form1
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
' Set the caption bar text of the form.
Me.Text = "tutorialspoint.com"
End Sub
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
PictureBox1.ClientSize = New Size(300, 300)
PictureBox1.SizeMode = PictureBoxSizeMode.StretchImage
End Sub
End Class
Design View −
When the application is executed, it displays −
Clicking on the button results in −
Add delay in Arduino
In order to add time delays in Arduino, you can use the delay() function. It takes as an argument the value of the delay in milliseconds. An example execution is given below −
void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
}
void loop() {
// put your main code here, to run repeatedly:
Serial.print("Hello!");
delay(2000);
}
The above code prints "Hello!" every 2 seconds. As you may have guessed, the minimum delay you can introduce using the delay() function is 1 millisecond. What if you want an even shorter delay? Arduino has a delayMicroseconds() function for that, which takes in the value of the delay in microseconds as the argument.
void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
}
void loop() {
// put your main code here, to run repeatedly:
Serial.print("Hello!");
delayMicroseconds(2000);
}
The above code prints "Hello!" every 2 milliseconds (2000 microseconds). Note that delayMicroseconds() produces accurate delays only for values up to 16383; for longer pauses, use delay().
How To Change The Order Of Columns In Pandas | Towards Data Science
Reordering columns in pandas DataFrames is one of the most common operations we want to perform. This is usually useful when it comes down to presenting results to other people, as we need to order (at least a few) columns in some logical order.
In today’s article we are going to discuss how to change the order of columns in pandas DataFrames using
slicing of the original frame — mostly relevant when you need to re-order most of the columns
insert() method — if you want to insert a single column into a specified index
set_index() — if you need to move a column to the front of the DataFrame
and, reindex() method — mostly relevant to cases where you can specify column indices in the order you wish them to appear (e.g. in alphabetical order)
First, let’s create an example DataFrame that we’ll reference throughout this guide.
import pandas as pd

df = pd.DataFrame({
    'colA': [1, 2, 3],
    'colB': ['a', 'b', 'c'],
    'colC': [True, False, False],
    'colD': [10, 20, 30],
})

print(df)
#    colA colB   colC  colD
# 0     1    a   True    10
# 1     2    b  False    20
# 2     3    c  False    30
The easiest way is to slice the original DataFrame using a list containing the column names in the new order you wish them to follow:
df = df[['colD', 'colB', 'colC', 'colA']]

print(df)
#    colD colB   colC  colA
# 0    10    a   True     1
# 1    20    b  False     2
# 2    30    c  False     3
This method is probably good enough if you want to re-order most of the columns’ names (and probably your DataFrame does have too many columns).
If you need to insert column into DataFrame at specified location then pandas.DataFrame.insert() should do the trick. However, you should make sure that the column is first taken out of the original DataFrame otherwise a ValueError will be raised with the following message:
ValueError: cannot insert column_name, already exists
Therefore, before calling insert() we first need to do a pop() over the DataFrame in order to drop the column from the original DataFrame and retain its information. For instance, if we want to place colD as the first column of the frame we first need to pop() the column and then insert it back, this time to the desired index.
col = df.pop("colD")
df.insert(0, col.name, col)

print(df)
#    colD  colA colB   colC
# 0    10     1    a   True
# 1    20     2    b  False
# 2    30     3    c  False
If you want to move a column to the front of a pandas DataFrame, then set_index() is your friend.
First, you specify the column we wish to move to the front, as the index of the DataFrame and then reset the index so that the old index is added as a column, and a new sequential index is used. Again, notice how we pop() the column so that it gets dropped before is being added as an index. This is required otherwise a name collision will occur when attempting to make the old index the first column of the DataFrame.
df.set_index(df.pop('colD'), inplace=True)
#       colA colB   colC
# colD
# 10       1    a   True
# 20       2    b  False
# 30       3    c  False

df.reset_index(inplace=True)
#    colD  colA colB   colC
# 0    10     1    a   True
# 1    20     2    b  False
# 2    30     3    c  False
Finally, if you want to specify column indices in the order you wish them to appear (e.g. in alphabetical order) you can use reindex() method to conform the DataFrame to new index.
For example, let’s suppose we need to order the column names alphabetically in descending order
df = df.reindex(columns=sorted(df.columns, reverse=True))
#    colD   colC colB  colA
# 0    10   True    a     1
# 1    20  False    b     2
# 2    30  False    c     3
Note that the above is equivalent to
df.reindex(sorted(df.columns, reverse=True), axis='columns')
In today’s short guide we discussed how to change the order of columns in pandas DataFrames in many different ways. Make sure you pick the right method based on your specific requirements.
Can we use stored procedure to insert records into two tables at once in MySQL?
Yes, you can use a stored procedure to insert into two tables with a single call. Let us first create a table −
mysql> create table DemoTable
(
StudentId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
StudentFirstName varchar(20)
);
Query OK, 0 rows affected (0.56 sec)
Here is the query to create second table −
mysql> create table DemoTable2
(
ClientId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
ClientName varchar(20),
ClientAge int
);
Query OK, 0 rows affected (0.76 sec)
Following is the query to create stored procedure to insert into two tables created above −
mysql> DELIMITER //
mysql> CREATE PROCEDURE insert_into_twoTables(name varchar(100),age int)
BEGIN
INSERT INTO DemoTable(StudentFirstName) VALUES(name);
INSERT INTO DemoTable2(ClientName,ClientAge) VALUES(name,age);
END
//
Query OK, 0 rows affected (0.14 sec)
mysql> DELIMITER ;
Now call the stored procedure with the help of CALL command −
mysql> call insert_into_twoTables('Tom',38);
Query OK, 1 row affected, 1 warning (0.41 sec)
Check the record is inserted into both tables or not.
The query to display all records from the first table is as follows −
mysql> select *from DemoTable;
This will produce the following output −
+-----------+------------------+
| StudentId | StudentFirstName |
+-----------+------------------+
| 1 | Tom |
+-----------+------------------+
1 row in set (0.00 sec)
Following is the query to display all records from the second table −
mysql> select *from DemoTable2;
This will produce the following output −
+----------+------------+-----------+
| ClientId | ClientName | ClientAge |
+----------+------------+-----------+
| 1 | Tom | 38 |
+----------+------------+-----------+
1 row in set (0.00 sec)
How to use InputFilter to limit characters in an editText in Android?
This example demonstrates how to use an InputFilter to limit characters in an EditText in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<EditText
android:id="@+id/editText"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="40dp" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.text.InputFilter;
import android.widget.EditText;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
EditText editText;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
editText = findViewById(R.id.editText);
int maxTextLength = 15;
editText.setFilters(new InputFilter[]{new InputFilter.LengthFilter(maxTextLength)});
Toast.makeText(this, "EditText limit set to 15 characters", Toast.LENGTH_SHORT).show();
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –
Change Color of Range in ggplot2 Heatmap in R - GeeksforGeeks | 18 Jul, 2021
A heatmap depicts the relationship between two attributes of a dataframe as a color-coded tile. A heatmap produces a grid with multiple attributes of the dataframe, representing the relationship between the two attributes taken at a time.
Dataset in use: bestsellers
Let us first create a regular heatmap with colors provided by default. We will use geom_tile() function of ggplot2 library. It is essentially used to create heatmaps.
Syntax: geom_tile(x,y,fill)
Parameter:
x: position on x-axis
y: position on y-axis
fill: numeric values that will be translated to colors
To this function, Var1 and Var2 of the melted dataframe are passed to x and y respectively; these represent the relationship between the attributes taken two at a time. To the fill parameter, provide the value column, since it will be used to color-code the tiles based on its numeric values.
Example:
R
library(ggplot2)
library(reshape2)

df <- read.csv("bestsellers.csv")

data <- cor(df[sapply(df, is.numeric)])
data1 <- melt(data)

ggplot(data1, aes(x = Var1, y = Var2, fill = value)) +
  geom_tile()
Output:
In this method, the starting and ending values of the color range are given as arguments.
Syntax: scale_fill_gradient(low, high, guide)
Parameter:
low: starting value
high: ending value
guide: type of legend
Example:
R
library(ggplot2)
library(reshape2)

df <- read.csv("bestsellers.csv")
data <- cor(df[sapply(df, is.numeric)])
data1 <- melt(data)

ggplot(data1, aes(x = Var1, y = Var2, fill = value)) +
  geom_tile() +
  scale_fill_gradient(low = "#86ebc9", high = "#09855c", guide = "colorbar")
Output:
Up until now, colors were mapped to continuous values. In this method, the values are first converted into discrete ranges using the cut() function.
Syntax: cut(data, breaks)
Where breaks take a vector with values to divide the data by. Now again plot a heatmap but with the new data created after making it discrete. To add colors to such heatmap in ranges, use scale_fill_manual() with a vector of the colors for each range.
Syntax: scale_fill_manual(interval, values=vector of colors)
Example:
R
library(ggplot2)
library(reshape2)

df <- read.csv("bestsellers.csv")
data <- cor(df[sapply(df, is.numeric)])
data1 <- melt(data)

data2 <- data1
data2$group <- cut(data2$value, breaks = c(-1, -0.5, 0, 0.5, 1))

ggplot(data2, aes(x = Var1, y = Var2, fill = group)) +
  geom_tile() +
  scale_fill_manual(breaks = levels(data2$group),
                    values = c("#86ebc9", "#869ceb", "#b986eb", "#a1eb86"))
Output:
How to update a pickle file in Python? - GeeksforGeeks | 23 Nov, 2021
The Python pickle module is used for serializing and de-serializing a Python object structure. Any object in Python can be pickled so that it can be saved on disk. What pickle does is "serialize" the object before writing it to a file. Pickling is a way to convert a Python object (list, dict, etc.) into a character stream that contains all the information necessary to reconstruct the object in another Python script.

Pickle serializes a single object at a time and reads back a single object at a time – the pickled records are stored in sequence in the file. A single call to pickle.load reads only the first object serialized into the file. After unserializing it, the file pointer sits at the beginning of the next record, so calling pickle.load again returns the next object; repeat until the end of the file is reached.
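As a quick sketch of this record-at-a-time behaviour, the loop below writes three pickled objects into an in-memory buffer (io.BytesIO stands in for a real file here, and the sample objects are made up for illustration) and reads them back one by one until EOFError is raised:

```python
import io
import pickle

buf = io.BytesIO()                      # stands in for a file opened in "wb" mode
for obj in ([1, 2], {"a": 3}, "end"):
    pickle.dump(obj, buf)               # records are appended in sequence

buf.seek(0)                             # rewind to the first record
items = []
while True:
    try:
        items.append(pickle.load(buf))  # each load() consumes exactly one record
    except EOFError:                    # raised once every record has been read
        break

print(items)                            # [[1, 2], {'a': 3}, 'end']
```

The same loop works unchanged on a real file object opened in binary mode.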
Functions Used:
load()– used to read a pickled object representation from the open file object file and return the reconstituted object hierarchy specified.
Syntax:
pickle.load(file, *, fix_imports = True, encoding = “ASCII”, errors = “strict”)
seek(0)– repositions the file pointer; seek(0) moves it back to the beginning of the file. Pickle records can be concatenated into a file, so you can call pickle.load(f) multiple times, but the file itself is not indexed in a way that lets you seek directly to a given record – seeking into the middle of a record leaves data that cannot be unpickled. If you need random access, look into the built-in shelve module, which builds a dictionary-like interface on top of pickle using a database file.
dump()– used to write a pickled representation of obj to the open file object file
Syntax:
pickle.dump(obj, file, protocol = None, *, fix_imports = True)
truncate()– changes the file size
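Taken together, load(), seek(0), truncate() and dump() form the read-modify-rewrite pattern that the update program later in this article relies on. The following is a minimal sketch of that pattern using a throwaway temporary file and made-up records rather than the travel data:

```python
import os
import pickle
import tempfile

# hypothetical demo file in a temporary directory
path = os.path.join(tempfile.mkdtemp(), "records.dat")

# write two pickled records
with open(path, "wb") as f:
    for rec in ([1, "Agra", 2], [2, "Goa", 1]):
        pickle.dump(rec, f)

# read every record, update the matching one, then rewrite the whole file
with open(path, "rb+") as f:
    records = []
    while True:
        try:
            rec = pickle.load(f)
        except EOFError:
            break
        if rec[0] == 2:
            rec[2] = 5            # the field being updated
        records.append(rec)
    f.seek(0)                     # back to the start of the file...
    f.truncate()                  # ...and drop the old contents
    for rec in records:
        pickle.dump(rec, f)

with open(path, "rb") as f:
    print(pickle.load(f), pickle.load(f))   # [1, 'Agra', 2] [2, 'Goa', 5]
```

Because pickle records cannot be edited in place, rewriting the whole file after truncating it is the simplest safe way to update one record.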
To start with, we first have to insert it into a pickle file. The implementation is given below:
Approach
Import module
Open file in write mode
Enter data
Dump data to the file
Continue until the choice is yes
Close File
Program:
Python3
# Python3 program to write travel records to a pickle file
import pickle

print("GFG")

def write_file():
    f = open("travel.txt", "wb")
    op = 'y'
    while op == 'y':
        Travelcode = int(input("Enter the travel id: "))
        Place = input("Enter the place: ")
        Travellers = int(input("Enter the number of travellers: "))
        buses = int(input("Enter the number of buses: "))
        pickle.dump([Travelcode, Place, Travellers, buses], f)
        op = input("Do you want to continue? (y or n): ")
    f.close()

print("Entering the details of passengers in the pickle file")
write_file()
Now since we have data entered into the file the approach to update data from it is given below along with implementation based on it:
Approach
Import module
Open file
Enter some information regarding data to be deleted
Update the appropriate data
Close file
Program:
Python3
import pickle

def read_file():
    f = open("travel.txt", 'rb')
    while True:
        try:
            L = pickle.load(f)
            print("Place:", L[1], "\t\t Travellers:", L[2],
                  "\t\t Buses:", L[3])
        except EOFError:
            print("Completed reading details")
            break
    f.close()

def update_details():
    f1 = open("travel.txt", "rb+")
    travelList = []
    print("For example, only the bus details will be updated")
    t_code = int(input("Enter the travel code for the updation: "))
    while True:
        try:
            L = pickle.load(f1)
            if L[0] == t_code:
                buses = int(input("Enter the number of buses: "))
                L[3] = buses
            travelList.append(L)
        except EOFError:
            print("Completed updating details")
            break
    # rewind, wipe the old contents and rewrite the updated records
    f1.seek(0)
    f1.truncate()
    for record in travelList:
        pickle.dump(record, f1)
    f1.close()

print("Update the file")
update_details()
read_file()
Output:
ASP.NET - Client Side | ASP.NET client side coding has two aspects:
Client side scripts : It runs on the browser and in turn speeds up the execution of page. For example, client side data validation which can catch invalid data and warn the user accordingly without making a round trip to the server.
Client side source code : ASP.NET pages generate this. For example, the HTML source code of an ASP.NET page contains a number of hidden fields and automatically injected blocks of JavaScript code, which keeps information like view state or does other jobs to make the page work.
All ASP.NET server controls allow calling client side code written using JavaScript or VBScript. Some ASP.NET server controls use client side scripting to provide response to the users without posting back to the server. For example, the validation controls.
Apart from these scripts, the Button control has a property OnClientClick, which allows executing client-side script, when the button is clicked.
The traditional and server HTML controls have the following events that can execute a script when they are raised:
We have already discussed that, ASP.NET pages are generally written in two files:
The content file or the markup file ( .aspx)
The code-behind file
The content file contains the HTML or ASP.NET control tags and literals to form the structure of the page. The code behind file contains the class definition. At run-time, the content file is parsed and transformed into a page class.
This class, along with the class definition in the code file, and system generated code, together make the executable code (assembly) that processes all posted data, generates response, and sends it back to the client.
Consider the simple page:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs"
Inherits="clientside._Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>
Untitled Page
</title>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
<asp:Button ID="Button1" runat="server" OnClick="Button1_Click" Text="Click" />
</div>
<hr />
<h3> <asp:Label ID="Msg" runat="server" Text=""> </asp:Label> </h3>
</form>
</body>
</html>
When this page is run on the browser, the View Source option shows the HTML page sent to the browser by the ASP.Net runtime:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<title>
Untitled Page
</title>
</head>
<body>
<form name="form1" method="post" action="Default.aspx" id="form1">
<div>
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
value="/wEPDwUKMTU5MTA2ODYwOWRk31NudGDgvhhA7joJum9Qn5RxU2M=" />
</div>
<div>
<input type="hidden" name="__EVENTVALIDATION" id="__EVENTVALIDATION"
value="/wEWAwKpjZj0DALs0bLrBgKM54rGBhHsyM61rraxE+KnBTCS8cd1QDJ/"/>
</div>
<div>
<input name="TextBox1" type="text" id="TextBox1" />
<input type="submit" name="Button1" value="Click" id="Button1" />
</div>
<hr />
<h3><span id="Msg"></span></h3>
</form>
</body>
</html>
If you go through the code properly, you can see that first two <div> tags contain the hidden fields which store the view state and validation information.
Struts 2 - The Ajax Tags | Struts uses the DOJO framework for the AJAX tag implementation. First of all, to proceed with this example, you need to add struts2-dojo-plugin-2.2.3.jar to your classpath.
You can get this file from the lib folder of your struts2 download (C:\struts-2.2.3all\struts-2.2.3\lib\struts2-dojo-plugin-2.2.3.jar)
For this exercise, let us modify HelloWorld.jsp as follows −
<%@ page contentType = "text/html; charset = UTF-8"%>
<%@ taglib prefix = "s" uri = "/struts-tags"%>
<%@ taglib prefix = "sx" uri = "/struts-dojo-tags"%>
<html>
<head>
<title>Hello World</title>
<s:head />
<sx:head />
</head>
<body>
<s:form>
<sx:autocompleter label = "Favourite Colour"
list = "{'red','green','blue'}" />
<br />
<sx:datetimepicker name = "deliverydate" label = "Delivery Date"
displayformat = "dd/MM/yyyy" />
<br />
<s:url id = "url" value = "/hello.action" />
<sx:div href="%{#url}" delay="2000">
Initial Content
</sx:div>
<br/>
<sx:tabbedpanel id = "tabContainer">
<sx:div label = "Tab 1">Tab 1</sx:div>
<sx:div label = "Tab 2">Tab 2</sx:div>
</sx:tabbedpanel>
</s:form>
</body>
</html>
When we run the above example, we get the following output −
Let us now go through this example one step at a time.
First thing to notice is the addition of a new tag library with the prefix sx. This (struts-dojo-tags) is the tag library specifically created for the ajax integration.
Then inside the HTML head we call the sx:head. This initializes the dojo framework and makes it ready for all AJAX invocations within the page. This step is important - your ajax calls will not work without the sx:head being initialized.
First, we have the autocompleter tag, which looks much like a select box populated with the values red, green and blue. The difference between a select box and this tag is that it auto-completes – if you start typing "gr", it fills in "green". Other than that, this tag is very similar to the s:select tag covered earlier.
Next, we have a date time picker. This tag creates an input field with a button next to it. When the button is pressed, a popup date time picker is displayed. When the user selects a date, the date is filled into the input text in the format that is specified in the tag attribute. In our example, we have specified dd/MM/yyyy as the format for the date.
Next, we create a url tag pointing to the hello.action file created in an earlier exercise (it could be any action file you have created). Then we have a div with its hyperlink set to that url and a delay of 2 seconds. When you run this, "Initial Content" is displayed for 2 seconds, after which the div's content is replaced with the output of the hello.action execution.
Finally, we have a simple tab panel with two tabs. The tabs are divs themselves, with the labels Tab 1 and Tab 2.
It should be worth noting that the AJAX tag integration in Struts is still a work in progress and the maturity of this integration is slowly increasing with every release.
PHP strlen() Function - GeeksforGeeks | 01 Dec, 2021
In this article, we will see how to get the length of the string using strlen() function in PHP, along with understanding its implementation through the examples.
The strlen() is a built-in function in PHP which returns the length of a given string. It takes a string as a parameter and returns its length. It calculates the length of the string including all the whitespaces and special characters.
Syntax:
strlen($string);
Parameters: The strlen() function accepts only one parameter $string which is mandatory. This parameter represents the string whose length is needed to be returned.
Return Value: The function returns the length of the $string including all the whitespaces and special characters.
Below programs illustrate the strlen() function in PHP:
Example 1: The below example demonstrates the use of the strlen() function in PHP.
PHP
<?php
// PHP program to find the
// length of a given string
$str = "GeeksforGeeks";

// Prints the length of the string,
// including any whitespace
echo strlen($str);
?>
Output:
13
Example 2: This example demonstrates the use of the strlen() function where the string has special characters and escape sequences.
PHP
<?php
// PHP program to find the length of a
// given string with special characters
$str = "\n GeeksforGeeks Learning;";

// Here '\n' is counted as 1 character
echo strlen($str);
?>
Output:
25
Reference: http://php.net/manual/en/function.strlen.php
XSS-Loader - XSS Scanner and Payload Generator - GeeksforGeeks | 14 Sep, 2021
Cross-Site Scripting (XSS) is a flaw included in the OWASP Top 10 vulnerabilities. In this attack, the attacker crafts a malicious JavaScript payload intended to steal the victim's cookies or perform an account takeover. Sometimes this flaw can create a severe problem on the back end of the web application. The malicious code is passed through user inputs, parameters, uploaded files, etc. If the input is handled properly before being sent to the web server, the application can be protected from an XSS attack.
XSS-Loader is a toolkit that allows the user to create payloads for XSS injection, scan websites for potential XSS exploits and use the power of Google Search Engine to discover websites that may be vulnerable to XSS Vulnerability. XSS-Loader tool is developed in the Python Language. XSS-Loader tool is open source, free to use, and available on GitHub. This tool supports various types of payload generation like:
DIV PAYLOAD
MUTATION PAYLOAD
BASIC PAYLOAD
UPPER PAYLOAD etc.
This tool supports XSS Scanning on the target domain URL, The executed payload is displayed with the full URL on the terminal itself.
Note: Make sure you have Python installed on your system, as this is a Python-based tool. Click to check the installation process: Python Installation Steps on Linux.
Step 1: Use the following command to install the tool in your Kali Linux operating system.
git clone https://github.com/capture0x/XSS-LOADER/
Step 2: Now use the following command to move into the directory of the tool. You have to move in the directory in order to run the tool.
cd XSS-LOADER
Step 3: You are in the directory of the XSS-Loader. Now you have to install a dependency of the XSS-Loader using the following command.
sudo pip3 install -r requirements.txt
Step 4: All the dependencies have been installed in your Kali Linux operating system. Now use the following command to run the tool and check the help section.
python3 payloader.py -h
Example 1: BASIC PAYLOAD
Select Option 1 -> BASIC PAYLOAD
In this example, we are generating a basic payload for XSS.
Select Option 20 -> MUTATION PAYLOAD
The tool has generated the Mutational Payload.
Example 2: ENTER YOUR PAYLOAD
Select Option 6 -> ENTER YOUR PAYLOAD
In this example, we are specifying our own custom payload.
We have given our custom payload as input to the tool.
Select Option 1 -> UPPER CASE
We are changing our payload from Lower Case to Upper Case.
Our Custom Payload is changed from Lower Case to Upper Case.
Example 3: XSS SCANNER
Select Option 7 -> XSS SCANNER
In this example, we are testing the target domain for an XSS security flaw.
Target URL -> http://testphp.vulnweb.com/search.php?test=query
We have specified the target domain URL.
Select Option 1 -> BASIC PAYLOAD LIST
We are using the Basic Payload List which will be tested on the target domain.
The testing process is started.
Example 4: XSS DORK FINDER
Select Option 8 -> XSS DORK FINDER
In this example, we will be using the XSS Dork Finder for advanced search.
JQuery | How to implement Star-Rating system using RateYo - GeeksforGeeks | 18 Jan, 2019
rateYo: rateYo is a jQuery plugin for creating a star-rating widget; it fills the background color of the un-rated part of an SVG (scalable vector graphics) based star on mouse hover. It is fully customizable and scalable to fit any design needs.
Steps to implement Star-Rating system using RateYo:
Installation:

1. # NPM
$ npm install rateYo

2. # Bower
$ bower install rateYo

You can also use the Google-hosted/Microsoft-hosted content delivery network (CDN) to include a version of the plugin:

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/rateYo/2.3.2/jquery.rateyo.min.css">

<!-- Latest compiled and minified JavaScript -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/rateYo/2.3.2/jquery.rateyo.min.js"></script>

Add the required stylesheet in the head section of the HTML page:

<link rel="stylesheet" type="text/css" href="jquery.rateyo.min.css">

Create a div that will serve as the star-rating container:

<div id="rateYo"></div>

Link jQuery and the JavaScript file of the rateYo plugin in the body section of the HTML page:

<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="jquery.rateyo.min.js"></script>

Call the plugin to render a basic star rating into the rateYo div:

$("#rateYo").rateYo();
Example
<!DOCTYPE html>
<html>
<head>
    <title>rating</title>
    <link rel="stylesheet" type="text/css" href="jquery.rateyo.min.css">
</head>
<body>
    <div style="width: 600px; margin: 30px auto">
        <div id="rateYo"></div>
    </div>
    <script type="text/javascript" src="jquery.min.js"></script>
    <script type="text/javascript" src="jquery.rateyo.min.js"></script>
    <script>
        $("#rateYo").rateYo({
            rating: 1.5,
            spacing: "10px",
            numStars: 5,
            minValue: 0,
            maxValue: 5,
            normalFill: 'black',
            ratedFill: 'orange',
        });
    </script>
</body>
</html>
OUTPUT
Column Chart with negative values | Following is an example of a Column Chart with negative values.
We have already seen the configurations used to draw a chart in Highcharts Configuration Syntax chapter. Now, let us see an example of a basic column chart with negative values.
app.component.ts
import { Component } from '@angular/core';
import * as Highcharts from 'highcharts';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
highcharts = Highcharts;
chartOptions = {
chart: {
type: 'column'
},
title: {
text: 'Column chart with negative values'
},
xAxis:{
categories: ['Apples', 'Oranges', 'Pears', 'Grapes', 'Bananas']
},
series: [
{
name: 'John',
data: [5, 3, 4, 7, 2]
},
{
name: 'Jane',
data: [2, -2, -3, 2, 1]
}, {
name: 'Joe',
data: [3, 4, 4, -2, 5]
}
]
};
}
Verify the result.
How to convert byte array to an object stream in C#? | Stream is the abstract base class of all streams and it Provides a generic view of a sequence of bytes. The Streams Object involve three fundamental operations such as Reading, Writing and Seeking. A stream be can be reset which leads to performance improvements.
A byte array can be converted to a memory stream using MemoryStream Class.
MemoryStream stream = new MemoryStream(byteArray);
Let us consider a byte array with 5 values 1, 2, 3, 4, 5.
using System;
using System.IO;
namespace DemoApplication {
class Program {
static void Main(string[] args) {
byte[] byteArray = new byte[5] {1, 2, 3, 4, 5 };
using (MemoryStream stream = new MemoryStream(byteArray)) {
using (BinaryReader reader = new BinaryReader(stream)) {
for (int i = 0; i < byteArray.Length; i++) {
byte result = reader.ReadByte();
Console.WriteLine(result);
}
}
}
Console.ReadLine();
}
}
}
The output of the above code is
1
2
3
4
5
Lodash _.merge() Method - GeeksforGeeks | 13 Sep, 2020
Lodash is a JavaScript library that works on the top of underscore.js. Lodash helps in working with arrays, strings, objects, numbers, etc.
The _.merge() method is used to merge two or more objects, starting with the left-most and ending with the right-most, to create a parent mapping object. When two keys are the same, the generated object takes the value of the right-most key. If more than one source object has the same key, the merged object still contains a single key, with the value from the right-most of those objects.
Syntax:
_.merge( object, sources )
Parameters: This method accepts two parameters as mentioned above and described below:
object: This parameter holds the destination object.
sources: This parameter holds the source object. It is an optional parameter.
Return Value: This method returns the merged object.
Example 1:
Javascript
// Requiring the lodash library
const _ = require("lodash");

// Using the _.merge() method
console.log(_.merge({ cpp: "12" }, { java: "23" }, { python: "35" }));

// When two keys are the same
console.log(_.merge({ cpp: "12" }, { cpp: "23" }, { java: "23" }, { python: "35" }));

// When more than one object is the same
console.log(_.merge({ cpp: "12" }, { cpp: "12" }, { java: "23" }, { python: "35" }));
Output:
{cpp: '12', java: '23', python: '35'}
{cpp: '23', java: '23', python: '35'}
{cpp: '12', java: '23', python: '35'}
Example 2:
Javascript
// Requiring the lodash library
const _ = require("lodash");

// The destination object
var object = {
    'amit': [{ 'susanta': 20 }, { 'durgam': 40 }]
};

// The source object
var other = {
    'amit': [{ 'chinmoy': 30 }, { 'kripamoy': 50 }]
};

// Using the _.merge() method
console.log(_.merge(object, other));
Output:
{ 'amit': [{'chinmoy': 30, 'susanta': 20 },
{ 'durgam': 40, 'kripamoy': 50 }] }
How to parse a JSON string using Streaming API in Java? | The Streaming API consists of an important interface JsonParser and this interface contains methods to parse JSON in a streaming way and provides forward, read-only access to JSON data. The Json class contains the methods to create parsers from input sources. We can parse a JSON using the static method createParser() of Json class.
public static JsonParser createParser(Reader reader)
import java.io.*;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.json.stream.JsonParser.Event;
public class JSONParseringTest {
public static void main(String[] args) {
String jsonString = "{\"name\":\"Adithya\",\"employeeId\":\"115\",\"age\":\"30\"}";
JsonParser parser = Json.createParser(new StringReader(jsonString));
while(parser.hasNext()) {
Event event = parser.next();
if(event == Event.KEY_NAME) {
switch(parser.getString()) {
case "name":
parser.next();
System.out.println("Name: " + parser.getString());
break;
case "employeeId":
parser.next();
System.out.println("EmployeeId: " + parser.getString());
break;
case "age":
parser.next();
System.out.println("Age: " + parser.getString());
break;
}
}
}
}
}
Name: Adithya
EmployeeId: 115
Age: 30
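For comparison, Python's standard json module offers a non-streaming analogue: the whole document is parsed into a dict at once, and the same three fields are then read directly (a sketch, not part of the original Java example):

```python
import json

json_string = '{"name": "Adithya", "employeeId": "115", "age": "30"}'
record = json.loads(json_string)  # the entire document is parsed at once

print("Name: " + record["name"])
print("EmployeeId: " + record["employeeId"])
print("Age: " + record["age"])
```

Unlike the streaming JsonParser, this approach holds the full document in memory, which is fine for small payloads but not for very large ones.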
Hosting Flutter Website On Firebase For Free - GeeksforGeeks

29 Dec, 2020
Flutter is a user interface (UI) software development kit. It is an open-source project maintained by Google that enables developers to build beautiful, natively compiled applications for iOS, Android, web, and desktop from a single code base. Developing Flutter applications is fun and interesting, but it is also important to showcase the product to the end user, and one of the best ways to do that is by hosting the Flutter web application on Firebase Hosting.
Firebase is a Google product available to developers as a backend-as-a-service. It is mainly used to develop and maintain the backend of software applications. Firebase provides multiple services, such as a real-time database and an ML kit, to help developers focus on application functionality rather than struggling to implement infrastructure. Here, we are going to use Firebase Hosting to host our Flutter application. Best of all, there are no charges involved to start working with Firebase.
A Flutter project to be hosted. Here we are going to host an interactive story app.
A Gmail account.
Node JS installed on the computer. It will enable us to install firebase CLI (command line interface).
Now follow the below steps to host a Flutter web app on Firebase for free:
Step 1: Create a new project on firebase
The first step is to create a project in Firebase. Visit firebase.google.com, sign up if you haven't already, and go to the console. Here, we will create a new project and give it any name of our choice.
Step 2: Creating flutter web app
In this step, we will create the web build of the Flutter project we have already prepared. I have made an interactive story app that changes the story based on the user's input. To build the web version of the Flutter project, run the command 'flutter build web'. This creates a light and smooth Flutter web application in the build/web directory. You can check whether the build works correctly with:
flutter run -d chrome --release
Step 3: Registering App
In this step, we will create a web instance in the firebase project that we have created and register our web app and generate a name (URL) for the flutter web app.
Step 4: Adding Firebase SKD
Now we will add the Firebase software development kit (SDK) to our Flutter app. It helps Firebase identify the web app, track its version, and keep track of its usage. It is done by adding two or three scripts to the body of the index.html page.
Step 5: Installing Firebase CLI
In this step, we will install a firebase command-line interface that lets us interact with firebase and use its functionalities. It is done by running the below command in the terminal:
npm install -g firebase-tools
Step 6: Deploying the app
This is the final step: deploying the Flutter web app to Firebase Hosting. First, run the command 'firebase login' to confirm we are connected to Firebase. Then, initialize the project by running 'firebase init' and select the hosting options (such as the public directory, typically build/web). Finally, run 'firebase deploy' (or the exact command given on the Firebase website). This command pushes all the files to the hosting server and returns a URL to the web app we have successfully hosted. In this case it was https://gfg-flutter-story.web.app/; you can check it out if you want.
Visualization in Python: Finding Routes between Points
by Wei-Meng Lee | Towards Data Science

In my previous article, I talked about visualizing your geospatial data using the folium library.
Another task that developers often have to perform with geospatial data is to map out the routing paths between various points of interest. And so in this article I am going to show you how to:
Geocode your locations
Find the shortest distance between two locations
The first step to routing is to install the OSMnx package.
The OSMnx is a Python package that lets you download geospatial data from OpenStreetMap and model, project, visualize, and analyze real-world street networks and any other geospatial geometries.
Installing the OSMnx package is a little tricky — before you perform the usual pip/conda install, you have to use the following step:
$ conda config --prepend channels conda-forge
$ conda install osmnx
Note: If conda install osmnx fails, use pip install osmnx
Once the above steps are performed, OSMnx should now be available for use.
You can now make use of the OSMnx package together with the NetworkX package to find the route between two points.
NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
The following code snippet finds the shortest walking distance between two locations in San Francisco:
import osmnx as ox
import networkx as nx

ox.config(log_console=True, use_cache=True)

# define the start and end locations in latlng
start_latlng = (37.78497,-122.43327)
end_latlng = (37.78071,-122.41445)

# location where you want to find your route
place = 'San Francisco, California, United States'

# find shortest route based on the mode of travel
mode = 'walk'       # 'drive', 'bike', 'walk'

# find shortest path based on distance or time
optimizer = 'time'  # 'length','time'

# create graph from OSM within the boundaries of some
# geocodable place(s)
graph = ox.graph_from_place(place, network_type = mode)

# find the nearest node to the start location
orig_node = ox.get_nearest_node(graph, start_latlng)

# find the nearest node to the end location
dest_node = ox.get_nearest_node(graph, end_latlng)

# find the shortest path
shortest_route = nx.shortest_path(graph,
                                  orig_node,
                                  dest_node,
                                  weight=optimizer)
The default method for finding the shortest path is ‘dijkstra’. You can change this by setting the method parameter of the shortest_path() function to ’bellman-ford’.
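To illustrate what Dijkstra's algorithm does under the hood, here is a minimal pure-Python sketch over a hypothetical toy graph (the node names and weights are made up for illustration):

```python
import heapq

def dijkstra_path(graph, start, end):
    """graph maps node -> {neighbor: edge weight}; returns a min-weight path."""
    queue = [(0, start, [start])]  # (distance so far, node, path taken)
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == end:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return None  # no route exists

toy_graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'C': 1, 'D': 5},
    'C': {'D': 1},
    'D': {},
}
print(dijkstra_path(toy_graph, 'A', 'D'))  # ['A', 'B', 'C', 'D']
```

The route A → B → C → D has total weight 3, beating the direct A → C → D route of weight 5. networkx's shortest_path does the same thing at scale over the OSM street graph.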
The shortest_route variable now holds a collection of path to walk from one point to another in the shortest time:
[5287124093, 65314192, 258759765, 65314189, 5429032435, 65303568, 65292734, 65303566, 2220968863, 4014319583, 65303561, 65303560, 4759501665, 65303559, 258758548, 4759501667, 65303556, 65303554, 65281835, 65303553, 65303552, 65314163, 65334128, 65317951, 65333826, 65362158, 65362154, 5429620634, 65308268, 4064226224, 7240837048, 65352325, 7240837026, 7240837027]
Obviously, having a list of node IDs is not very useful by itself. A more meaningful way to interpret the result is to plot the path using the plot_route_folium() function:
shortest_route_map = ox.plot_route_folium(graph, shortest_route)
shortest_route_map
The plot_route_folium() function returns a folium map (folium.folium.Map). When displayed in Jupyter Notebook, it looks like this:
The default tileset used is cartodbpositron. If you want to change it, you can set the tiles argument to the tileset that you want to use. The following code snippet shows the map displayed using the openstreetmap tileset:
shortest_route_map = ox.plot_route_folium(graph, shortest_route, tiles='openstreetmap')
shortest_route_map
Here is the map displayed using the openstreetmap tileset:
If you want to let the user choose their preferred tileset during runtime, use the following code snippet:
import folium

folium.TileLayer('openstreetmap').add_to(shortest_route_map)
folium.TileLayer('Stamen Terrain').add_to(shortest_route_map)
folium.TileLayer('Stamen Toner').add_to(shortest_route_map)
folium.TileLayer('Stamen Water Color').add_to(shortest_route_map)
folium.TileLayer('cartodbpositron').add_to(shortest_route_map)
folium.TileLayer('cartodbdark_matter').add_to(shortest_route_map)

folium.LayerControl().add_to(shortest_route_map)
shortest_route_map
The user can now choose their own tileset to display the map:
Besides finding the shortest path for walking, you can also plot the shortest path for driving:
# find shortest route based on the mode of travel
mode = 'drive'      # 'drive', 'bike', 'walk'

# find shortest path based on distance or time
optimizer = 'time'  # 'length','time'
Here is the path for driving:
How about biking?
# find shortest route based on the mode of travel
mode = 'bike'       # 'drive', 'bike', 'walk'

# find shortest path based on distance or time
optimizer = 'time'  # 'length','time'
Here is the shortest path for biking:
You can also find the shortest path based on distance, instead of time:
# find shortest route based on the mode of travel
mode = 'bike'         # 'drive', 'bike', 'walk'

# find shortest path based on distance or time
optimizer = 'length'  # 'length','time'
This is the shortest distance for biking:
I will leave the rest of the combinations for you to try out.
When finding routes between two points, it is not very convenient to specify the latitude and longitude of the locations (unless you already have the coordinates in your dataset). Instead, it would be far easier to just specify their friendly names. You can actually perform this step (called geocoding) using the geopy module.
Geocoding is the process of converting an address into its coordinates. Reverse geocoding, on the other hand, turns a pair of coordinates into a friendly address.
To install the geopy module, type the following command in Terminal:
$ pip install geopy
The following code snippet creates an instance of the Nominatim geocoder class for OpenStreetMap data. It then calls the geocode() method to geocode the location of the Golden Gate Bridge. From the geocoded location, you can extract its latitude and longitude:
from geopy.geocoders import Nominatim

locator = Nominatim(user_agent = "myapp")
location = locator.geocode("Golden Gate Bridge")

print(location.latitude, location.longitude)
# 37.8303213 -122.4797496

print(location.point)
# 37 49m 49.1567s N, 122 28m 47.0986s W

print(type(location.point))
# <class 'geopy.point.Point'>
You can verify the result by going to https://www.google.com/maps and pasting the latitude and longitude into the search box:
Let’s now modify our original code so that we can geocode the start and end points:
import osmnx as ox
import networkx as nx

ox.config(log_console=True, use_cache=True)

from geopy.geocoders import Nominatim
locator = Nominatim(user_agent = "myapp")

# define the start and end locations in latlng
# start_latlng = (37.78497,-122.43327)
# end_latlng = (37.78071,-122.41445)
start_location = "Hilton San Francisco Union Square"
end_location = "Golden Gateway Tennis & Swim Club"

# stores the start and end points as geopy.point.Point objects
start_latlng = locator.geocode(start_location).point
end_latlng = locator.geocode(end_location).point

# location where you want to find your route
place = 'San Francisco, California, United States'

# find shortest route based on the mode of travel
mode = 'bike'         # 'drive', 'bike', 'walk'

# find shortest path based on distance or time
optimizer = 'length'  # 'length','time'

# create graph from OSM within the boundaries of some
# geocodable place(s)
graph = ox.graph_from_place(place, network_type = mode)

# find the nearest node to the start location
orig_node = ox.get_nearest_node(graph, start_latlng)

# find the nearest node to the end location
dest_node = ox.get_nearest_node(graph, end_latlng)
...
Note that the get_nearest_node() function can accept the location coordinates either as a tuple containing latitude and longitude, or a geopy.point.Point object.
Here is the shortest biking distance from Hilton hotel in Union Square to the Golden Gateway Tennis & Swim Club in San Francisco:
It would be clearer if you could display markers indicating the starting point as well as the ending point. As I described in my previous article, you can use the Marker class in folium to display a marker with a popup.
The following code snippet displays two markers — green indicating the starting point, and red indicating the end point:
import folium

# Marker class only accepts coordinates in tuple form
start_latlng = (start_latlng[0], start_latlng[1])
end_latlng = (end_latlng[0], end_latlng[1])

start_marker = folium.Marker(
    location = start_latlng,
    popup = start_location,
    icon = folium.Icon(color='green'))

end_marker = folium.Marker(
    location = end_latlng,
    popup = end_location,
    icon = folium.Icon(color='red'))

# add the markers to the map
start_marker.add_to(shortest_route_map)
end_marker.add_to(shortest_route_map)

shortest_route_map
Note that the Marker class only accepts coordinates in tuple form. Hence you need to modify start_latlng and end_latlng to become tuples containing latitude and longitude.
Here are the two markers indicating the start and end of the path:
In the earlier section, you used the plot_route_folium() function to plot the shortest path between two points on a folium map:
shortest_route_map = ox.plot_route_folium(graph, shortest_route)
shortest_route_map
There is one more function that you might be interested in: plot_graph_route(). Instead of outputting an interactive map, it produces a static graph. This is useful if you want to produce a poster/image showing the path between two points.
The following code snippet produces a static graph using the points used in the previous section:
import osmnx as ox
import networkx as nx

ox.config(log_console=True, use_cache=True)

graph = ox.graph_from_place(place, network_type = mode)
orig_node = ox.get_nearest_node(graph, start_latlng)
dest_node = ox.get_nearest_node(graph, end_latlng)
shortest_route = nx.shortest_path(graph,
                                  orig_node,
                                  dest_node,
                                  weight=optimizer)

fig, ax = ox.plot_graph_route(graph, shortest_route, save=True)
You will see the following output:
I hope you find inspiration from this article and start using it to create routes for points of interest. You might want to let users enter their current locations, and then plot the routes to show them how to go to their destinations. In any case, have fun!
Reference: Boeing, G. 2017. OSMnx: New Methods for Acquiring, Constructing, Analyzing, and Visualizing Complex Street Networks. Computers, Environment and Urban Systems 65, 126–139. doi:10.1016/j.compenvurbsys.2017.05.004
Python | Ways to sort letters of string alphabetically - GeeksforGeeks

11 May, 2020
Given a string of letters, write a python program to sort the given string in an alphabetical order.
Examples:
Input : PYTHON
Output : HNOPTY
Input : Geeks
Output : eeGks
Method #1 : Using sorted() with join()
# Python3 program to sort letters
# of string alphabetically

def sortString(str):
    return ''.join(sorted(str))

# Driver code
str = 'PYTHON'
print(sortString(str))
HNOPTY
Method #2 : Using sorted() with accumulate()
# Python3 program to sort letters
# of string alphabetically
from itertools import accumulate

def sortString(str):
    return tuple(accumulate(sorted(str)))[-1]

# Driver code
str = 'PYTHON'
print(sortString(str))
HNOPTY
Method #3 : Using sorted() with reduce()
Another alternative is to use the reduce() method. It joins the sorted list of characters using the '+' operator.
# Python3 program to sort letters
# of string alphabetically
from functools import reduce

def sortString(str):
    return reduce(lambda a, b : a + b, sorted(str))

# Driver code
str = 'PYTHON'
print(sortString(str))
HNOPTY
Method #4 : Using sorted() with join() and a key function (case-insensitive)
# Python3 program to sort letters of
# string alphabetically, ignoring case

def sortString(str):
    return "".join(sorted(str, key = lambda x: x.lower()))

# Driver code
str = 'Geeks'
print(sortString(str))
eeGks
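Since the input is restricted to letters, a counting approach can also be used instead of a comparison sort. This sketch is not from the original article; it matches the case-sensitive behavior of Method #1:

```python
from collections import Counter

def sort_letters(text):
    counts = Counter(text)
    # Emit each distinct character in sorted order, repeated by its count
    return ''.join(ch * counts[ch] for ch in sorted(counts))

print(sort_letters('PYTHON'))  # HNOPTY
```

Counting runs in O(n + k log k) for n characters and k distinct letters, versus O(n log n) for sorted(); for long strings over a small alphabet this can be faster.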
Julia - Basic Operators

In this chapter, we shall discuss different types of operators in Julia.
In Julia, all the basic arithmetic operators work across all the numeric primitive types. Julia also provides bitwise operators, as well as efficient implementations of a comprehensive collection of standard mathematical functions.
Julia's primitive numeric types support the standard arithmetic operators: + (addition), - (subtraction), * (multiplication), / (division), \ (inverse division), ÷ (integer division), % (remainder), and ^ (power).
The promotion system of Julia makes these arithmetic operations work naturally and automatically on the mixture of argument types.
Following example shows the use of arithmetic operators −
julia> 2+20-5
17
julia> 3-8
-5
julia> 50*2/10
10.0
julia> 23%2
1
julia> 2^4
16
Julia's primitive numeric types support the bitwise operators: ~ (bitwise not), & (bitwise and), | (bitwise or), ⊻ (bitwise xor, also available as the xor function), >>> (logical shift right), >> (arithmetic shift right), and << (shift left).
Following example shows the use of bitwise operators −
julia> ~1009
-1010
julia> 12&23
4
julia> 12 & 23
4
julia> 12 | 23
31
julia> 12 ⊻ 23
27
julia> xor(12, 23)
27
julia> ~UInt32(12)
0xfffffff3
julia> ~UInt8(12)
0xf3
Each arithmetic as well as bitwise operator has an updating version which can be formed by placing an equal sign (=) immediately after the operator. This updating operator assigns the result of the operation back into its left operand. It means that a +=1 is equal to a = a+1.
Following is the list of the updating versions of all the binary arithmetic and bitwise operators −
+=
-=
*=
/=
\=
÷=
%=
^=
&=
|=
⊻=
>>>=
>>=
<<=
Following example shows the use of updating operators −
julia> A = 100
100
julia> A +=100
200
julia> A
200
For each binary operation like ^, there is a corresponding "dot" (.) operation that applies the operation elementwise over an entire array. For instance, [1, 2, 3] ^ 2 is not defined, since it is not possible to square an array, but [1, 2, 3] .^ 2 computes the vectorized result, squaring each element. The same vectorized "dot" form can be used with the other binary operators.
Following example shows the use of “dot” operator −
julia> [1, 2, 3].^2
3-element Array{Int64,1}:
1
4
9
Julia's primitive numeric types support the standard comparison operators: == (equality), != (inequality), < (less than), <= (less than or equal to), > (greater than), and >= (greater than or equal to).
Following example shows the use of numeric comparison operators −
julia> 100 == 100
true
julia> 100 == 101
false
julia> 100 != 101
true
julia> 100 == 100.0
true
julia> 100 < 500
true
julia> 100 > 500
false
julia> 100 >= 100.0
true
julia> -100 <= 100
true
julia> -100 <= -100
true
julia> -100 <= -500
false
julia> 100 < -10.0
false
In Julia, comparisons can be arbitrarily chained, which is quite convenient in numerical code. The && operator for scalar comparisons and the & operator for elementwise comparisons allow chained comparisons to work well on arrays.
Following example shows the use of chained comparison −
julia> 100 < 200 <= 200 < 300 == 300 > 200 >= 100 == 100 < 300 != 500
true
In the following example, let us check out the evaluation behavior of chained comparisons −
julia> M(a) = (println(a); a)
M (generic function with 1 method)
julia> M(1) < M(2) <= M(3)
2
1
3
true
julia> M(1) > M(2) <= M(3)
2
1
false
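Python supports the same comparison chaining, including short-circuiting, so the Julia examples above can be mirrored almost verbatim (a sketch for comparison):

```python
calls = []

def M(x):
    calls.append(x)  # record each operand evaluation
    return x

# Chaining: each operand is evaluated at most once, left to right
result = 100 < 200 <= 200 < 300 == 300 > 200 >= 100 == 100 < 300 != 500
print(result)  # True

assert (M(1) < M(2) <= M(3)) is True
print(calls)   # [1, 2, 3]

calls.clear()
M(1) > M(2) <= M(3)  # 1 > 2 is False, so M(3) is never evaluated
print(calls)   # [1, 2]
```

One difference: Python guarantees strict left-to-right evaluation of the operands, whereas Julia leaves the evaluation order within a chain undefined (as the 2, 1, 3 printout above shows).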
From highest precedence to lowest, Julia applies operations roughly in this order: exponentiation (^), unary operators, bit shifts, multiplication and division, addition and subtraction, the range operator (:), comparisons, && followed by ||, and finally assignment (including the updating operators).

We can also use the Base.operator_precedence function to check the numerical precedence of a given operator. An example is given below −
julia> Base.operator_precedence(:-), Base.operator_precedence(:+), Base.operator_precedence(:.)
(11, 11, 17)
HTML | <td> nowrap Attribute - GeeksforGeeks

31 Oct, 2019
The HTML <td> nowrap attribute is used to specify that the content inside the cell should not wrap. It is a Boolean attribute and is not supported in HTML5.
Syntax:
<td nowrap>
Example:
<!DOCTYPE html>
<html>

<head>
    <title>HTML nowrap Attribute</title>
    <style>
        table, th, td {
            border: 1px solid black;
            border-collapse: collapse;
            padding: 6px;
        }
    </style>
</head>

<body style="text-align:center">
    <h1 style="color: green;">GeeksforGeeks</h1>
    <h2>HTML &lt;td&gt; nowrap Attribute</h2>

    <center>
        <table>
            <tr>
                <th>Name</th>
                <th>Age</th>
            </tr>
            <tr>
                <td nowrap>Ajay</td>

                <!-- This cell will take up space on two rows -->
                <td rowspan="2">24</td>
            </tr>
            <tr>
                <td>Priya</td>
            </tr>
        </table>
    </center>
</body>

</html>
Output:
Supported Browsers: The browsers supported by HTML <td> nowrap Attribute are listed below:
Google Chrome
Internet Explorer
Firefox
Apple Safari
Opera
TestNG - Environment

TestNG is a framework for Java, so the very first requirement is to have JDK installed in your machine.
Open the console and execute a java command based on the operating system you have installed on your system.
Let's verify the output for all the operating systems −
java version "15.0.2" 2021-01-19
Java(TM) SE Runtime Environment (build 15.0.2+7-27)
Java HotSpot(TM) 64-Bit Server VM (build 15.0.2+7-27, mixed mode, sharing)
openjdk version "11.0.11" 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
If you do not have Java, install the Java Software Development Kit (SDK) from https://www.oracle.com/technetwork/java/javase/downloads/index.html. We are assuming Java 1.7.0_25 as the installed version for this tutorial.
Set the JAVA_HOME environment variable to point to the base directory location, where Java is installed on your machine. For example,
Append Java compiler location to System Path.
Verify Java Installation using the command java -version as explained above.
Download the latest version of TestNG jar file from http://www.testng.org or from here. At the time of writing this tutorial, we have downloaded testng-7.4.jar and copied it onto /work/testng folder.
Set the TESTNG_HOME environment variable to point to the base directory location, where TestNG jar is stored on your machine. The following table shows how to set the environment variable in Windows, Linux, and Mac, assuming that we've stored testng-7.4.jar at the location /work/testng.
Set the CLASSPATH environment variable to point to the TestNG jar location.
Create a java class file named TestNGSimpleTest at /work/testng/src
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;
public class TestNGSimpleTest {
@Test
public void testAdd() {
String str = "TestNG is working fine";
      assertEquals("TestNG is working fine", str);
}
}
TestNG can be invoked in several different ways −
With a testng.xml file.
With ANT.
From the command line.
Let us invoke using the testng.xml file. Create an xml file with the name testng.xml in /work/testng/src to execute Test case(s).
<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name = "Suite1">
<test name = "test1">
<classes>
<class name = "TestNGSimpleTest"/>
</classes>
</test>
</suite>
Compile the class using javac compiler as follows −
/work/testng/src$ javac TestNGSimpleTest.java
Now, invoke the testng.xml to see the result −
/work/testng/src$ java org.testng.TestNG testng.xml
Verify the output.
===============================================
Suite
Total tests run: 1, Passes: 1, Failures: 0, Skips: 0
===============================================
How to check if a C/C++ string is an int?

There are several methods to check whether a string is an int, and one of them is to use isdigit() on each character of the string.
Here is an example to check whether a string is an int or not in C++ language,
#include<iostream>
#include<string.h>
using namespace std;
int main() {
char str[] = "3257fg";
for (int i = 0; i < strlen(str); i++) {
if(isdigit(str[i]))
cout<<"The string contains int\n";
else
cout<<"The string does not contain int\n";
}
return 0;
}
Here is the output
The string contains int
The string contains int
The string contains int
The string contains int
The string does not contain int
The string does not contain int
In the above program, the actual checking code is in the main() function. Using the built-in isdigit() function, each character of the string is checked. If the character is a digit, the program prints that the string contains an int; if the character is a letter or other symbol, it prints that the string does not contain an int. The same loop in C uses printf:
for (int i = 0; i < strlen(str); i++) {
if(isdigit(str[i]))
printf("The string contains int\n");
else
printf("The string does not contain int\n");
}
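For comparison, here is a rough Python analogue that decides whether the whole string is an integer, allowing an optional leading sign (the sign handling is an addition not in the original C++ example):

```python
def is_int_string(s):
    # Allow one optional leading sign, then require every remaining
    # character to be a digit. isdigit() is False for an empty string,
    # so "-" alone is correctly rejected as well.
    body = s[1:] if s[:1] in ('+', '-') else s
    return body.isdigit()

print(is_int_string('3257'))    # True
print(is_int_string('3257fg'))  # False
print(is_int_string('-42'))     # True
```

Unlike the per-character loop above, this returns a single verdict for the entire string rather than one message per character.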
Perform SUM and SUBTRACTION on the basis of a condition in a single MySQL query?

For this, use a CASE expression inside SUM() to add or subtract based on the condition. Let us first create a table −
mysql> create table DemoTable866(
Status varchar(100),
Amount int
);
Query OK, 0 rows affected (0.63 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable866 values('ACTIVE',50);
Query OK, 1 row affected (0.10 sec)
mysql> insert into DemoTable866 values('INACTIVE',70);
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable866 values('INACTIVE',20);
Query OK, 1 row affected (0.15 sec)
mysql> insert into DemoTable866 values('ACTIVE',100);
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable866 values('ACTIVE',200);
Query OK, 1 row affected (0.11 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable866;
This will produce the following output −
+----------+--------+
| Status | Amount |
+----------+--------+
| ACTIVE | 50 |
| INACTIVE | 70 |
| INACTIVE | 20 |
| ACTIVE | 100 |
| ACTIVE | 200 |
+----------+--------+
5 rows in set (0.00 sec)
Following is the query to perform SUM or SUBTRACTION in the same column with a single MySQL query −
mysql> select sum(CASE WHEN Status = 'ACTIVE' then Amount
WHEN Status = 'INACTIVE' then -Amount
END) AS RemainingAmount
from DemoTable866;
This will produce the following output −
+-----------------+
| RemainingAmount |
+-----------------+
| 260 |
+-----------------+
1 row in set (0.06 sec)
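The same signed-sum logic can be sketched in plain Python over the table's rows, which makes the CASE expression easy to verify:

```python
rows = [
    ('ACTIVE', 50),
    ('INACTIVE', 70),
    ('INACTIVE', 20),
    ('ACTIVE', 100),
    ('ACTIVE', 200),
]

# Mirror the CASE expression: add ACTIVE amounts, subtract INACTIVE ones
remaining = sum(amount if status == 'ACTIVE' else -amount
                for status, amount in rows)
print(remaining)  # 260
```

50 + 100 + 200 - 70 - 20 = 260, matching the query's RemainingAmount.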
VBA - Day Function

The Day function returns a number between 1 and 31 that represents the day of the specified date.
Day(date)
Add a button and add the following function.
Private Sub Constant_demo_Click()
msgbox(Day("2013-06-30"))
End Sub
When you execute the above function, it produces the following output.
30
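For comparison, the equivalent lookup in Python uses the standard datetime module:

```python
from datetime import date

d = date.fromisoformat('2013-06-30')
print(d.day)  # 30
```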
Android - Progress Circle

The easiest way to make a progress circle is by using the ProgressDialog class. A loading bar can also be made through this class. The only logical difference between the bar and the circle is that the former is used when you know the total waiting time for a particular task, whereas the latter is used when you don't know the waiting time.
To do this, you need to instantiate an object of this class. Its syntax is:
ProgressDialog progress = new ProgressDialog(this);
Now you can set some properties of this dialog, such as its style, its text, etc.
progress.setMessage("Downloading Music :) ");
progress.setProgressStyle(ProgressDialog.STYLE_SPINNER);
progress.setIndeterminate(true);
Apart from these methods, there are other methods that are provided by the ProgressDialog class.
getMax()
This method returns the maximum value of the progress.
incrementProgressBy(int diff)
This method increments the progress bar by the value passed as a parameter.
setIndeterminate(boolean indeterminate)
This method sets the progress indicator as determinate or indeterminate.
setMax(int max)
This method sets the maximum value of the progress dialog.
setProgress(int value)
This method updates the progress dialog with a specific value.
show(Context context, CharSequence title, CharSequence message)
This is a static method, used to display the progress dialog.
This example demonstrates the use of a spinning progress dialog. It displays a spinning progress dialog when the button is pressed.
To experiment with this example, you need to run it on an actual device after developing the application according to the steps below.
Following is the content of the modified main activity file src/MainActivity.java.
package com.example.sairamkrishna.myapplication;
import android.app.ProgressDialog;
import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.view.View;
import android.widget.Button;
public class MainActivity extends Activity {
Button b1;
private ProgressDialog progressBar;
private int progressBarStatus = 0;
private Handler progressBarbHandler = new Handler();
private long fileSize = 0;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
b1=(Button)findViewById(R.id.button);
b1.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
progressBar = new ProgressDialog(v.getContext());
progressBar.setCancelable(true);
progressBar.setMessage("File downloading ...");
progressBar.setProgressStyle(ProgressDialog.STYLE_SPINNER);
progressBar.setProgress(0);
progressBar.setMax(100);
progressBar.show();
progressBarStatus = 0;
fileSize = 0;
new Thread(new Runnable() {
public void run() {
while (progressBarStatus < 100) {
progressBarStatus = downloadFile();
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
progressBarbHandler.post(new Runnable() {
public void run() {
progressBar.setProgress(progressBarStatus);
}
});
}
if (progressBarStatus >= 100) {
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
progressBar.dismiss();
}
}
}).start();
}
});
}
public int downloadFile() {
while (fileSize <= 1000000) {
fileSize++;
if (fileSize == 100000) {
return 10;
}else if (fileSize == 200000) {
return 20;
}else if (fileSize == 300000) {
return 30;
}else if (fileSize == 400000) {
return 40;
}else if (fileSize == 500000) {
return 50;
}else if (fileSize == 700000) {
return 70;
}else if (fileSize == 800000) {
return 80;
}
}
return 100;
}
}
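The downloadFile() method above simulates a download by returning fixed percentages at hard-coded byte thresholds. The same mapping can be written as one formula; a Python sketch (illustrative only, not Android code):

```python
def progress_for(file_size, total=1000000):
    # Map a simulated byte count to a whole-number percentage,
    # replacing the chain of threshold checks in downloadFile().
    if file_size >= total:
        return 100
    return (file_size * 100) // total

print(progress_for(300000))   # 30
print(progress_for(1000000))  # 100
```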
Modify the content of res/layout/activity_main.xml to the following
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent"
android:layout_height="match_parent" android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:paddingBottom="@dimen/activity_vertical_margin" tools:context=".MainActivity">
   <TextView android:text="Music Player" android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/textview"
android:textSize="35dp"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Tutorials point"
android:id="@+id/textView"
android:layout_below="@+id/textview"
android:layout_centerHorizontal="true"
android:textColor="#ff7aff24"
android:textSize="35dp" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="download"
android:id="@+id/button"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:layout_marginBottom="112dp" />
<ImageView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/imageView"
android:src="@drawable/abc"
android:layout_below="@+id/textView"
android:layout_centerHorizontal="true" />
</RelativeLayout>
Modify the res/values/string.xml to the following
<resources>
<string name="app_name">My Application</string>
</resources>
This is the default AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.sairamkrishna.myapplication" >
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name="com.example.sairamkrishna.myapplication.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Before starting your application, Android Studio will display the following window to select an option where you want to run your Android application.
Just press the button to start the progress dialog. After pressing it, the following screen will appear.
ASP.NET WP - Working with Images | In this chapter, we will discuss how to add and display images on your website. You can add images to your website and to individual pages when you are developing your website. If an image is already available on your site, then you can use HTML <img> tag to display it on a page.
Let’s have a look into a simple example by creating a new folder in the project and name it Images and then add some images in that folder.
Now add another cshtml file and name it DynamicImages.cshtml.
Click OK and then replace the following code in the DynamicImages.cshtml file.
@{
var imagePath = "";
if (Request["Choice"] != null){ imagePath = "images/" + Request["Choice"]; }
}
<!DOCTYPE html>
<html>
<body>
<h1>Display Images</h1>
<form method = "post" action = "">
I want to see:
<select name = "Choice">
<option value = "index.jpg">Nature 1</option>
<option value = "index1.jpg">Nature 2</option>
<option value = "index2.jpg">Nature 3</option>
</select>
<input type = "submit" value = "Submit" />
@if (imagePath != ""){
<p><img src = "@imagePath" alt = "Sample" /></p>
}
</form>
</body>
</html>
As you can see, the body of the page has a drop-down list which is a <select> tag and it is named Choice. The list has three options, and the value attributes of each list option has the name of one of the images that has been put in the images folder.
In the above code, the list lets the user select a friendly name like Nature 1 and it then passes the .jpg file name when the page is submitted.
In the code, you can get the user's selection from the list by reading Request["Choice"]. First it checks whether a selection was made; if so, it sets a path for the image that consists of the name of the images folder and the user's image file name.
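Building the path from a user-supplied value is worth doing defensively; a Python sketch of the idea (illustrative only — the tutorial's code performs a plain string concatenation):

```python
import os.path

def image_path(choice):
    # Keep only the file name so a crafted value like "../secret.txt"
    # cannot escape the images folder (defensive variant of
    # imagePath = "images/" + Request["Choice"]).
    return "images/" + os.path.basename(choice)

print(image_path("index.jpg"))      # images/index.jpg
print(image_path("../secret.txt"))  # images/secret.txt
```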
Let’s run the application and specify the URL http://localhost:36905/DynamicImages; you will then see the following output.
Let’s click on the Submit button and you will see that index.jpg file is loaded on the page as shown in the following screenshot.
If you would like to select another photo from the dropdown list, let’s say Nature 2, then click the Submit button and it will update the photo on the page.
You can display an image dynamically only when it is available on your website, but sometimes you will have to display images which will not be available on your website. So you will need to upload it first and then you can display that image on your web page.
Let’s have a look at a simple example in which we will upload an image. First, we will create a new CSHTML file.
Enter UploadImage.cshtml in the Name field and click OK. Now replace the code in the UploadImage.cshtml file with the following:
@{ WebImage photo = null;
var newFileName = "";
var imagePath = "";
if(IsPost){
photo = WebImage.GetImageFromRequest();
if(photo != null){
newFileName = Guid.NewGuid().ToString() + "_" +
Path.GetFileName(photo.FileName);
imagePath = @"images\" + newFileName;
photo.Save(@"~\" + imagePath);
}
}
}
<!DOCTYPE html>
<html>
<head>
<title>Image Upload</title>
</head>
<body>
<form action = "" method = "post" enctype = "multipart/form-data">
<fieldset>
<legend> Upload Image </legend>
<label for = "Image">Image</label>
<input type = "file" name = "Image" size = "35"/>
<br/>
<input type = "submit" value = "Upload" />
</fieldset>
</form>
<h1>Uploaded Image</h1>
@if(imagePath != ""){
<div class = "result"><img src = "@imagePath" alt = "image" /></div>
}
</body>
</html>
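The upload code above prefixes each file name with Guid.NewGuid() so two uploads of the same file never collide. A Python sketch of the same idea, with uuid4 standing in for the GUID:

```python
import uuid
import os.path

def unique_name(original):
    # New GUID + "_" + original file name, mirroring
    # Guid.NewGuid().ToString() + "_" + Path.GetFileName(photo.FileName)
    return str(uuid.uuid4()) + "_" + os.path.basename(original)

print(unique_name("photos/images.jpg"))  # e.g. '3f2b..._images.jpg'
```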
Let’s run this application and specify the following url − http://localhost:36905/UploadImage then you will see the following output.
To upload the image, click on Choose File and then browse to the image which you want to upload. Once the image is selected then the name of the image will be displayed next to the Choose File button as shown in the following screenshot.
As you can see, the images.jpg image is selected; let’s click on the Upload button to upload the image.
Docker - Setting Node.js | Node.js is a JavaScript framework that is used for developing server-side applications. It is an open source framework that is developed to run on a variety of operating systems. Since Node.js is a popular framework for development, Docker has also ensured it has support for Node.js applications.
We will now see the various steps for getting the Docker container for Node.js up and running.
Step 1 − The first step is to pull the image from Docker Hub. When you log into Docker Hub, you will be able to search and see the image for Node.js as shown below. Just type in Node in the search box and click on the node (official) link which comes up in the search results.
Step 2 − You will see that the Docker pull command for node in the details of the repository in Docker Hub.
Step 3 − On the Docker Host, use the Docker pull command as shown above to download the latest node image from Docker Hub.
Once the pull is complete, we can then proceed with the next step.
Step 4 − On the Docker Host, let’s use the vim editor and create one Node.js example file. In this file, we will add a simple command to display “HelloWorld” to the command prompt.
In the Node.js file, let’s add the following statement −
console.log('Hello World');
This will output the “Hello World” phrase when we run it through Node.js.
Ensure that you save the file and then proceed to the next step.
Step 5 − To run our Node.js script using the Node Docker container, we need to execute the following statement −
sudo docker run -it --rm --name HelloWorld -v "$PWD":/usr/src/app \
   -w /usr/src/app node node HelloWorld.js
The following points need to be noted about the above command −
The --rm option is used to remove the container after it is run.
We are giving the container a name, "HelloWorld".
We are mapping the volume /usr/src/app in the container to our current working directory. This is done so that the node container will pick up our HelloWorld.js script, which is present in our working directory on the Docker Host.
The -w option is used to specify the working directory used by Node.js.
The first node option is used to specify to run the node image.
The second node option specifies to run the node command in the node container.
And finally we mention the name of our script.
We will then get the following output. And from the output, we can clearly see that the Node container ran as a container and executed the HelloWorld.js script.
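The long docker run invocation can also be assembled programmatically. A Python sketch that only builds the argument list (it never invokes Docker; flag spellings assume standard single/double dashes):

```python
def node_run_cmd(script, name="HelloWorld", workdir="/usr/src/app"):
    # Build the docker run argument list: remove the container when done,
    # mount the current directory into the image, set the working
    # directory, then run the script with node.
    return [
        "docker", "run", "-it", "--rm",
        "--name", name,
        "-v", "$PWD:" + workdir,
        "-w", workdir,
        "node", "node", script,
    ]

print(" ".join(node_run_cmd("HelloWorld.js")))
```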
How to use Radio button in Android? | This example demonstrates how do I use Radio button in android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<RadioGroup
android:id="@+id/radioGender"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true">
<RadioButton
android:id="@+id/radioMale"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/radioMale"
android:checked="true" />
<RadioButton
android:id="@+id/radioFemale"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/radioFemale" />
</RadioGroup>
<Button
android:layout_below="@id/radioGender"
android:id="@+id/btnDisplay"
android:layout_centerInParent="true"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/btnDisplay" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.RadioButton;
import android.widget.RadioGroup;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
RadioGroup radioGroup;
RadioButton radioButton;
Button button;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
addListenerButton();
}
private void addListenerButton() {
radioGroup = findViewById(R.id.radioGender);
button = findViewById(R.id.btnDisplay);
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
int selectedID = radioGroup.getCheckedRadioButtonId();
radioButton = findViewById(selectedID);
Toast.makeText(MainActivity.this, radioButton.getText(),Toast.LENGTH_SHORT).show();
}
});
}
}
Step 4 − Open res/values/strings.xml and add the following code
<resources>
   <string name="app_name">Sample</string>
<string name="radioMale">Male</string>
<string name="radioFemale">Female</string>
<string name="btnDisplay">Display</string>
</resources>
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −
Building our first neural network in keras | by Sanchit Tanwar | Towards Data Science | Signup for my live computer vision course: https://bit.ly/cv_coursem
In this article, we will make our first neural network(ANN) using keras framework. This tutorial is part of the deep learning workshop. The link to lessons will be given below as soon as I update them. Github link of this repo is here. Link to the jupyter notebook of this tutorial is here.
Index
Introduction to machine learning and deep learning.
Introduction to neural networks.
Introduction to python.
Building our first neural network in keras. < — You are here
A comprehensive guide to CNN.
Image classification with CNN.
Before starting, I would like to give an overview of how to structure any deep learning project.
Preprocess and load data- As we have already discussed data is the key for the working of neural network and we need to process it before feeding to the neural network. In this step, we will also visualize data which will help us to gain insight into the data.
Define model- Now we need a neural network model. This means we need to specify the number of hidden layers in the neural network and their size, the input and output size.
Loss and optimizer- Now we need to define the loss function according to our task. We also need to specify the optimizer to use with learning rate and other hyperparameters of the optimizer.
Fit model- This is the training step of the neural network. Here we need to define the number of epochs for which we need to train the neural network.
After fitting model, we can test it on test data to check whether the case of overfitting. We can save the weights of the model and use it later whenever required.
We will use simple data of mobile price range classifier. The dataset consists of 20 features and we need to predict the price range in which phone lies. These ranges are divided into 4 classes. The features of our dataset include
'battery_power', 'blue', 'clock_speed', 'dual_sim', 'fc', 'four_g', 'int_memory', 'm_dep', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time', 'three_g', 'touch_screen', 'wifi'
Before feeding data to our neural network we need it in a specific way so we need to process it accordingly. The preprocessing of data depends on the type of data. Here we will discuss how to handle tabular data and in later tutorials, we will handle image dataset. Let’s start the coding part
#Dependencies
import numpy as np
import pandas as pd

#Dataset import
dataset = pd.read_csv('data/train.csv') #You need to change the directory accordingly
dataset.head(10) #Returns 10 rows of data
Our dataset looks like this.
#Changing pandas dataframe to numpy array
X = dataset.iloc[:, :20].values
y = dataset.iloc[:, 20:21].values
This code, as discussed in the Python module, will create two arrays: X and y. X contains the features and y contains the classes.
#Normalizing the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(X)
This step is used to normalize the data. Normalization is a technique used to change the values of an array to a common scale, without distorting differences in the ranges of values. It is an important step, and you can check the difference in accuracies on our dataset by removing it. It is mainly required when the dataset features vary a lot, as in our case: the value of battery power is in the 1000s while clock speed is less than 3. If we feed unnormalized data to the neural network, the gradients will change differently for every column and the learning will oscillate. Study further from this link.
The X will now be changed to this form:
Normalized data:[-0.90259726 -0.9900495 0.83077942 -1.01918398 -0.76249466 -1.04396559 -1.38064353 0.34073951 1.34924881 -1.10197128 -1.3057501 -1.40894856 -1.14678403 0.39170341 -0.78498329 0.2831028 1.46249332 -1.78686097 -1.00601811 0.98609664]
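What StandardScaler computes per column is (x - mean) / std; a pure-Python sketch on one hypothetical battery_power sample:

```python
from statistics import mean, pstdev

def standardize(column):
    # (x - mean) / std for every value, as StandardScaler computes
    # per feature column (population std, matching sklearn's default).
    m, s = mean(column), pstdev(column)
    return [(x - m) / s for x in column]

battery_power = [842, 1021, 563, 615, 1821]  # hypothetical sample values
scaled = standardize(battery_power)
print([round(v, 2) for v in scaled])
```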
Next step is to one hot encode the classes. One hot encoding is a process to convert integer classes into binary values. Consider an example, let’s say there are 3 classes in our dataset namely 1,2 and 3. Now we cannot directly feed this to neural network so we convert it in the form:
1- 1 0 0
2- 0 1 0
3- 0 0 1
Now there is one unique binary value for each class. The new array formed will be of shape (n, number of classes), where n is the number of samples in our dataset. We can do this using a simple function from sklearn:
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
y = ohe.fit_transform(y).toarray()
Our dataset has 4 classes so our new label array will look like this:
One hot encoded array:[[0. 1. 0. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 1. 0. 0.]]
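Without sklearn, one hot encoding is just placing a 1 at the class index; a minimal sketch, assuming classes are numbered 0 to n-1:

```python
def one_hot(label, num_classes):
    # Binary vector with a single 1 at the class position.
    vec = [0] * num_classes
    vec[label] = 1
    return vec

print(one_hot(1, 4))  # [0, 1, 0, 0]
print(one_hot(2, 4))  # [0, 0, 1, 0]
```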
Now our dataset is processed and ready to feed in the neural network.
Generally, it is better to split data into training and testing data. Training data is the data on which we will train our neural network. Test data is used to check our trained neural network. This data is totally new for our neural network and if the neural network performs well on this dataset, it shows that there is no overfitting. Read more about this here.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
This will split our dataset into training and testing. Training data will have 90% samples and test data will have 10% samples. This is specified by the test_size argument.
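Under the hood, a 90/10 split is shuffling followed by slicing; a simplified sketch without sklearn (shuffling, which train_test_split does by default, is omitted):

```python
def split(X, y, test_size=0.1):
    # Hold out the last `test_size` fraction of samples for testing.
    cut = int(len(X) * (1 - test_size))
    return X[:cut], X[cut:], y[:cut], y[cut:]

X = list(range(2000))
y = list(range(2000))
X_train, X_test, y_train, y_test = split(X, y)
print(len(X_train), len(X_test))  # 1800 200
```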
Now we are done with the boring part and let’s build a neural network.
Keras is a simple tool for constructing a neural network. It is a high-level framework based on tensorflow, theano or cntk backends.
In our dataset, the input is of 20 values and output is of 4 values. So the input and output layer is of 20 and 4 dimensions respectively.
#Dependencies
import keras
from keras.models import Sequential
from keras.layers import Dense

#Neural network
model = Sequential()
model.add(Dense(16, input_dim=20, activation='relu'))
model.add(Dense(12, activation='relu'))
model.add(Dense(4, activation='softmax'))
In our neural network, we are using two hidden layers of 16 and 12 dimension.
Now I will explain the code line by line.
Sequential specifies to keras that we are creating the model sequentially: the output of each layer we add is the input to the next layer we specify.
model.add is used to add a layer to our neural network. We need to specify as an argument what type of layer we want. The Dense is used to specify the fully connected layer. The arguments of Dense are the output dimension, which is 16 in the first case, the input dimension, which is 20, and the activation function to be used, which is relu in this case. The second layer is similar; we don't need to specify the input dimension, as we have defined the model to be sequential, so keras will automatically consider the input dimension to be the same as the output of the last layer, i.e. 16. In the third layer (the output layer) the output dimension is 4 (the number of classes). Now, as we have discussed earlier, the output layer takes different activation functions, and for the case of multiclass classification it is softmax.
Now we need to specify the loss function and the optimizer. It is done using compile function in keras.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Here loss is cross entropy loss as discussed earlier. Categorical_crossentropy specifies that we have multiple classes. The optimizer is Adam. Metrics is used to specify the way we want to judge the performance of our neural network. Here we have specified it to accuracy.
Now we are done with building a neural network and we will train it.
Training step is simple in keras. model.fit is used to train it.
history = model.fit(X_train, y_train, epochs=100, batch_size=64)
Here we need to specify the input data-> X_train, labels-> y_train, number of epochs(iterations), and batch size. It returns the history of model training. History consists of model accuracy and losses after each epoch. We will visualize it later.
Usually, the dataset is very big and we cannot fit complete data at once so we use batch size. This divides our data into batches each of size equal to batch_size. Now only this number of samples will be loaded into memory and processed. Once we are done with one batch it is flushed from memory and the next batch will be processed.
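With 1600 training samples and batch_size=64, the number of batches processed per epoch follows by ceiling division:

```python
import math

def batches_per_epoch(n_samples, batch_size):
    # Ceiling division: a final, smaller batch covers any remainder.
    return math.ceil(n_samples / batch_size)

print(batches_per_epoch(1600, 64))  # 25
```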
Now we have started the training of our neural network.
Epoch 1/1001600/1600 [==============================] - 1s 600us/step - loss: 1.3835 - acc: 0.3019Epoch 2/1001600/1600 [==============================] - 0s 60us/step - loss: 1.3401 - acc: 0.3369Epoch 3/1001600/1600 [==============================] - 0s 72us/step - loss: 1.2986 - acc: 0.3756Epoch 4/1001600/1600 [==============================] - 0s 63us/step - loss: 1.2525 - acc: 0.4206Epoch 5/1001600/1600 [==============================] - 0s 62us/step - loss: 1.1982 - acc: 0.4675...Epoch 97/1001600/1600 [==============================] - 0s 55us/step - loss: 0.0400 - acc: 0.9937Epoch 98/1001600/1600 [==============================] - 0s 62us/step - loss: 0.0390 - acc: 0.9950Epoch 99/1001600/1600 [==============================] - 0s 57us/step - loss: 0.0390 - acc: 0.9937Epoch 100/1001600/1600 [==============================] - 0s 60us/step - loss: 0.0380 - acc: 0.9950
It will take around a minute to train. After 100 epochs the neural network will be trained. The training accuracy has reached 99.5%, so our model is trained.
Now we can check the model’s performance on test data:
y_pred = model.predict(X_test)

#Converting predictions to label
pred = list()
for i in range(len(y_pred)):
    pred.append(np.argmax(y_pred[i]))

#Converting one hot encoded test label to label
test = list()
for i in range(len(y_test)):
    test.append(np.argmax(y_test[i]))
This step is the inverse of the one hot encoding process, and it gives us integer labels. We can predict on test data using a simple keras method, model.predict(). It takes the test data as input and returns the prediction outputs as softmax scores.
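np.argmax picks the index of the largest softmax score; a plain-Python equivalent makes the operation explicit:

```python
def argmax(scores):
    # Index of the largest value, i.e. the predicted class.
    return max(range(len(scores)), key=lambda i: scores[i])

print(argmax([0.05, 0.10, 0.80, 0.05]))  # 2
```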
from sklearn.metrics import accuracy_score
a = accuracy_score(pred, test)
print('Accuracy is:', a*100)
We get an accuracy of 93.5%.
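accuracy_score is just the fraction of positions where the prediction matches the true label; a minimal sketch:

```python
def accuracy(pred, test):
    # Fraction of positions where prediction equals ground truth.
    correct = sum(p == t for p, t in zip(pred, test))
    return correct / len(test)

print(accuracy([0, 1, 2, 2], [0, 1, 2, 3]))  # 0.75
```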
We can use test data as validation data and can check the accuracies after every epoch. This will give us an insight into overfitting at the time of training only and we can take steps before the completion of all epochs. We can do this by changing fit function as:
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=64)
Now the training step output will also contain validation accuracy.
Train on 1600 samples, validate on 400 samplesEpoch 1/1001600/1600 [==============================] - 1s 823us/step - loss: 1.4378 - acc: 0.2406 - val_loss: 1.4118 - val_acc: 0.2875Epoch 2/1001600/1600 [==============================] - 0s 67us/step - loss: 1.3852 - acc: 0.2825 - val_loss: 1.3713 - val_acc: 0.3175Epoch 3/1001600/1600 [==============================] - ETA: 0s - loss: 1.3474 - acc: 0.326 - 0s 50us/step - loss: 1.3459 - acc: 0.3231 - val_loss: 1.3349 - val_acc: 0.3650Epoch 4/1001600/1600 [==============================] - 0s 56us/step - loss: 1.3078 - acc: 0.3700 - val_loss: 1.2916 - val_acc: 0.4225Epoch 5/1001600/1600 [==============================] - 0s 74us/step - loss: 1.2600 - acc: 0.4094 - val_loss: 1.2381 - val_acc: 0.4575....Epoch 95/1001600/1600 [==============================] - 0s 37us/step - loss: 0.0615 - acc: 0.9869 - val_loss: 0.1798 - val_acc: 0.9250Epoch 96/1001600/1600 [==============================] - 0s 43us/step - loss: 0.0611 - acc: 0.9850 - val_loss: 0.1812 - val_acc: 0.9225Epoch 97/1001600/1600 [==============================] - 0s 45us/step - loss: 0.0595 - acc: 0.9894 - val_loss: 0.1813 - val_acc: 0.9275Epoch 98/1001600/1600 [==============================] - 0s 44us/step - loss: 0.0592 - acc: 0.9869 - val_loss: 0.1766 - val_acc: 0.9275Epoch 99/1001600/1600 [==============================] - 0s 43us/step - loss: 0.0575 - acc: 0.9894 - val_loss: 0.1849 - val_acc: 0.9275Epoch 100/1001600/1600 [==============================] - 0s 38us/step - loss: 0.0574 - acc: 0.9869 - val_loss: 0.1821 - val_acc: 0.9275
Our model is working fine. Now we will visualize training and validation losses and accuracies.
import matplotlib.pyplot as plt

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
In the next chapter, we will discuss the convolutional neural network (CNN), which is used for image data. If you have any doubts, or if I have made a mistake anywhere, please leave a comment.
How to turn on a particular bit in a number? | 05 May, 2021
Given a number n and a value k, turn on the k’th bit in n.
Examples:
Input: n = 4, k = 2
Output: 6
Input: n = 3, k = 3
Output: 7
Input: n = 64, k = 4
Output: 72
Input: n = 64, k = 5
Output: 80
The idea is to use the bitwise << and | operators. Using the expression "(1 << (k - 1))", we get a number that has all bits unset except the k’th bit. If we do a bitwise | of this expression with n, we get a number that has all bits the same as n except the k’th bit, which is 1. Below is the implementation of the above idea.
C++
Java
Python 3
C#
PHP
Javascript
// CPP program to turn on a particular bit
#include <iostream>
using namespace std;

// Returns a number that has all bits same as n
// except the k'th bit which is made 1
int turnOnK(int n, int k)
{
    // k must be greater than 0
    if (k <= 0)
        return n;

    // Do | of n with a number with all
    // unset bits except the k'th bit
    return (n | (1 << (k - 1)));
}

// Driver program to test above function
int main()
{
    int n = 4;
    int k = 2;
    cout << turnOnK(n, k);
    return 0;
}
// Java program to turn on a particular// bitclass GFG { // Returns a number that has all // bits same as n except the k'th // bit which is made 1 static int turnOnK(int n, int k) { // k must be greater than 0 if (k <= 0) return n; // Do | of n with a number with // all unset bits except the // k'th bit return (n | (1 << (k - 1))); } // Driver program to test above // function public static void main(String [] args) { int n = 4; int k = 2; System.out.print(turnOnK(n, k)); }} // This code is contributed by Smitha
# Python 3 program to turn on a
# particular bit

# Returns a number that has all
# bits same as n except the k'th
# bit which is made 1
def turnOnK(n, k):

    # k must be greater than 0
    if (k <= 0):
        return n

    # Do | of n with a number
    # with all unset bits except
    # the k'th bit
    return (n | (1 << (k - 1)))

# Driver program to test above
# function
n = 4
k = 2
print(turnOnK(n, k))
// C# program to turn on a particular// bitusing System; class GFG { // Returns a number that has all // bits same as n except the k'th // bit which is made 1 static int turnOnK(int n, int k) { // k must be greater than 0 if (k <= 0) return n; // Do | of n with a number // with all unset bits except // the k'th bit return (n | (1 << (k - 1))); } // Driver program to test above // function public static void Main() { int n = 4; int k = 2; Console.Write(turnOnK(n, k)); }} // This code is contributed by Smitha
<?php// PHP program to turn on a particular bit // Returns a number that has// all bits same as n except// the k'th bit which is made 1function turnOnK($n, $k){ // k must be greater than 0 if ($k <= 0) return $n; // Do | of n with a number with all // unset bits except the k'th bit return ($n | (1 << ($k - 1)));} // Driver Code $n = 4; $k = 2; echo turnOnK($n, $k); // This code is contributed by m_kit?>
<script> // Javascript program to turn on a particular bit // Returns a number that has all // bits same as n except the k'th // bit which is made 1 function turnOnK(n, k) { // k must be greater than 0 if (k <= 0) return n; // Do | of n with a number // with all unset bits except // the k'th bit return (n | (1 << (k - 1))); } let n = 4; let k = 2; document.write(turnOnK(n, k)); // This code is contributed by suresh07.</script>
Output:
6
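The same logic can be sanity-checked in a few lines of Python; `turn_on_k` below is an illustrative stand-in (not the article's code) that reproduces all four example cases given above:

```python
def turn_on_k(n, k):
    # OR n with a mask that has only the k'th bit set;
    # an invalid k (k <= 0) leaves n unchanged
    return n if k <= 0 else n | (1 << (k - 1))

# The four example cases from above
cases = {(4, 2): 6, (3, 3): 7, (64, 4): 72, (64, 5): 80}
for (n, k), expected in cases.items():
    assert turn_on_k(n, k) == expected
print("all examples pass")
```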
Traverse matrix in L shape | 04 May, 2021
Given an N * M matrix, the task is to traverse the given matrix in an L shape, as shown in the image below.
Examples:
Input : n = 3, m = 3
a[][] = { { 1, 2, 3 },
{ 4, 5, 6 },
{ 7, 8, 9 } }
Output : 1 4 7 8 9 2 5 6 3
Input : n = 4, m = 3
a[][] = { { 1, 2, 3 },
{ 4, 5, 6 },
{ 7, 8, 9 },
{ 10, 11, 12} }
Output : 1 4 7 10 11 12 2 5 8 9 3 6
Observe that there will be m (number of columns) L shapes that need to be traversed. So we will traverse each L shape in two parts: first vertical (top to down) and then horizontal (left to right). To traverse vertically, observe that for each column j, 0 <= j <= m - 1, we need to traverse n - j elements vertically. So for each column j, traverse from a[0][j] to a[n-1-j][j]. Now, to traverse horizontally for each L shape, observe that the corresponding row for each column j will be the (n-1-j)th row, and the first element will be the (j+1)th element from the beginning of the row. So, for each L shape or for each column j, to traverse horizontally, traverse from a[n-1-j][j+1] to a[n-1-j][m-1]. Below is the implementation of this approach:
C++
Java
Python3
C#
Javascript
// C++ program to traverse a m x n matrix in L shape.
#include <iostream>
using namespace std;

#define MAX 100

// Printing matrix in L shape
void traverseLshape(int a[][MAX], int n, int m)
{
    // for each column or each L shape
    for (int j = 0; j < m; j++) {

        // traversing vertically
        for (int i = 0; i <= n - j - 1; i++)
            cout << a[i][j] << " ";

        // traverse horizontally
        for (int k = j + 1; k < m; k++)
            cout << a[n - 1 - j][k] << " ";
    }
}

// Driven Program
int main()
{
    int n = 4;
    int m = 3;
    int a[][MAX] = { { 1, 2, 3 },
                     { 4, 5, 6 },
                     { 7, 8, 9 },
                     { 10, 11, 12 } };
    traverseLshape(a, n, m);
    return 0;
}
// Java Program to traverse a m x n matrix in L shape.public class GFG{ static void traverseLshape(int a[][], int n, int m) { // for each column or each L shape for (int j = 0; j < m; j++) { // traversing vertically for (int i = 0; i <= n - j - 1; i++) System.out.print(a[i][j] + " "); // traverse horizontally for (int k = j + 1; k < m; k++) System.out.print(a[n - 1 - j][k] + " "); } } // Driver Code public static void main(String args[]) { int n = 4; int m = 3; int a[][] = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 }, { 10, 11, 12 } }; traverseLshape(a, n, m); }}
# Python3 program to traverse a
# m x n matrix in L shape.

# Printing matrix in L shape
def traverseLshape(a, n, m):

    # for each column or each L shape
    for j in range(0, m):

        # traversing vertically
        for i in range(0, n - j):
            print(a[i][j], end = " ")

        # traverse horizontally
        for k in range(j + 1, m):
            print(a[n - 1 - j][k], end = " ")

# Driven Code
if __name__ == '__main__':
    n = 4
    m = 3
    a = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9],
         [10, 11, 12]]
    traverseLshape(a, n, m)
// C# Program to traverse a m x n matrix in L shape. using System; public class GFG{ static void traverseLshape(int[,] a, int n, int m) { // for each column or each L shape for (int j = 0; j < m; j++) { // traversing vertically for (int i = 0; i <= n - j - 1; i++) Console.Write(a[i,j] + " "); // traverse horizontally for (int k = j + 1; k < m; k++) Console.Write(a[n - 1 - j,k] + " "); } } // Driver Code public static void Main() { int n = 4; int m = 3; int[,] a = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 }, { 10, 11, 12 } }; traverseLshape(a, n, m); }}
<script> // Javascript program to traverse a m x n matrix in L shape. var MAX = 100; // Printing matrix in L shapefunction traverseLshape(a, n, m){ // for each column or each L shape for (var j = 0; j < m; j++) { // traversing vertically for (var i = 0; i <= n - j - 1; i++) document.write( a[i][j] + " "); // traverse horizontally for (var k = j + 1; k < m; k++) document.write( a[n - 1 - j][k] + " "); }} // Driven Programvar n = 4;var m = 3;var a = [ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ], [ 10, 11, 12 ] ];traverseLshape(a, n, m); </script>
Output:
1 4 7 10 11 12 2 5 8 9 3 6
Time Complexity: O(n * m)
Auxiliary Space: O(1)
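As a quick sanity check, the same traversal can be collected into a list and compared against both examples; this list-returning helper is ours (it mirrors the Python version above, it is not the article's code):

```python
def traverse_l_shape(a, n, m):
    out = []
    for j in range(m):              # one L shape per column
        for i in range(n - j):      # vertical part: top to down
            out.append(a[i][j])
        for k in range(j + 1, m):   # horizontal part: left to right
            out.append(a[n - 1 - j][k])
    return out

# Example 1: 3 x 3 matrix
assert traverse_l_shape([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 3, 3) == \
    [1, 4, 7, 8, 9, 2, 5, 6, 3]
# Example 2: 4 x 3 matrix
assert traverse_l_shape([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], 4, 3) == \
    [1, 4, 7, 10, 11, 12, 2, 5, 8, 9, 3, 6]
```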
Static Local Function in C# 8.0 | 21 Sep, 2021
In C# 7.0, local functions were introduced. A local function allows you to declare a method inside the body of an already defined method. In other words, a local function is a private function of a function, whose scope is limited to the function in which it is created. The type of a local function is similar to the type of the function in which it is defined. You can only call a local function from its containing member.
Example:
C#
// Simple C# program to
// illustrate local function
using System;

class Program {

    // Main method
    public static void Main()
    {
        // Here SubValue is the local
        // function of the main function
        void SubValue(int a, int b)
        {
            Console.WriteLine("Value of a is: " + a);
            Console.WriteLine("Value of b is: " + b);
            Console.WriteLine("final result: {0}", a - b);
            Console.WriteLine();
        }

        // Calling Local function
        SubValue(30, 10);
        SubValue(80, 60);
    }
}
Output:
Value of a is: 30
Value of b is: 10
final result: 20
Value of a is: 80
Value of b is: 60
final result: 20
But in C# 7.0 you are not allowed to use the static modifier with a local function; in other words, you are not allowed to create a static local function. This feature was added in C# 8.0, where you are allowed to use the static modifier with a local function. This ensures that the static local function does not reference any variable from the enclosing or surrounding scope. If a static local function tries to access a variable from the enclosing scope, the compiler will throw an error. Let us discuss this concept with the help of the given examples:
Example 1:
C#
// Simple C# program to illustrate
// the static local function
using System;

class Program {

    // Main method
    public static void Main()
    {
        // Here AreaofCircle is the local
        // function of the main function
        void AreaofCircle(double a)
        {
            double ar;
            Console.WriteLine("Radius of the circle: " + a);
            ar = 3.14 * a * a;
            Console.WriteLine("Area of circle: " + ar);

            // Calling static local function
            circumference(a);

            // Circumference is the static local function
            static void circumference(double radii)
            {
                double cr;
                cr = 2 * 3.14 * radii;
                Console.WriteLine("Circumference of the circle is: " + cr);
            }
        }

        // Calling function
        AreaofCircle(30);
    }
}
Output:
Radius of the circle: 30
Area of circle: 2826
Circumference of the circle is: 188.4
Example 2:
C#
// Simple C# program to illustrate
// the static local function
using System;

class Program {

    // Main method
    public static void Main()
    {
        // Here AreaofCircle is the local
        // function of the main function
        void AreaofCircle(double a)
        {
            double ar;
            Console.WriteLine("Radius of the circle: " + a);
            ar = 3.14 * a * a;
            Console.WriteLine("Area of circle: " + ar);

            // Circumference is the static local function.
            // If circumference() tries to access the enclosing
            // scope variable, the compiler will give an error
            static void circumference()
            {
                double cr;
                cr = 2 * 3.14 * a;
                Console.WriteLine("Circumference of the circle is: " + cr);
            }
        }

        // Calling function
        AreaofCircle(30);
    }
}
Output:
Error CS8421: A static local function cannot contain a reference to 'a'. (CS8421) (f)
Largest lexicographic array with at-most K consecutive swaps | 13 Sep, 2021
Given an array arr[], find the lexicographically largest array that can be obtained by performing at-most k consecutive swaps.
Examples :
Input : arr[] = {3, 5, 4, 1, 2}
k = 3
Output : 5, 4, 3, 2, 1
Explanation : Array given : 3 5 4 1 2
After swap 1 : 5 3 4 1 2
After swap 2 : 5 4 3 1 2
After swap 3 : 5 4 3 2 1
Input : arr[] = {3, 5, 1, 2, 1}
k = 3
Output : 5, 3, 2, 1, 1
Brute Force Approach : Generate all permutations of the array and then pick the one that satisfies the condition of at most K swaps. The time complexity of this approach is O(n!).
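A brute force along these lines can also be written by exploring every sequence of at most k adjacent swaps directly with DFS, rather than filtering all n! permutations. The sketch below is illustrative (function names are ours, not the article's); it reproduces both example answers:

```python
def k_swap_max_bruteforce(arr, k):
    # Explore every sequence of at most k adjacent (consecutive) swaps
    # and keep the lexicographically largest state seen.
    best = list(arr)

    def dfs(state, remaining):
        nonlocal best
        if state > best:
            best = list(state)
        if remaining == 0:
            return
        for i in range(len(state) - 1):
            state[i], state[i + 1] = state[i + 1], state[i]
            dfs(state, remaining - 1)
            state[i], state[i + 1] = state[i + 1], state[i]  # undo the swap

    dfs(list(arr), k)
    return best

# The two examples from above
assert k_swap_max_bruteforce([3, 5, 4, 1, 2], 3) == [5, 4, 3, 2, 1]
assert k_swap_max_bruteforce([3, 5, 1, 2, 1], 3) == [5, 3, 2, 1, 1]
```

This explores O((n-1)^k) states, so it is only practical for tiny inputs, but it is handy for cross-checking the greedy approach.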
Optimized Approach : In this greedy approach, first find the largest element in the array that is greater than the element at the 1st position (if the 1st position element is not already the greatest) and that can be placed at the 1st position with at most K swaps. After finding that element, note its index. Then swap the elements of the array and update K. Apply this procedure for the other positions till K becomes zero or the array becomes lexicographically largest.
Below is the implementation of above approach :
C++
Java
Python3
C#
PHP
Javascript
// C++ program to find lexicographically
// maximum value after k swaps.
#include <bits/stdc++.h>
using namespace std;

// Function which modifies the array
void KSwapMaximum(int arr[], int n, int k)
{
    for (int i = 0; i < n - 1 && k > 0; ++i) {

        // Here, indexPosition is set where we
        // want to put the current largest integer
        int indexPosition = i;
        for (int j = i + 1; j < n; ++j) {

            // If we exceed the Max swaps
            // then break the loop
            if (k <= j - i)
                break;

            // Find the maximum value from i+1 to
            // max k or n which will replace
            // arr[indexPosition]
            if (arr[j] > arr[indexPosition])
                indexPosition = j;
        }

        // Swap the elements from Maximum indexPosition
        // we found till now to the ith index
        for (int j = indexPosition; j > i; --j)
            swap(arr[j], arr[j - 1]);

        // Updates k after swapping indexPosition-i
        // elements
        k -= indexPosition - i;
    }
}

// Driver code
int main()
{
    int arr[] = { 3, 5, 4, 1, 2 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int k = 3;

    KSwapMaximum(arr, n, k);

    // Print the final Array
    for (int i = 0; i < n; ++i)
        cout << arr[i] << " ";
}
// Java program to find// lexicographically// maximum value after// k swaps.import java.io.*; class GFG{ static void SwapInts(int array[], int position1, int position2) { // Swaps elements // in an array. // Copy the first // position's element int temp = array[position1]; // Assign to the // second element array[position1] = array[position2]; // Assign to the // first element array[position2] = temp; } // Function which // modifies the array static void KSwapMaximum(int []arr, int n, int k) { for (int i = 0; i < n - 1 && k > 0; ++i) { // Here, indexPosition // is set where we want to // put the current largest // integer int indexPosition = i; for (int j = i + 1; j < n; ++j) { // If we exceed the // Max swaps then // break the loop if (k <= j - i) break; // Find the maximum value // from i+1 to max k or n // which will replace // arr[indexPosition] if (arr[j] > arr[indexPosition]) indexPosition = j; } // Swap the elements from // Maximum indexPosition // we found till now to // the ith index for (int j = indexPosition; j > i; --j) SwapInts(arr, j, j - 1); // Updates k after swapping // indexPosition-i elements k -= indexPosition - i; } } // Driver code public static void main(String args[]) { int []arr = { 3, 5, 4, 1, 2 }; int n = arr.length; int k = 3; KSwapMaximum(arr, n, k); // Print the final Array for (int i = 0; i < n; ++i) System.out.print(arr[i] + " "); }} // This code is contributed by// Manish Shaw(manishshaw1)
# Python program to find
# lexicographically
# maximum value after
# k swaps.

arr = [3, 5, 4, 1, 2]

# Function which
# modifies the array
def KSwapMaximum(n, k):
    global arr
    for i in range(0, n - 1):
        if (k > 0):

            # Here, indexPosition
            # is set where we want to
            # put the current largest
            # integer
            indexPosition = i
            for j in range(i + 1, n):

                # If we exceed the Max swaps
                # then break the loop
                if (k <= j - i):
                    break

                # Find the maximum value
                # from i+1 to max k or n
                # which will replace
                # arr[indexPosition]
                if (arr[j] > arr[indexPosition]):
                    indexPosition = j

            # Swap the elements from
            # Maximum indexPosition
            # we found till now to
            # the ith index
            for j in range(indexPosition, i, -1):
                arr[j], arr[j - 1] = arr[j - 1], arr[j]

            # Updates k after swapping
            # indexPosition-i elements
            k = k - (indexPosition - i)

# Driver code
n = len(arr)
k = 3

KSwapMaximum(n, k)

# Print the final Array
for i in range(0, n):
    print(arr[i], end = " ")
// C# program to find// lexicographically// maximum value after// k swaps.using System; class GFG{ static void SwapInts(int[] array, int position1, int position2) { // Swaps elements in an array. // Copy the first position's element int temp = array[position1]; // Assign to the second element array[position1] = array[position2]; // Assign to the first element array[position2] = temp; } // Function which // modifies the array static void KSwapMaximum(int []arr, int n, int k) { for (int i = 0; i < n - 1 && k > 0; ++i) { // Here, indexPosition // is set where we want to // put the current largest // integer int indexPosition = i; for (int j = i + 1; j < n; ++j) { // If we exceed the // Max swaps then // break the loop if (k <= j - i) break; // Find the maximum value // from i+1 to max k or n // which will replace // arr[indexPosition] if (arr[j] > arr[indexPosition]) indexPosition = j; } // Swap the elements from // Maximum indexPosition // we found till now to // the ith index for (int j = indexPosition; j > i; --j) SwapInts(arr, j, j - 1); // Updates k after swapping // indexPosition-i elements k -= indexPosition - i; } } // Driver code static void Main() { int []arr = new int[]{ 3, 5, 4, 1, 2 }; int n = arr.Length; int k = 3; KSwapMaximum(arr, n, k); // Print the final Array for (int i = 0; i < n; ++i) Console.Write(arr[i] + " "); }}// This code is contributed by// Manish Shaw(manishshaw1)
<?php// PHP program to find// lexicographically// maximum value after// k swaps. function swap(&$x, &$y){ $x ^= $y ^= $x ^= $y;} // Function which// modifies the arrayfunction KSwapMaximum(&$arr, $n, $k){ for ($i = 0; $i < $n - 1 && $k > 0; $i++) { // Here, indexPosition // is set where we want to // put the current largest // integer $indexPosition = $i; for ($j = $i + 1; $j < $n; $j++) { // If we exceed the Max swaps // then break the loop if ($k <= $j - $i) break; // Find the maximum value // from i+1 to max k or n // which will replace // arr[indexPosition] if ($arr[$j] > $arr[$indexPosition]) $indexPosition = $j; } // Swap the elements from // Maximum indexPosition // we found till now to // the ith index for ($j = $indexPosition; $j > $i; $j--) swap($arr[$j], $arr[$j - 1]); // Updates k after swapping // indexPosition-i elements $k -= $indexPosition - $i; }} // Driver code$arr = array( 3, 5, 4, 1, 2 );$n = count($arr);$k = 3; KSwapMaximum($arr, $n, $k); // Print the final Arrayfor ($i = 0; $i < $n; $i++) echo ($arr[$i]." "); // This code is contributed by// Manish Shaw(manishshaw1)?>
<script> // JavaScript program to find// lexicographically// maximum value after// k swaps. function SwapLets(array, position1, position2) { // Swaps elements // in an array. // Copy the first // position's element let temp = array[position1]; // Assign to the // second element array[position1] = array[position2]; // Assign to the // first element array[position2] = temp; } // Function which // modifies the array function KSwapMaximum(arr, n, k) { for (let i = 0; i < n - 1 && k > 0; ++i) { // Here, indexPosition // is set where we want to // put the current largest // integer let indexPosition = i; for (let j = i + 1; j < n; ++j) { // If we exceed the // Max swaps then // break the loop if (k <= j - i) break; // Find the maximum value // from i+1 to max k or n // which will replace // arr[indexPosition] if (arr[j] > arr[indexPosition]) indexPosition = j; } // Swap the elements from // Maximum indexPosition // we found till now to // the ith index for (let j = indexPosition; j > i; --j) SwapLets(arr, j, j - 1); // Updates k after swapping // indexPosition-i elements k -= indexPosition - i; } } // Driver code let arr = [ 3, 5, 4, 1, 2 ]; let n = arr.length; let k = 3; KSwapMaximum(arr, n, k); // Print the final Array for (let i = 0; i < n; ++i) document.write(arr[i] + " "); // This code is contributed by coode_hunt.</script>
Output:
5 4 3 1 2
Time Complexity: O(N*N) Auxiliary Space: O(1)
Multi-Label Image Classification – Prediction of image labels | 26 Oct, 2021
There are so many things we can do using computer vision algorithms:
Object detection
Image segmentation
Image translation
Object tracking (in real-time), and a whole lot more.
What is Multi-Label Image Classification? Let’s understand the concept of multi-label image classification with an intuitive example. If I show you an image of a ball, you’ll easily classify it as a ball in your mind. The next image I show you is of a terrace. Now we can divide the two images into two classes, i.e. ball or no-ball. When there are only two classes into which the images can be classified, this is known as a binary image classification problem.
When there are more than two categories in which the images can be classified.
An image does not belong to more than one category
If both of the above conditions are satisfied, it is referred to as a multi-class image classification problem.
Prerequisites: Let’s start with some prerequisites. Here, we will be using the following languages and editors:
Language/Interpreter : Python 3 (preferably python 3.8) from python.org
Editor : Jupyter iPython Notebook
OS : Windows 10 x64
DataSet: Please download any image dataset from Kaggle or Internet.
Python Requirements :This project requires the following libraries to be installed via pip: Numpy, Pandas, MatPlotLib, Scikit Learn, Scikit Image.
Python Requirements: In the CMD window, run the following command to install the requirements:
pip install numpy pandas matplotlib notebook scikit-image scikit-learn
Note: replace pip with conda if you use Anaconda. Now run jupyter and open the notebook in the files you downloaded earlier.
Steps to be followed:
Steps for label classification
Step 1: Importing the libraries we need.
python3
# system libraries
import os
import warnings

# ignoring all the warnings
warnings.simplefilter('ignore')

# import data handling libraries
import numpy as np
import pandas as pd

# importing data visualisation libraries
import matplotlib.pyplot as plt
%matplotlib inline

# import image processing library
from skimage.io import imread, imshow
from skimage.transform import resize
from skimage.color import rgb2grey
Step 2: Reading of target images into the project In this portion of the article, we will be instructing python to read images one by one and then insert the pixel data of the images into arrays that we can use. Then we’ll be creating file lists by Python’s os library.
os.listdir(path) returns a list containing the names of the entries in the directory given by path.
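As a tiny illustration (using a throwaway temporary folder rather than the article's image directories, and hypothetical file names), `os.listdir` simply returns the file names it finds:

```python
import os
import tempfile

# Build a throwaway folder with two dummy "image" files,
# then list its entries just as the article does for each subject folder.
with tempfile.TemporaryDirectory() as folder:
    for name in ("face_01.png", "face_02.png"):
        open(os.path.join(folder, name), "w").close()
    entries = sorted(os.listdir(folder))
    print(entries)  # ['face_01.png', 'face_02.png']
```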
python3
# These are the paths to the image folders
r = os.listdir(r"C:\Users\Garima Singh\Desktop\data\mugshots\r")
v = os.listdir(r"C:\Users\Garima Singh\Desktop\data\mugshots\v")
d = os.listdir(r"C:\Users\Garima Singh\Desktop\data\mugshots\d")

print(r[0:10])
Step 3: Creating and importing data from the images, and setting up a limit. Here, we will use NumPy and scikit-image’s imread function. Since we have the downloaded data, we can quickly count how many images per subject we have. For example, if you have 100 images in each folder (r, v and d), you can set a variable limit to 100. The next step is to create empty arrays for this data and fill them up. We will quickly make 3 arrays to accommodate the data of the series of images. We create an array filled with “None” values using the following code snippet:
python3
limit = 100

# Creating the list of blank spaces that can potentially hold data:
ra_images = [None]*limit

# Creating loop variables:
i, j = 0, 0

# This part of the code loops over all the images
# in the list "r" and reads each one into the jth
# element of the array we made above.
for i in r:
    if(j < limit):
        ra_images[j] = imread(r"C:\Users\data\mugshots\r\\" + i)
        j += 1
    else:
        break

# Similarly, we will fill up the data into the other 2 arrays
Step 4: Assembly of the data set, and flattening and reshaping of the arrays. In this section, we will be using a pandas DataFrame to merge these 3 data arrays into a single data array. Right now each image array is of size 28×28. We need to turn each one into an array of 28^2 x 1. This basically means we take each image and convert it into a row of data in our dataset.
python3
# Finding out the number of columns per image in our dataset.
# We will use the shape function on any one image in our array
# and use the dimensions we get as the number of columns per row.
number_of_columns = ra_grey[1].shape[0] * ra_grey[1].shape[1]
print(number_of_columns)

# This means we will be using this many columns
# per row in our dataset.
# Our dataset has 300 images, so:
# Our dataset will be an array of dimensions
# 784 x 300 => 300 images of 784 pixels each.
Step 5: Flattening and Reshaping the data. This is the part of the code that first converts the 28×28 array into a column vector (i.e. 784 x 1 matrix).
python3
print(ra_grey[0].shape)
for i in range(limit):
    ra_grey[i] = np.ndarray.flatten(ra_grey[i]).reshape(number_of_columns, 1)
print(ra_grey[0].shape)

# We will use NumPy's dstack and rollaxis to remove the extra
# axis (the 1 part in the last output) that we saw in the above code output.

ra_grey = np.dstack(ra_grey)
print(ra_grey.shape)
ra_grey = np.rollaxis(ra_grey, axis = 2, start = 0)
print(ra_grey.shape)
ra_grey = ra_grey.reshape(limit, number_of_columns)
print(ra_grey.shape)
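The shape bookkeeping in the dstack/rollaxis step is easier to see on a toy example — three fake 4-pixel "images" instead of 100 images of 784 pixels (the variable names here are illustrative only):

```python
import numpy as np

imgs = [np.zeros((4, 1)) for _ in range(3)]      # three flattened 4x1 "images"
stacked = np.dstack(imgs)                        # stacks along a new 3rd axis
assert stacked.shape == (4, 1, 3)
rolled = np.rollaxis(stacked, axis=2, start=0)   # image index becomes axis 0
assert rolled.shape == (3, 4, 1)
flat = rolled.reshape(3, 4)                      # one row of pixels per image
assert flat.shape == (3, 4)
```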
Doing the above for the rest of the data i.e. do the above 2 steps for the next 2 arrays.
Step 6: Converting arrays to dataframes. As discussed before, pandas provides a spreadsheet-like environment for our tables. Let’s convert our arrays to dataframes:
python3
ra_data = pd.DataFrame(ra_grey)
dh_data = pd.DataFrame(dh_grey)
vi_data = pd.DataFrame(vi_grey)

ra_data
print(ra_data)
Step 7: Adding a name to the images. In this step we add a column containing the name of our subjects. This is called labelling our images. The model will try to predict based on the values, and it will output one of these labels.
python3
ra_data["label"] = "R"
dh_data["label"] = "D"
vi_data["label"] = "V"

vi_data

# Joining and mixing the data into one dataframe.
# First, we will start with joining all 3 dataframes
# made above into a single dataframe, using the concat function.
# Note: It is recommended to join the first 2,
# then join the last one into the first pair.

act = pd.concat([ra_data, dh_data])
actor = pd.concat([act, vi_data])

actor
Step 8: Shuffling the data and printing the final data set. This is the last stage of this section. We will shuffle the data so that the rows from different classes are mixed.
python3
from sklearn.utils import shuffle

out = shuffle(actor).reset_index()
out

# Drop the column named index
out = out.drop(['index'], axis = 1)
out
Step 9: Coding the machine learning algorithm and testing accuracy. In this section we will code the machine learning algorithm and find out its accuracy.
python3
# First, we will extract the x and y values of our dataset

x = out.values[:, :-1]
y = out.values[:, -1]

print(x[0:3])
print(y[0:3])

# From the above output, we can see that:
# x - stores the image data.
# y - stores the label data.
Step 10: Importing ML libraries and ML Coding We will import a few ML libraries, all of these will come from sklearn and its classes.
python3
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn import metrics

# Here we will use train_test_split to create our training and testing data.
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state = 0)

pca = PCA(n_components = 150, whiten = True, random_state = 0)
svc = SVC(kernel = 'rbf', class_weight = 'balanced')
model = make_pipeline(pca, svc)

params = {'svc__C': [x for x in range(1, 6)],
          'svc__gamma': [0.001, 0.005, 0.006, 0.01,
                         0.05, 0.06, 0.004, 0.04]}

grid = GridSearchCV(model, params)
%time grid.fit(x_train, y_train)
print(grid.best_params_)

model = grid.best_estimator_
ypred = model.predict(x_test)

ypred[0:3]
We will use the PCA class and SVC class to create our model object. make_pipeline helps us create a simple model that can be tuned by GridSearchCV.
GridSearchCV is the function that will create a model with EVERY possible combination of the given parameters and tell us which combination is best.
Now that we have the model with the best parameters for our data, we use these parameters in our model and test its accuracy.
Step 11: Diagrams and getting accuracy. Let’s see a visualized diagram of faces vs predicted labels:
python3
fig, ax = plt.subplots(4, 4, sharex = True, sharey = True,
                       figsize = (10, 10))

for i, axi in enumerate(ax.flat):
    axi.imshow(x_test[i].reshape(imsize).astype(np.float64),
               cmap = "gray", interpolation = "nearest")
    axi.set_title('Label : {}'.format(ypred[i]))

# Finally, we test our accuracy using the following code:
print(metrics.accuracy_score(y_test, ypred) * 100)
Conclusion: Labeling images to create training data for machine learning or AI is not a difficult task; you just need the right techniques for it. This article showed an image-labeling process from scratch to mastery.
Consuming a Rest API with Axios in Vue.js | 30 Jun, 2021
Many times when building an application for the web, you may want to consume and display data from an API. In VueJS this can be done using the JavaScript fetch API, Vue resource, or the jQuery ajax API, but a very popular and most recommended approach is to use Axios, a promise-based HTTP client.
Axios is a great HTTP client library. Similar to JavaScript fetch API, it uses Promises by default. It’s also quite easy to use with VueJS.
Creating VueJS Application and Installing Module:
Step 1: Create a Vue application using the following command.
vue create vue-app
Step 2: Install the Axios module using the following command.
npm install axios
Step 3: We can include Vue.js into HTML using the following CDN link:
<script src="https://cdn.jsdelivr.net/npm/vue@2.5.17/dist/vue.js"></script>
Project Directory: It will look like this.
Project structure
index.html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <script src="https://cdn.jsdelivr.net/npm/vue@2.5.17/dist/vue.js">
    </script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.18.0/axios.js">
    </script>
    <link rel="stylesheet" href="css/style.css">
</head>

<body>
    <div id="app-vue">
        <div class="users">
            <div v-if="errored">
                <p>
                    We're sorry, we're not able to retrieve this
                    information at the moment, please try back later
                </p>
            </div>
            <div v-else>
                <h4 v-if="loading">
                    Loading...
                </h4>
                <div v-for="post in posts" :key="post" class="post">
                    {{post.title}}
                </div>
            </div>
        </div>
    </div>

    <script>
        new Vue({
            el: '#app-vue',
            data() {
                return {
                    posts: null,
                    loading: false,
                    errored: false
                }
            },
            created() {
                // Creating loader
                this.loading = true;
                this.posts = null
                axios.get(
                    `http://jsonplaceholder.typicode.com/posts`)
                    .then(response => {
                        // JSON responses are
                        // automatically parsed
                        this.posts = response.data
                    })
                    // Dealing with errors
                    .catch(error => {
                        console.log(error)
                        this.errored = true
                    })
            }
        });
    </script>
</body>

</html>
style.css
#app-vue {
    display: flex;
    justify-content: center;
    font-family: 'Karla', sans-serif;
    font-size: 20px;
}

.post {
    width: 300px;
    border: 1px solid black;
    display: flex;
    flex-direction: row;
    padding: 20px;
    background: #FFEEE4;
    margin: 10px;
}
Steps to Run Application: If you have installed the Vue app, you can run your application using this command.
npm run serve
Output: If you are using it as CDN then copy the path of your HTML and paste it on your browser.
Output of our application
Conclusion: There are many ways to work with Vue and axios beyond consuming and displaying an API. You can also communicate with Serverless Functions, post/edit/delete from an API where you have to write access, and many other benefits.
C++ program to implement Full Adder - GeeksforGeeks | 19 Aug, 2021
Prerequisite: Full Adder

We are given the three inputs of a Full Adder: A, B, C-In. The task is to implement the Full Adder circuit and print its output, i.e. the Sum and C-Out of the three inputs.
Introduction : A Full Adder is a combinational circuit that performs an addition operation on three 1-bit binary numbers. The Full Adder has three input states and two output states. The two outputs are Sum and Carry.
Here we have three inputs A, B, C-In and two outputs Sum, C-Out. The truth table for the Full Adder is:

A B C-In | Sum C-Out
0 0  0   |  0    0
0 0  1   |  1    0
0 1  0   |  1    0
0 1  1   |  0    1
1 0  0   |  1    0
1 0  1   |  0    1
1 1  0   |  0    1
1 1  1   |  1    1
Logical Expression :
SUM = C-IN XOR (A XOR B)
C-OUT = A B + B C-IN + A C-IN
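These expressions can be verified exhaustively over all eight input combinations; the sketch below uses Python only for brevity (the article's implementation further down is in C++):

```python
def full_adder(a, b, c_in):
    # SUM = C-IN XOR (A XOR B)
    s = c_in ^ (a ^ b)
    # C-OUT = A B + B C-IN + A C-IN
    c_out = (a & b) | (b & c_in) | (a & c_in)
    return s, c_out

# Verify against plain integer addition for every input combination:
# a + b + c must equal 2*carry + sum
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, c_out = full_adder(a, b, c)
            assert a + b + c == 2 * c_out + s
```

If the loop runs without raising, both logical expressions match binary addition for all eight rows of the truth table.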
Examples –
Input : A=1, B=0,C-In=0 Output : Sum=1, C-Out=0 Explanation – Here from logical expression Sum= C-IN XOR (A XOR B ) i.e. 0 XOR (1 XOR 0) =1 , C-Out= A B + B C-IN + A C-IN i.e., 1 AND 0 + 0 AND 0 + 1 AND 0 = 0 .
Input : A=1, B=1,C-In=0 Output: Sum=0, C-Out=1
Approach :
Initialize the variables Sum and C_Out for storing outputs.
First we will take three inputs A ,B and C_In.
By applying C-IN XOR (A XOR B ) we get the value of Sum.
By applying A B + B C-IN + A C-IN we get the value of C_Out.
C++
// C++ program to implement full adder
#include <bits/stdc++.h>
using namespace std;

// Function to print sum and C-Out
void Full_Adder(int A, int B, int C_In)
{
    int Sum, C_Out;

    // Calculating value of sum
    Sum = C_In ^ (A ^ B);

    // Calculating value of C-Out
    C_Out = (A & B) | (B & C_In) | (A & C_In);

    // printing the values
    cout << "Sum = " << Sum << endl;
    cout << "C-Out = " << C_Out << endl;
}

// Driver code
int main()
{
    int A = 1;
    int B = 0;
    int C_In = 0;

    // passing the three inputs of the full adder
    // as arguments to get the result
    Full_Adder(A, B, C_In);
    return 0;
}
Sum = 1
C-Out = 0
Understanding ARIMA (Time Series Modeling) | by Tony Yiu | Towards Data Science | What’s that old Mark Twain quote again? “History doesn’t repeat itself but it often rhymes.”
I love analyzing time series. So I’m probably biased when I say that I think Mark Twain is right. We definitely should treat all forecasts skeptically — the future is inherently uncertain and no amount of computing or data will ever change that fact. But by observing and analyzing the trends of history, we can at least unravel a small portion of that uncertainty.
ARIMA models are a subset of linear regression models that attempt to use the past observations of the target variable to forecast its future values. A key aspect of ARIMA models is that in their basic form, they do not consider exogenous variables. Rather, the forecast is made purely with past values of the target variable (or features crafted from those past values).
ARIMA stands for Autoregressive Integrated Moving Average. Let’s walk through each piece of the ARIMA model so that we fully understand it.
This is the easiest part. Autoregressive means that we regress the target variable on its own past values. That is, we use lagged values of the target variable as our X variables:
Y = B0 + B1*Y_lag1 + B2*Y_lag2 + ... + Bn*Y_lagn
That’s pretty straightforward. All this equation is saying is that the currently observed value of Y is some linear function of its past n values (where n is a parameter we choose; and B0, B1, etc. are the regression betas that we fit when we train our model). The previous equation is commonly called an AR(n) model where n denotes the number of lags. We can easily make this a forecast of the future by changing around the notation a little bit:
Y_forward1 = B0 + B1*Y + B2*Y_lag1 + B3*Y_lag2 + ... + Bn*Y_lag(n-1)
Now we are predicting the future value (1 time step ahead) using the current value and its past lags.
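As a concrete sketch, here is a one-step-ahead forecast with two lags and made-up betas (these coefficients are illustrative only, not fitted by an estimation procedure):

```python
# Illustrative coefficients: B0 (intercept), B1, B2
b0, b1, b2 = 0.5, 0.6, 0.3

# Current value of the target variable and its first lag
y, y_lag1 = 10.0, 8.0

# Y_forward1 = B0 + B1*Y + B2*Y_lag1
y_forward1 = b0 + b1 * y + b2 * y_lag1
print(y_forward1)  # 0.5 + 6.0 + 2.4 = 8.9
```

The prediction is just a weighted combination of the most recent observations, which is all an AR model ever uses.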
Integrated denotes that we apply a differencing step to the data. That is, instead of running a regression like the following:
Y_forward1 = B0 + B1*Y + B2*Y_lag1 + ...
We do this:
Y_forward1 - Y = B0 + B1*(Y - Y_lag1) + B2*(Y_lag1 - Y_lag2) + ...
What the second equation is saying is that the future change in Y is a linear function of the past changes in Y. Why bother with differencing? The reason is that differences are generally much more stationary than the raw undifferenced values. When we do time series modeling, we like our Y variables to be mean variance stationary. This means that the main statistical properties of a model do not vary depending on when the sample was taken. Models built on stationary data are generally more robust.
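As a quick illustration of why differencing helps, here is a toy upward-trending series (invented numbers, not real GDP data) before and after taking first differences:

```python
series = [100, 103, 107, 110, 114, 117, 121]  # trending upward, not stationary

# First differences: Y_t - Y_(t-1)
diffs = [curr - prev for prev, curr in zip(series, series[1:])]
print(diffs)  # [3, 4, 3, 4, 3, 4] -- hovers around a stable mean
```

The raw series has a mean that keeps rising, while the differenced series oscillates around a roughly constant level, which is the stationarity property we want.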
Take real GDP (real means that it’s been adjusted for inflation) for instance. It’s obvious from the plot that the raw GDP data is not stationary. It’s rising making the mean GDP in the first half of the plot much lower than the mean in the second half.
If we difference the data, we get the following plot. Notice that it is now significantly more stationary (the mean and variance are roughly consistent over the years).
When I first studied time series, I assumed the moving average was simply just the trailing moving average of the Y variable (e.g. a 200 day moving average). But while they are somewhat similar in spirit, they are distinct mathematical entities.
A moving average model is summarized by the following equation:
Y = B0 + B1*E_lag1 + B2*E_lag2 + ... + Bn*E_lagn
Similarly to the AR part, we are doing something with historical values here, hence all the lags. But what is this E? E is commonly called error in most explanations of MA models, and it represents the random residual deviations between the model and the target variable (if you’re asking how it’s possible that we can have errors before we’ve even fit the model, hold that thought for just a second).
The full equation of a basic regression model is:
Y = B0 + B1*X + E
We need the E in the equation to show that the regression output, B0 + B1*X, is merely an approximation of Y. The plot to the left shows what I mean. The black dots are what we are trying to predict and the blue line is our prediction. And while we have successfully captured the general trend, there will always be some idiosyncratic variance that is not capture-able. The E term accounts for this un-captured part — in other words, E represents the difference between the exact answer and the approximately correct answer delivered by our model.
So if you’ve been reading carefully, you might ask, “Don’t we need to have a model first before there can be model errors (Es)?” You are exactly right. E is what we call an unobservable parameter. Unlike lagged values of Y (our target variable) or exogenous X variables, E is not directly observable. On a side note, this also means that we can’t fit an ARIMA model using OLS (because we have unobservable parameters). Rather, we need an iterative estimation method like MLE (Maximum Likelihood Estimation) that can simultaneously estimate both the beta parameters and residuals (and the betas on the residual terms).
So an MA model forecasts Y using the model’s past errors (similarly to the AR model, we tell it how many past errors we want it to consider).
Let’s take a second to think about why this works. Consider the following simplified MA(1) model:
Y = u + B1*E_lag1
where u = the mean of Y
E = Y - predicted
The first term, u, means that our model centers its forecast around the mean of Y (it’s like saying knowing nothing else, I will guess the mean). The second term B1*E_lag1 is where the error comes in. Let’s assume that for now we estimate B1 to be 0.2. And error (E) is defined as the actual values of Y less the model’s prediction.
This means that if the most recent error (E_lag1) was positive (meaning that the actual was greater than our prediction), we will shift our forecast up by 1/5 of the error amount. This has the effect of making the model slightly less wrong (because the second term, B1*E_lag1 pushes the model’s forecasts slightly towards the correct answer).
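Plugging illustrative numbers into this MA(1) equation makes the course correction concrete (the mean, beta, and error values below are invented for illustration, not estimated from data):

```python
u = 100.0     # mean of Y
b1 = 0.2      # MA coefficient on the last error
e_lag1 = 5.0  # last error: the actual came in 5 above the previous prediction

# Y = u + B1*E_lag1 -- shift the forecast up by 1/5 of the last error
forecast = u + b1 * e_lag1
print(forecast)  # 101.0
```

Because the last prediction was too low, the next forecast is nudged above the mean, which is exactly the "slightly less wrong" behavior described above.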
So if the model knows its own errors, then why does it not constantly overfit (in other words, why are we not biasing our model by unfairly giving it some of the answers)? Barring overfitting, MA models are not inherently biased because model errors are independent and approximately normally distributed (error is a random variable). And because they’re random variables, the errors (E_lag1) can take on widely divergent values depending on at which time steps they’re observed. This inherent noisiness in the errors means that with just one beta (B1) to fit, it would be more or less impossible to find a B1 that perfectly adjusts every error so that their product perfectly plugs each and every hole between the actual and predicted Ys. In other words, the MA(1) model can only use its knowledge of the errors to very approximately nudge itself back in the right direction.
Of course, like with any other model, we can overfit an MA model by allowing more features — in this case, it would be more and more lagged errors (which means fitting a lot more betas). But such an overfit model would perform terribly out of sample.
Let’s put it all together by building an ARIMA model to forecast real GDP. We will use the statsmodel implementation of ARIMA. I pulled the data for real GDP from FRED (the Federal Reserve Bank of St. Louis’ data repository).
The ARIMA function from statsmodel requires at least two arguments:
The data — in this case, we give it a Pandas series of raw real GDP values (we don’t need to difference it in advance as the ARIMA algorithm will do it for us).
The second argument, order, tells the ARIMA function how many components of each model type to consider in the following sequence — (AR lags, time steps between difference, MA lags).
Let’s check out what an AR(4) model looks like (with no MA). We take a 1 time step difference to make our data stationary:
from statsmodels.tsa.arima_model import ARIMA

mod = ARIMA(master_df['GDP'], order=(4, 1, 0))
fitted = mod.fit()
predictions = fitted.predict()
Our forecast is plotted in the graph below. If you look carefully, the orange line (our prediction) lags the blue line (actual). This is bad. It means our forecast is always behind reality; so we would always be a few steps behind if we follow this model. But that’s to be expected from an AR model, which attempts to predict the future by extrapolating the recent past (if only it were that easy).
It’s also important to note that in this demonstration, I am not splitting out a train and test set; and I’m using the entire dataset to fit my parameters (as opposed to an expanding window). In practice, we would want a much more robust test of the model’s ability to predict out of sample. So we would want to use point in time data (as opposed to revised GDP data) and an expanding window regression so that at each time step, we estimate parameters with only the data that would have been available to us at that point in history.
Now let’s add the 2 MA components to our model:
mod = ARIMA(master_df['GDP'], order=(4, 1, 2))
fitted = mod.fit()
predictions = fitted.predict()
We’ve given our model the ability to course correct a bit by allowing it to consider the magnitudes and directions of its errors. This comes at the expense of a longer model estimation process — MLE takes some time to converge as opposed to OLS, which is very fast. Let’s see how our new forecast looks:
Not much better. But we shouldn’t expect massive improvement merely from adding a few MA components. AR and MA components are both derived from the target variable’s past values — so they are both attempts to forecast the future by extrapolating the past. You would expect this to work O.K. during periods of calm, but to fail miserably when attempting to predict turning points.
So after all this, we ended up with a lackluster model. But don’t despair. An ARIMA model is not meant to be a perfect forecasting tool. Rather it’s a first step. Features derived from the past values of our target variable are meant to be complements rather than substitutes for exogenous variables. So in reality, our GDP model would consist of not just AR and MA components, but exogenous ones that correlate well with GDP such as inflation, stock returns, interest rates, etc.
Also, the fitted betas themselves are often of interest. For example, if we are building a simulation of real GDP, then we need to measure GDP’s autocorrelation (autocorrelation means the correlation between the current change in GDP and its past values). Because if there is autocorrelation, then we definitely don’t want to build a GDP model where we simulate each quarter over quarter change in GDP as independent from all others. That would be wrong and our model would produce results that detach from reality. So analyzing the betas of our ARIMA model help us better understand the statistical properties of the target variable of interest.
Cluster-then-predict for classification tasks | by Cole Brendel | Towards Data Science | Supervised classification problems require a dataset with (a) a categorical dependent variable (the “target variable”) and (b) a set of independent variables (“features”) which may (or may not!) be useful in predicting the class. The modeling task is to learn a function mapping features and their values to a target class. An example of this is Logistic Regression.
Unsupervised learning takes a dataset with no labels and attempts to find some latent structure within the data. K-means is one such algorithm. In this article, I will show you how to increase your classifier’s performance by using k-means to discover latent “clusters” in your dataset and either use these clusters as new features in your dataset or to partition your dataset by cluster and train a separate classifier on each.
We begin by generating a nonce dataset using sklearn’s make_classification utility. We will simulate a multi-class classification problem and generate 15 features for prediction.
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=5, n_classes=4)
We now have a dataset of 1000 rows with 4 classes and 8 features, 5 of which are informative (the other 3 being random noise). We convert these to a pandas dataframe for easier manipulation.
import pandas as pd

df = pd.DataFrame(X, columns=['f{}'.format(i) for i in range(8)])
We can now divide our data into a train and test set (75/25) split.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.25,
                                                    random_state=90210)
Firstly, you will want to determine what the optimal k is given the dataset.
For the sake of brevity and so as not to distract from the purpose of this article, I refer the reader to this excellent tutorial: How to Determine the Optimal K for K-Means? should you want to read further on this matter.
In our case, because we used the make_classification utility, the parameter
n_clusters_per_class
is already set and defaults to 2. Therefore, we do not need to determine the optimal k; however, we do need to identify the clusters! We will use the following function to find the 2 clusters in the training set, then predict them for our test set.
import numpy as np
from sklearn.cluster import KMeans
from typing import Tuple

def get_clusters(X_train: pd.DataFrame, X_test: pd.DataFrame,
                 n_clusters: int) -> Tuple[pd.DataFrame, pd.DataFrame]:
    """
    applies k-means clustering to training data to find clusters
    and predicts them for the test set
    """
    clustering = KMeans(n_clusters=n_clusters, random_state=8675309, n_jobs=-1)
    clustering.fit(X_train)

    # apply the labels
    train_labels = clustering.labels_
    X_train_clstrs = X_train.copy()
    X_train_clstrs['clusters'] = train_labels

    # predict labels on the test set
    test_labels = clustering.predict(X_test)
    X_test_clstrs = X_test.copy()
    X_test_clstrs['clusters'] = test_labels
    return X_train_clstrs, X_test_clstrs

X_train_clstrs, X_test_clstrs = get_clusters(X_train, X_test, 2)
We now have a new feature called “clusters” with a value of 0 or 1.
Before we fit any models, we need to scale our features: this ensures all features are on the same numerical scale. With a linear model like logistic regression, the magnitude of the coefficients learned during training will depend on the scale of the features. If you had features that were on the scale of 0–1 and other features on the scale of say 0–100, the coefficients could not be reliably compared.
To scale the features, we use the following function which computes z-scores for each of the features and maps the learnings from the train set to the test set.
from sklearn.preprocessing import StandardScaler

def scale_features(X_train: pd.DataFrame,
                   X_test: pd.DataFrame) -> Tuple[pd.DataFrame, pd.DataFrame]:
    """
    applies standard scaler (z-scores) to training data and
    predicts z-scores for the test set
    """
    scaler = StandardScaler()
    to_scale = [col for col in X_train.columns.values]
    scaler.fit(X_train[to_scale])
    X_train[to_scale] = scaler.transform(X_train[to_scale])

    # predict z-scores on the test set
    X_test[to_scale] = scaler.transform(X_test[to_scale])
    return X_train, X_test

X_train_scaled, X_test_scaled = scale_features(X_train_clstrs, X_test_clstrs)
We are now ready to run some experiments!
I chose to use Logistic Regression for this problem because it is extremely fast and inspection of the coefficients allows one to quickly assess feature importance.
To run our experiments, we will build a logistic regression model on 4 datasets:
Dataset with no clustering information(base)
Dataset with “clusters” as a feature (cluster-feature)
Dataset for df[“clusters”] == 0 (clusters-0)
Dataset for df[“clusters”] == 1 (clusters-1)
Our study is a 1x4 between-groups design with dataset [base, cluster-feature, clusters-0, clusters-1] as the only factor. The following creates our datasets.
# to divide the df by cluster, we need to ensure we use the correct
# class labels, we'll use pandas to do that
train_clusters = X_train_scaled.copy()
test_clusters = X_test_scaled.copy()
train_clusters['y'] = y_train
test_clusters['y'] = y_test

# locate the "0" cluster
train_0 = train_clusters.loc[train_clusters.clusters < 0]  # after scaling, 0 went negative
test_0 = test_clusters.loc[test_clusters.clusters < 0]
y_train_0 = train_0.y.values
y_test_0 = test_0.y.values

# locate the "1" cluster
train_1 = train_clusters.loc[train_clusters.clusters > 0]  # after scaling, 1 dropped slightly
test_1 = test_clusters.loc[test_clusters.clusters > 0]
y_train_1 = train_1.y.values
y_test_1 = test_1.y.values

# the base dataset has no "clusters" feature
X_train_base = X_train_scaled.drop(columns=['clusters'])
X_test_base = X_test_scaled.drop(columns=['clusters'])

# drop the targets from the training set
X_train_0 = train_0.drop(columns=['y'])
X_test_0 = test_0.drop(columns=['y'])
X_train_1 = train_1.drop(columns=['y'])
X_test_1 = test_1.drop(columns=['y'])

datasets = {
    'base': (X_train_base, y_train, X_test_base, y_test),
    'cluster-feature': (X_train_scaled, y_train, X_test_scaled, y_test),
    'cluster-0': (X_train_0, y_train_0, X_test_0, y_test_0),
    'cluster-1': (X_train_1, y_train_1, X_test_1, y_test_1),
}
To efficiently run our experiments, we'll use the following function which loops through the 4 datasets and runs 5-fold cross-validation on each. For each dataset, we obtain 5 estimates of each classifier's accuracy, weighted precision, weighted recall, and weighted f1. We will plot these to observe general performance. We then obtain classification reports from each model on its respective test set to evaluate fine-grained performance.
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection
from sklearn.metrics import classification_report

def run_exps(datasets: dict) -> pd.DataFrame:
    '''
    runs experiments on a dict of datasets
    '''
    # initialize a logistic regression classifier
    model = LogisticRegression(class_weight='balanced', solver='lbfgs',
                               random_state=999, max_iter=250)
    dfs = []
    results = []
    conditions = []
    scoring = ['accuracy', 'precision_weighted', 'recall_weighted', 'f1_weighted']

    for condition, splits in datasets.items():
        X_train = splits[0]
        y_train = splits[1]
        X_test = splits[2]
        y_test = splits[3]

        kfold = model_selection.KFold(n_splits=5, shuffle=True, random_state=90210)
        cv_results = model_selection.cross_validate(model, X_train, y_train,
                                                    cv=kfold, scoring=scoring)
        clf = model.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        print(condition)
        print(classification_report(y_test, y_pred))

        results.append(cv_results)
        conditions.append(condition)

        this_df = pd.DataFrame(cv_results)
        this_df['condition'] = condition
        dfs.append(this_df)

    final = pd.concat(dfs, ignore_index=True)

    # We have wide format data, lets use pd.melt to fix this
    results_long = pd.melt(final, id_vars=['condition'],
                           var_name='metrics', value_name='values')

    # fit time metrics, we don't need these
    time_metrics = ['fit_time', 'score_time']
    results = results_long[~results_long['metrics'].isin(time_metrics)]  # get df without fit data
    results = results.sort_values(by='values')
    return results

df = run_exps(datasets)
Let’s plot our results and see how each dataset affected classifier performance.
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(20, 12))
sns.set(font_scale=2.5)
g = sns.boxplot(x="condition", y="values", hue="metrics", data=df, palette="Set3")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Comparison of Dataset by Classification Metric')
pd.pivot_table(df, index='condition',columns=['metrics'],values=['values'], aggfunc='mean')
In general, it appears that our “base” dataset, with no clustering information, creates the worst performing classifier. By adding our binary “clusters” as a feature, we see a modest boost to performance; however, when we fit a model on each cluster, we see the largest boost in performance.
When we look at classification reports for fine-grained performance evaluation, the picture becomes very clear: when the datasets are segmented by cluster, we see a large boost to performance.
base
              precision    recall  f1-score   support
           0       0.48      0.31      0.38        64
           1       0.59      0.59      0.59        71
           2       0.42      0.66      0.51        50
           3       0.59      0.52      0.55        65
    accuracy                           0.52       250
   macro avg       0.52      0.52      0.51       250
weighted avg       0.53      0.52      0.51       250

cluster-feature
              precision    recall  f1-score   support
           0       0.43      0.36      0.39        64
           1       0.59      0.62      0.60        71
           2       0.40      0.56      0.47        50
           3       0.57      0.45      0.50        65
    accuracy                           0.50       250
   macro avg       0.50      0.50      0.49       250
weighted avg       0.50      0.50      0.49       250

cluster-0
              precision    recall  f1-score   support
           0       0.57      0.41      0.48        29
           1       0.68      0.87      0.76        30
           2       0.39      0.45      0.42        20
           3       0.73      0.66      0.69        29
    accuracy                           0.61       108
   macro avg       0.59      0.60      0.59       108
weighted avg       0.61      0.61      0.60       108

cluster-1
              precision    recall  f1-score   support
           0       0.41      0.34      0.38        35
           1       0.54      0.46      0.50        41
           2       0.49      0.70      0.58        30
           3       0.60      0.58      0.59        36
    accuracy                           0.51       142
   macro avg       0.51      0.52      0.51       142
weighted avg       0.51      0.51      0.51       142
Consider the class “0”, the f1 scores across the four datasets are
Base — “0” F1: 0.38
Cluster-feature — “0” F1: 0.39
Cluster-0 — “0” F1: 0.48
Cluster-1 — “0” F1:0.38
For the “0” class, the model trained on the cluster-0 dataset shows ~23% relative improvement in f1 score over the other models and datasets.
In this article, I have shown how you can leverage “cluster-then-predict” for your classification problems and have teased some results suggesting that this technique can boost performance. There is still much more that can be done in terms of cluster creation and evaluation of the results.
In our case, we had a dataset with 2 clusters; however, in your problems you may have many more clusters to find. (Once you determine the optimal k using the elbow method on your dataset!)
In the case of k>2, you can treat the “clusters” feature as a categorical variable and apply one-hot encoding to use them in your model. As k increases, you may run into issues of overfitting should you decide to fit a model for each cluster.
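As a sketch of that idea, one-hot encoding cluster labels for k=3 can be done by hand (in practice pandas' get_dummies or sklearn's OneHotEncoder would handle this; the labels below are a toy example):

```python
cluster_labels = [0, 2, 1, 2, 0]  # toy k-means output with k=3
k = 3

# One indicator column per cluster: row i gets a 1 in column cluster_labels[i]
one_hot = [[1 if label == j else 0 for j in range(k)]
           for label in cluster_labels]
print(one_hot[1])  # [0, 0, 1] -- second sample belongs to cluster 2
```

Each row sums to 1, so the encoded columns can be appended to the feature matrix without implying any ordinal relationship between clusters.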
If you find that K-Means is not increasing the performance of your classifier, perhaps your data is better suited for another clustering algorithm — see this article for an introduction to Hierarchical Clustering on imbalanced datasets.
As with all data science problems, experiment, experiment, experiment! Run tests for different techniques and let the data guide your modeling decisions.
C++ Pointers | You learned from the previous chapter, that we can get the memory
address of a variable by using the &
operator:
A pointer however, is a variable that stores the memory address as its value.
A pointer variable points to a data type (like int or string) of the same type, and is created with the * operator. The address of the variable you're working with is assigned to the pointer.
Create a pointer variable with the name ptr, that points to a string variable, by using the asterisk sign * (string* ptr).
Note that the type of the pointer has to match the type of the variable you're
working with.
Use the & operator to store the memory address of the
variable called food, and assign it to the pointer.
Now, ptr holds the value of food's memory address.
Tip: There are three ways to declare pointer variables, differing only in where the whitespace goes (string* ptr, string *ptr, string * ptr), but the first way is preferred.
Querying the number of distinct colors in a subtree of a colored tree using BIT in C++ | In this tutorial, we will be discussing a program for querying the number of distinct colors in a subtree of a colored tree using a BIT (Binary Indexed Tree).
For this, we are given a rooted tree where each node has a color, denoted by a given array. For each queried node, our task is to count the distinct colors among the nodes in its subtree.
#include<bits/stdc++.h>
#define MAXIMUM_COLOUR 1000005
#define MAXIMUM_NUMBER 100005
using namespace std;
vector<int> tree[MAXIMUM_NUMBER];
vector<int> table[MAXIMUM_COLOUR];
int isTraversing[MAXIMUM_COLOUR];
int bit[MAXIMUM_NUMBER], getVisTime[MAXIMUM_NUMBER],
getEndTime[MAXIMUM_NUMBER];
int getFlatTree[2 * MAXIMUM_NUMBER];
bool vis[MAXIMUM_NUMBER];
int tim = 0;
vector< pair< pair<int, int>, int> > queries;
// storing the result of each query
int ans[MAXIMUM_NUMBER];
void update(int idx, int val) {
while ( idx < MAXIMUM_NUMBER ) {
bit[idx] += val;
idx += idx & -idx;
}
}
int queryingTree(int idx) {
int result = 0;
while ( idx > 0 ) {
result += bit[idx];
idx -= idx & -idx;
}
return result;
}
void preformingDFS(int v, int color[]) {
//marking the node visited
vis[v] = 1;
getVisTime[v] = ++tim;
getFlatTree[tim] = color[v];
vector<int>::iterator it;
for (it=tree[v].begin(); it!=tree[v].end(); it++)
if (!vis[*it])
preformingDFS(*it, color);
getEndTime[v] = ++tim;
getFlatTree[tim] = color[v];
}
//adding an edge to the tree
void addingNewEdge(int u, int v) {
tree[u].push_back(v);
tree[v].push_back(u);
}
void markingFirstFind(int n) {
for (int i = 1 ; i <= 2 * n ; i++) {
table[getFlatTree[i]].push_back(i);
if (table[getFlatTree[i]].size() == 1) {
update(i, 1);
isTraversing[getFlatTree[i]]++;
}
}
}
void calcQuery() {
int j = 1;
for (int i=0; i<queries.size(); i++) {
for ( ; j < queries[i].first.first ; j++ ) {
int elem = getFlatTree[j];
update( table[elem][isTraversing[elem] - 1], -1);
if ( isTraversing[elem] < table[elem].size() ){
update(table[elem][ isTraversing[elem] ], 1);
isTraversing[elem]++;
}
}
ans[queries[i].second] = queryingTree(queries[i].first.second);
}
}
//counting distinct color nodes
void calcAllColours(int color[], int n, int qVer[], int qn) {
preformingDFS(1, color);
for (int i=0; i<qn; i++)
queries.push_back(make_pair(make_pair(getVisTime[qVer[i]] , getEndTime[qVer[i]]), i) );
sort(queries.begin(), queries.end());
markingFirstFind(n);
calcQuery();
for (int i=0; i<queries.size() ; i++) {
cout << "All distinct colours in the given tree: " << ans[i] << endl;
}
}
int main() {
int number = 6;
int color[] = {0, 2, 3, 3, 4, 1};
addingNewEdge(1, 2);
addingNewEdge(1, 3);
addingNewEdge(2, 4);
int queryVertices[] = {3, 2};
int qn = sizeof(queryVertices)/sizeof(queryVertices[0]);
calcAllColours(color, number, queryVertices, qn);
return 0;
}
All distinct colours in the given tree: 1
All distinct colours in the given tree: 2
Check If the Rune is a Letter or not in Golang - GeeksforGeeks | 27 Sep, 2019
A rune is an alias for int32 and represents a Unicode code point. Unicode is a superset of ASCII: it holds all the characters available in the world's writing systems, including accents and other diacritical marks, and control codes like tab and carriage return, and it assigns each one a standard number. This standard number is known as a Unicode code point, or rune, in the Go language. You can check whether a given rune is a letter with the help of the IsLetter() function. This function returns true if the given rune is a letter and false otherwise. It is defined in the unicode package, so to access this method you need to import the unicode package in your program.
Syntax:
func IsLetter(r rune) bool
The return type of this function is boolean. Let us discuss this concept with the help of given examples:
Example 1:
// Go program to illustrate how to
// check whether a given rune is a letter
package main

import (
	"fmt"
	"unicode"
)

// Main function
func main() {

	// Creating runes
	rune_1 := 'g'
	rune_2 := 'e'
	rune_3 := '1'
	rune_4 := '4'
	rune_5 := 'S'

	// Checking whether each given rune
	// is a letter or not
	// using the IsLetter() function
	res_1 := unicode.IsLetter(rune_1)
	res_2 := unicode.IsLetter(rune_2)
	res_3 := unicode.IsLetter(rune_3)
	res_4 := unicode.IsLetter(rune_4)
	res_5 := unicode.IsLetter(rune_5)

	// Displaying results
	fmt.Println(res_1)
	fmt.Println(res_2)
	fmt.Println(res_3)
	fmt.Println(res_4)
	fmt.Println(res_5)
}
Output:
true
true
false
false
true
Example 2:
// Go program to illustrate how to
// check whether a given rune is a letter
package main

import (
	"fmt"
	"unicode"
)

// Main function
func main() {

	// Creating a slice of runes
	val := []rune{'g', 'E', '3', 'K', '1'}

	// Checking whether each element of the
	// slice is a letter or not
	// using the IsLetter() function
	for i := 0; i < len(val); i++ {
		if unicode.IsLetter(val[i]) {
			fmt.Println("It is a letter")
		} else {
			fmt.Println("It is not a letter")
		}
	}
}
Output:
It is a letter
It is a letter
It is not a letter
It is a letter
It is not a letter
C program to print odd line contents of a File followed by even line content - GeeksforGeeks | 08 Jul, 2020
Pre-requisite: Basics of File Handling in C
Given a text file in a directory, the task is to print all the odd line content of the file first then print all the even line content.
Examples:
Input: file1.txt:
Welcome
to
GeeksforGeeks
Output:
Odd line contents:
Welcome
GeeksforGeeks
Even line contents:
to

Input: file1.txt:
1. This is Line1.
2. This is Line2.
3. This is Line3.
4. This is Line4.
Output:
Odd line contents:
1. This is Line1.
3. This is Line3.
Even line contents:
2. This is Line2.
4. This is Line4.
Approach:
Open the file in a+ mode.
Insert a new line at the end of the file, so that the output doesn't get affected.
Print odd lines of the file, keeping a check so that even lines are not printed.
Rewind the file pointer.
Reinitialize the check.
Print even lines of the file, keeping a check so that odd lines are not printed.
Below is the implementation of the above approach:
// C program for the above approach

#include <stdio.h>

// Function which prints the file content
// in odd-even manner
void printOddEvenLines(char x[])
{
    // Opening the path entered by user
    FILE* fp = fopen(x, "a+");

    // If file is null, then return
    if (!fp) {
        printf("Unable to open/detect file");
        return;
    }

    // Insert a new line at the end so
    // that output doesn't get affected
    fprintf(fp, "\n");

    // fseek() function to move the
    // file pointer to 0th position
    fseek(fp, 0, 0);

    int check = 0;
    char buf[100];

    // Print odd lines to stdout
    while (fgets(buf, sizeof(buf), fp)) {

        // If check is even, this is an
        // odd-numbered (1st, 3rd, ...) line
        if (!(check % 2)) {
            printf("%s", buf);
        }
        check++;
    }

    check = 1;

    // fseek() function to rewind the
    // file pointer to 0th position
    fseek(fp, 0, 0);

    // Print even lines to stdout
    while (fgets(buf, sizeof(buf), fp)) {
        if (!(check % 2)) {
            printf("%s", buf);
        }
        check++;
    }

    // Close the file
    fclose(fp);
    return;
}

// Driver code
int main()
{
    // Input filename
    char x[] = "file1.txt";

    // Function call
    printOddEvenLines(x);
    return 0;
}
Akanksha_Rai
MongoDB and Python - GeeksforGeeks | 20 Apr, 2022
Prerequisite: MongoDB: An introduction. MongoDB is a cross-platform, document-oriented database that works on the concept of collections and documents. MongoDB offers high speed, high availability, and high scalability. The next question which arises is "Why MongoDB?" Reasons to opt for MongoDB:
It supports hierarchical data structures (please refer to the docs for details).
It supports associative arrays, like dictionaries in Python.
It has built-in Python drivers to connect Python applications with the database, e.g. PyMongo.
It is designed for Big Data.
Deployment of MongoDB is very easy.
MongoDB vs RDBMS
MongoDB and PyMongo Installation Guide
First start MongoDB from the command prompt using:

Method 1:
mongod

Method 2:
net start MongoDB

Note that the port number is set to 27017 by default.

Python has a native library for MongoDB called "PyMongo". To import it, execute the following command:

from pymongo import MongoClient

Create a connection: the very first step after importing the module is to create a MongoClient.

from pymongo import MongoClient
client = MongoClient()

This connects to the default host and port. The connection can also be made explicitly; the following command connects the MongoClient to localhost, which runs on port number 27017:

client = MongoClient('host', port_number)
example: client = MongoClient('localhost', 27017)

It can also be done using the following command:

client = MongoClient("mongodb://localhost:27017/")

Access database objects: to create a database or switch to an existing database we use:

Method 1 (dictionary-style):
mydatabase = client['name_of_the_database']

Method 2 (attribute-style):
mydatabase = client.name_of_the_database

If there is no previously created database with this name, MongoDB will implicitly create one for the user.

Note: a database name won't tolerate any dash (-) in it. Names like my-Table will raise an error, so underscores are permitted instead.

Accessing the collection: collections are equivalent to tables in an RDBMS, and we access a collection in PyMongo the same way we access tables in an RDBMS. To access the collection named "myTable" of the database "mydatabase":

Method 1:
mycollection = mydatabase['myTable']

Method 2:
mycollection = mydatabase.myTable

MongoDB stores documents in the form of dictionaries, as shown:

record = {
    'title': 'MongoDB and Python',
    'description': 'MongoDB is no SQL database',
    'tags': ['mongodb', 'database', 'NoSQL'],
    'viewers': 104
}

'_id' is a special key which gets added automatically if the programmer forgets to add it explicitly. _id is a 12-byte hexadecimal number which assures the uniqueness of every inserted document.

Inserting data into a collection: the methods used are insert_one() and insert_many(). We normally use insert_one() to insert a single document into a collection. Say we wish to insert the document record into 'myTable' of 'mydatabase':

rec = mycollection.insert_one(record)

The whole code looks like this when implemented:

# importing module
from pymongo import MongoClient

# creation of MongoClient
client = MongoClient()

# connect with the port number and host
client = MongoClient("mongodb://localhost:27017/")

# access database
mydatabase = client['name_of_the_database']

# access collection of the database
mycollection = mydatabase['myTable']

# dictionary to be added to the database
record = {
    'title': 'MongoDB and Python',
    'description': 'MongoDB is no SQL database',
    'tags': ['mongodb', 'database', 'NoSQL'],
    'viewers': 104
}

# inserting the data into the database
rec = mycollection.insert_one(record)

Querying in MongoDB: there are certain query functions which are used to filter the data in the database. The two most commonly used are:

find(): used to get one or more documents as the result of a query.

for i in mydatabase.myTable.find({'title': 'MongoDB and Python'}):
    print(i)

This will output all the documents in myTable of mydatabase whose title is 'MongoDB and Python'.

count_documents(): used to get the number of documents matching the passed filter (older PyMongo versions exposed this as count()).

print(mydatabase.myTable.count_documents({'title': 'MongoDB and Python'}))

This will output the number of documents in myTable of mydatabase whose title is 'MongoDB and Python'.

To print all the documents/entries inside 'myTable' of database 'mydatabase', use the following code:

from pymongo import MongoClient

try:
    conn = MongoClient()
    print("Connected successfully!!!")
except:
    print("Could not connect to MongoDB")

# database name: mydatabase
db = conn.mydatabase

# created or switched to collection named: myTable
collection = db.myTable

# find() all the entries inside collection 'myTable'
cursor = collection.find()
for record in cursor:
    print(record)

This article is contributed by Rishabh Bansal and Shaurya Uppal.
sagartomar9927
Can we declare more than one class in a single Java program? | A single Java program can contain two or more classes; this is possible in two ways in Java.
Nested Classes
Multiple non-nested classes
In the below example, the Java program contains two classes: one named Computer and the other named Laptop. Both classes have their own constructors and a method. In the main method, we create an object of each class and call their methods.
public class Computer {
Computer() {
System.out.println("Constructor of Computer class.");
}
void computer_method() {
System.out.println("Power gone! Shut down your PC soon...");
}
public static void main(String[] args) {
Computer c = new Computer();
Laptop l = new Laptop();
c.computer_method();
l.laptop_method();
}
}
class Laptop {
Laptop() {
System.out.println("Constructor of Laptop class.");
}
void laptop_method() {
System.out.println("99% Battery available.");
}
}
When we compile the above program, two .class files will be created: Computer.class and Laptop.class. This has the advantage that we can reuse a .class file in other projects without compiling the code again. In short, the number of .class files created equals the number of classes in the code. We can create as many classes as we want, but writing many classes in a single file is not recommended as it makes the code difficult to read; rather, we can create a separate file for every class.
Constructor of Computer class.
Constructor of Laptop class.
Power gone! Shut down your PC soon...
99% Battery available.
Once the main class is compiled which has several inner classes, the compiler generates separate .class files for each of the inner classes.
// Main class
public class Main {
class Test1 { // Inner class Test1
}
class Test2 { // Inner class Test2
}
public static void main(String [] args) {
new Object() { // Anonymous inner class 1
};
new Object() { // Anonymous inner class 2
};
System.out.println("Welcome to Tutorials Point");
}
}
In the above program, we have a Main class that has four inner classes Test1, Test2, Anonymous inner class 1 and Anonymous inner class 2. Once we compile this class, it will generate the following class files.
Main.class
Main$Test1.class
Main$Test2.class
Main$1.class
Main$2.class
Welcome to Tutorials Point
Satellite imagery access and analysis in Python & Jupyter notebooks | by Abdishakur | Towards Data Science | The amount of satellite imagery collected every day across the globe is vast. Frequent global coverage of the Earth, high resolution, and ready availability to the public make this data valuable for monitoring the Earth and its environment.
In this tutorial, we will learn how to access satellite images and analyze and visualize them right in a Jupyter notebook environment with Python. Satellite images are pixel-based data just like any other type of image you have used. In geography and remote-sensing terminology, these are called rasters. Raster data mainly consists of satellite images, Lidar data, and georeferenced maps. As we will see, rasters consist of a matrix of cells (rows and columns), and each cell holds information about a location, such as elevation, temperature or vegetation.
We will cover the following in this tutorial:
We will cover the following:
Query, retrieve and download satellite images directly with Python in a Jupyter notebook.
Read and write raster images in Python.
Create RGB and NDVI images from Sentinel 2 bands.
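For the NDVI step, the standard formula is NDVI = (NIR - Red) / (NIR + Red); for Sentinel 2 the near-infrared and red bands are B8 and B4 respectively. A minimal sketch of the per-pixel computation follows (plain Python for clarity; with real rasters you would apply the same formula element-wise to the band arrays):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    nir: reflectance in the near-infrared band (Sentinel 2 B8)
    red: reflectance in the red band (Sentinel 2 B4)
    """
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and weakly in red,
# so NDVI approaches 1; bare soil or water sits near 0.
print(ndvi(0.6, 0.1))   # ~0.71, vegetated pixel
print(ndvi(0.3, 0.25))  # ~0.09, sparse/bare pixel
```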
In this tutorial, we will use Sentinel 2 data. There are many options for accessing Sentinel 2 images, and most of them require website interaction, whether directly via a download utility or via the cloud. However, since we are using Jupyter notebooks, we will access the images right here using sentinelsat, a Python library which makes searching, retrieving and downloading Sentinel satellite images easy. So let us start by installing sentinelsat through pip.
pip install sentinelsat
Before we are able to use sentinelsat, we need to register a username at the Copernicus Open Access Hub; note down your username and password to use inside the code.
You are all set to use sentinelsat and download Sentinel Satellite images. In this tutorial, we will use boundary data from Roma city, Italy. In the southern part of Roma, there is a natural reserve called Castel Porziano which we will use as a boundary to clip from the whole satellite image tile.
I have the boundary of the natural reserve as a Shapefile, and we will read it with Geopandas and visualize it with the Folium Python library (Note that I have covered Geopandas and vector data analysis in a 3 part series of articles available here: Part 1, Part 2, and Part 3).
With the above code, we have read the natural reserve shapefile into Geopandas and called it nReserve, then created an empty base map in Folium centred around coordinates in the area, which we call m. Finally, we add the Geopandas data to the base map we have created to visualize the natural reserve boundary we are interested in. Below you can see the map.
One last step before we can search and download Sentinel 2 images is to create a footprint from the nReserve geometry. Here we will use the Shapely Python library, since our data is in Shapefiles and we have already read it as a Geopandas GeoDataFrame. (Note that if you have GeoJSON data, sentinelsat provides a handy way to convert your data into a proper format in the query.)
Now we can run a query on the api we have created above. There are different ways you can construct your query here depending on your use case. In this example, we will create a query for Sentinel 2 images Level 2A with cloud coverage between 0 and 10 that fall in or intersect with the footprint (area of study). For the time period, we are interested only in Sentinel Level 2A satellite images taken between ‘20190601’ and ‘20190626’ (for reference on valid search queries please refer to scihub).
We get a dictionary of all products available in this period matching the query specification. In this case, we receive only 6 images, but you can tweak the query for your use case, for example by expanding the time period or increasing the cloud coverage percentage.
From here we can create a GeoDataFrame or DataFrame from the product dictionary and sort it according to cloud coverage percentage. I prefer a GeoDataFrame to a plain DataFrame as the former holds the geometry of each satellite image tile. As we do not have many products here, once we create and sort the GeoDataFrame we can call the products_gdf_sorted table directly to see the attributes of all 6 rows.
The table below only shows the first columns of the products_gdf_sorted table. In the index, you have tile IDs which you can use to download a particular image. Additional columns include a title, which has the full name of the tile, and some other useful columns like cloud coverage percentage.
Let us say we are interested in the first satellite image, since this has the least cloud coverage of all available images. We can simply call download and provide the product name (note that you can download all images at once with the api.download_all() function).
This will take a while (Sentinel 2 Satellite image tiles are about 1 GB). Once the download is finished, we can simply unzip it. In the next section, we will use the downloaded satellite images to process, analyze and visualize them.
Once we unzip the downloaded folder, we get many subfolders and it is sometimes hard to navigate through these folders. Here is a tree of the folders.
Sentinel-2 data is multispectral with 13 bands in the visible, near infrared and shortwave infrared spectrum. These bands come in different spatial resolutions ranging from 10 m to 60 m, so the images can be categorized as high-to-medium resolution. While there are other higher-resolution satellites available (1 m to 0.5 m), Sentinel-2 data is free and has a short revisit time (5 days), which makes it an excellent option to study environmental challenges. Here is a useful table of Sentinel 2 band colours.
The true colour of satellite images is often displayed in a combination of the red, green and blue bands. Let us first read the data with Rasterio and create an RGB image from Bands 4, 3, and 2.
First, we open an empty RGB.tiff in Rasterio with the same parameters — i.e. width, height, CRS, etc.. — of Band 4 ( You can choose any of the three bands). Then we need to write those bands to the empty RGB image.
One important preprocessing task is to clip or mask an area of study. Since this RGB image is large, you save both computing power and time by clipping and using only the area of interest. We will clip the natural reserve area from the RGB image.
Here, we first reproject our Natural reserve with the same projection as the original image. Next, we open the RGB image, get the metadata and mask with the projected boundary.
The result is only the masked/clipped area of interest as shown in the above image.
The Normalized Difference Vegetation Index (NDVI) is an important indicator for assessing the presence/absence of green vegetation in satellite images. To calculate the NDVI, you need the Red band and the Near-Infrared (NIR) band. Different satellites assign different numbers to these bands; Sentinel images have red in the 4th band and NIR in the 8th band. The formula for NDVI calculation is:
(nir - red) / (nir + red)
To carry out this in Rasterio, we first need to read the 4th and 8th bands as arrays. We also need to make sure that the arrays are floats.
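That step can be sketched with plain NumPy (two small hypothetical band arrays stand in for the Rasterio reads; the epsilon in the denominator is my own guard against dividing by zero on empty pixels):

```python
import numpy as np

# Hypothetical stand-ins for the Band 4 (red) and Band 8 (NIR)
# arrays that would normally be read with Rasterio.
red = np.array([[300, 250], [400, 0]], dtype=np.float64)
nir = np.array([[900, 750], [400, 0]], dtype=np.float64)

# NDVI = (NIR - Red) / (NIR + Red); the tiny epsilon keeps
# pixels where both bands are zero from dividing by zero.
ndvi = (nir - red) / (nir + red + 1e-10)

print(ndvi.round(2))
```

Values near +1 indicate dense vegetation, values near 0 bare ground, and negative values typically water.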
The output is an NDVI image which shows the vegetation level of areas in the satellite image, as shown below. For example, water has low vegetation (shown in red in the image).
Accessing Sentinel 2 images with Python is made easy with sentinelsat. In this tutorial, we have covered how to construct a query and retrieve information on available images, as well as how to download Sentinel 2 images within Jupyter notebooks. We have also seen how to preprocess, create RGB and NDVI images and visualize raster images with Rasterio. The code for this tutorial is available in this Github repository with Google Colab Notebooks that you can run directly. Feel free to experiment and let me know if you have comments or questions.
Python Getting sublist element till N | In this tutorial, we are going to write a program that returns a sublist element till nth sublist in a list. Let's say we have the following list with 5 sublists.
[['Python', 'Java'], ['C', 'Pascal'], ['Javascript', 'PHP'], ['C#', 'C++'], ['React', 'Angular']]
Now, we have to get the first element from the first three sublists. We can get the elements using different approaches. Let's see some of them.
The most generic approach, and the first that occurs to most programmers, is to use loops. Let's see the code using loops.
# initializing the list and N
random_list = [['Python', 'Java'], ['C', 'Pascal'], ['Javascript', 'PHP'], ['C#', 'C++'], ['React', 'Angular']]
N = 3
# empty list to store final elements from the sublists
desired_elements = []
# iterating over the list till 3rd element
for i in range(N):
   # storing the first element from the sublist
   desired_elements.append(random_list[i][0])
# printing the elements
print(desired_elements)
If you run the above code, then you will get the following result.
['Python', 'C', 'Javascript']
We can use the list comprehensions in place of for loop. Let's see the same code using list comprehensions.
# initializing the list and N
random_list = [['Python', 'Java'], ['C', 'Pascal'], ['Javascript', 'PHP'], ['C#', 'C++'], ['React', 'Angular']]
N = 3
# getting first element from the sublists
desired_elements = [sublist[0] for sublist in random_list[:N]]
# printing the elements
print(desired_elements)
If you run the above code, then you will get the following result.
['Python', 'C', 'Javascript']
Python provides a lot of built-in modules and methods. Let's use them to solve our problem. We are going to use the map, itemgetter, and islice methods to achieve the expected output. Let's see the code.
# importing the required methods
import operator # for itemgetter
import itertools # for islice
# initializing the list and N
random_list = [['Python', 'Java'], ['C', 'Pascal'], ['Javascript', 'PHP'], ['C#', 'C++'], ['React', 'Angular']]
N = 3
# getting first element from the sublists
desired_elements = list(map(operator.itemgetter(0), itertools.islice(random_list, N)))
# printing the elements
print(desired_elements)
If you run the above code, then you will get the following result.
['Python', 'C', 'Javascript']
You can take any element in place of the first element. We have taken the first element for the demonstration. If you have any doubts in the tutorial, mention them in the comment section.
How to find and filter Duplicate rows in Pandas ? | Sometimes during our data analysis, we need to look at the duplicate rows to understand more about our data rather than dropping them straight away.
Luckily, in pandas we have a few methods to play with the duplicates.
The duplicated() method allows us to extract duplicate rows in a DataFrame. We will use a new dataset with duplicates. I have downloaded the HR dataset from link.
import pandas as pd
import numpy as np
# Import HR Dataset with certain columns
df = pd.read_csv("https://raw.githubusercontent.com/sasankac/TestDataSet/master/HRDataset.csv",
                 usecols = ("Employee_Name", "PerformanceScore", "Position", "CitizenDesc"))
#Sort the values on employee name and make it permanent
df.sort_values("Employee_Name", inplace = True)
df.head(3)
The way duplicated() works by default is governed by the keep parameter. This parameter marks the very first occurrence of each value as a non-duplicate.
This method does not mark a row as a duplicate just because its value exists more than once; rather, it marks each subsequent row after the first as a duplicate. Confused? Let me try to explain one more time with an example: suppose there are 3 apples in a basket. What this method does is mark the first apple as a non-duplicate and the remaining two apples as duplicates.
df["Employee_Name"].head(3)
0 Adinolfi
1 Adinolfi
2 Adinolfi
Name: Employee_Name, dtype: object
df["Employee_Name"].duplicated().head(3)
0 False
1 True
2 True
Name: Employee_Name, dtype: bool
Now, to extract the duplicates (remember the first occurrence is not a duplicate; rather the subsequent occurrences are duplicates and will be output by this method), we need to pass this method to a DataFrame.
df.shape
(310, 4)
df[df["Employee_Name"].duplicated()]
79 rows × 4 columns
From the output above, there are 310 rows in total, of which 79 duplicates are extracted by using the .duplicated() method.
By default, this method marks the first occurrence of a value as a non-duplicate; we can change this behavior by passing the argument keep="last".
In the apple example, this parameter marks the first two apples as duplicates and the last one as a non-duplicate.
df[df["Employee_Name"].duplicated(keep="last")]
The keep parameter will also accept an additional argument “false” which will mark all the values occurring more than once as duplicates, in our case all the 3 apples will be marked as duplicates rather the first or last as shown in the above examples.
Note – when specifying False, do not use quotes: pass the boolean False, not the string "false".
df[df["Employee_Name"].duplicated(keep=False)]
Now finally, to extract the unique values from a dataset we can use the “~” (tilda) symbol to negate the values
df_unique = ~df["Employee_Name"].duplicated(keep=False)
df[df_unique]
This method is pretty similar to the previous method; however, it can be used on a DataFrame rather than on a single Series.
NOTE :- By default, this method looks for duplicate rows across all the columns of a DataFrame and drops them.
len(df)
310
len(df.drop_duplicates())
290
The subset parameter accepts a list of column names as string values in which we can check for duplicates.
df1 = df.drop_duplicates(subset=["Employee_Name"], keep="first")
df1
We can specify multiple columns and use all the keep parameters discussed in the previous section.
df1 = df.drop_duplicates(subset=["Employee_Name", "CitizenDesc"], keep=False)
df1
The unique() method finds the unique values in a Series and returns them as an array. This method does not exclude missing values.
len(df["Employee_Name"])
310
df["Employee_Name"].unique()
array(['Adinolfi', 'Anderson', 'Andreola', 'Athwal', 'Beak', 'Bondwell',
'Bozzi', 'Bramante', 'Brill', 'Brown', 'Burkett', 'Butler',
'Carabbio', 'Carey', 'Carr', 'Carter', 'Chace', 'Champaigne',
'Chan', 'Chang', 'Chivukula', 'Cierpiszewski', 'Cisco', 'Clayton',
'Cloninger', 'Close', 'Clukey', 'Cockel', 'Cole', 'Cornett',
'Costa', 'Crimmings', 'Daneault', 'Daniele', 'Darson', 'Davis',
'DeGweck', 'Del Bosque', 'Demita', 'Desimone', 'DiNocco',
'Dickinson', 'Dietrich', 'Digitale', 'Dobrin', 'Dolan', 'Dougall',
'Dunn', 'Eaton', 'Employee_Name', 'Engdahl', 'England', 'Erilus',
'Estremera', 'Evensen', 'Exantus', 'Faller', 'Fancett', 'Favis',
'Ferguson', 'Fernandes', 'Ferreira', 'Fidelia', 'Fitzpatrick',
'Foreman', 'Foss', 'Foster-Baker', 'Fraval', 'Friedman', 'Galia',
'Garcia', 'Garneau', 'Gaul', 'Gentry', 'Gerke', 'Gill', 'Gonzales',
'Gonzalez', 'Good', 'Handschiegl', 'Hankard', 'Harrison',
'Heitzman', 'Horton', 'Houlihan', 'Howard', 'Hubert', 'Hunts',
'Hutter', 'Huynh', 'Immediato', 'Ivey', 'Jackson', 'Jacobi',
'Jeannite', 'Jeremy Prater', 'Jhaveri', 'Johnson', 'Johnston',
'Jung', 'Kampew', 'Keatts', 'Khemmich', 'King', 'Kinsella',
'Kirill', 'Knapp', 'Kretschmer', 'LaRotonda', 'Lajiri', 'Langford',
'Langton', 'Latif', 'Le', 'LeBel', 'LeBlanc', 'Leach', 'Leruth',
'Liebig', 'Linares', 'Linden', 'Lindsay', 'Lundy', 'Lunquist',
'Lydon', 'Lynch', 'MacLennan', 'Mahoney', 'Manchester', 'Mancuso',
'Mangal', 'Martin', 'Martins', 'Maurice', 'McCarthy', 'McKinzie',
'Mckenna', 'Meads', 'Medeiros', 'Merlos', 'Miller', 'Monkfish',
'Monroe', 'Monterro', 'Moran', 'Morway', 'Motlagh', 'Moumanil',
'Mullaney', 'Murray', 'Navathe', 'Ndzi', 'Newman', 'Ngodup',
'Nguyen', 'Nowlan', 'O'hare', 'Oliver', 'Onque', 'Osturnka',
'Owad', 'Ozark', 'Panjwani', 'Patronick', 'Pearson', 'Pelech',
'Pelletier', 'Perry', 'Peters', 'Peterson', 'Petingill',
'Petrowsky', 'Pham', 'Pitt', 'Potts', 'Power', 'Punjabhi',
'Purinton', 'Quinn', 'Rachael', 'Rarrick', 'Rhoads', 'Riordan',
'Rivera', 'Roberson', 'Robertson', 'Robinson', 'Roby', 'Roehrich',
'Rogers', 'Roper', 'Rose', 'Rossetti', 'Roup', 'Ruiz', 'Saada',
'Saar-Beckles', 'Sadki', 'Sahoo', 'Salter', 'Sander', 'Semizoglou',
'Sewkumar', 'Shepard', 'Shields', 'Simard', 'Singh', 'Sloan',
'Smith', 'Soto', 'South', 'Sparks', 'Spirea', 'Squatrito',
'Stanford', 'Stanley', 'Steans', 'Stoica', 'Strong', 'Sullivan',
'Sutwell', 'Sweetwater', 'Szabo', 'Tavares', 'Tejeda', 'Veera',
'Von Massenbach', 'Wallace', 'Wang', 'Zhou', 'Zima'], dtype=object)
len(df["Employee_Name"].unique())
231
This method returns the number of unique values in a series. This method by default excludes the missing values using the parameter dropna = True.
You can pass the False argument to dropna parameter to not drop the missing values.
df["Employee_Name"].nunique()
231
df["Employee_Name"].nunique(dropna=False)
231
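As a toy illustration of the dropna behaviour (a small hand-made Series, not the HR dataset): nunique() skips NaN by default, but counts it as one extra distinct value when dropna=False.

```python
import pandas as pd
import numpy as np

# A small Series with a repeated name and two missing values.
s = pd.Series(["Adinolfi", "Anderson", np.nan, "Adinolfi", np.nan])

print(s.nunique())              # NaN excluded by default
print(s.nunique(dropna=False))  # NaN counted as one distinct value
```

Here the first call reports two unique names, while the second reports three because the missing value is treated as its own value.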
Count of Squares that are parallel to the coordinate axis from the given set of N points - GeeksforGeeks | 27 Mar, 2020
Given an array of points points[] in a cartesian coordinate system, the task is to find the count of the squares that are parallel to the coordinate axis.
Examples:
Input: points[] = {(0, 0), (0, 2), (2, 0), (2, 2), (1, 1)}
Output: 1
Explanation: The points (0, 0), (0, 2), (2, 0), (2, 2) form a square which is parallel to the X-axis and Y-axis, hence the count of such squares is 1.
Input: points[] = {(2, 0), (0, 2), (2, 2), (0, 0), (-2, 2), (-2, 0)}
Output: 2
Explanation: The points (0, 0), (0, 2), (2, 0), (2, 2) form one square, whereas the points (0, 0), (0, 2), (-2, 0), (-2, 2) form another square parallel to the X-axis and Y-axis, hence the count of such squares is 2.
Approach: The idea is to choose two points from the array that could form the diagonal of an axis-parallel square, derive the other two corner points with the help of the distance between the chosen points, and check whether those corners exist in the array. If they do, there is one such possible square.
Below is the implementation of the above approach:
C++
// C++ implementation to find count of Squares
// that are parallel to the coordinate axis
// from the given set of N points

#include <bits/stdc++.h>
using namespace std;

#define sz(x) int(x.size())

// Function to get distance
// between two points
int get_dis(pair<int, int> p1, pair<int, int> p2)
{
    int a = abs(p1.first - p2.first);
    int b = abs(p1.second - p2.second);
    return ((a * a) + (b * b));
}

// Function to check that the points
// form a square parallel to
// the co-ordinate axis
bool check(pair<int, int> p1, pair<int, int> p2,
           pair<int, int> p3, pair<int, int> p4)
{
    int d2 = get_dis(p1, p2);
    int d3 = get_dis(p1, p3);
    int d4 = get_dis(p1, p4);

    if (d2 == d3 && 2 * d2 == d4
        && 2 * get_dis(p2, p4) == get_dis(p2, p3)) {
        return true;
    }
    if (d3 == d4 && 2 * d3 == d2
        && 2 * get_dis(p3, p2) == get_dis(p3, p4)) {
        return true;
    }
    if (d2 == d4 && 2 * d2 == d3
        && 2 * get_dis(p2, p3) == get_dis(p2, p4)) {
        return true;
    }
    return false;
}

// Function to find all the squares which are
// parallel to the co-ordinate axis
int count(map<pair<int, int>, int> hash,
          vector<pair<int, int> > v, int n)
{
    int ans = 0;
    map<pair<int, int>, int> vis;

    // Loop to choose two points
    // from the array of points
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (i == j)
                continue;
            pair<int, int> p1 = make_pair(v[i].first, v[j].second);
            pair<int, int> p2 = make_pair(v[j].first, v[i].second);
            set<pair<int, int> > s;
            s.insert(v[i]);
            s.insert(v[j]);
            s.insert(p1);
            s.insert(p2);
            if (sz(s) != 4)
                continue;

            // Condition to check if the
            // other points are present in the map
            if (hash.find(p1) != hash.end()
                && hash.find(p2) != hash.end()) {
                if ((!vis[v[i]] || !vis[v[j]] || !vis[p1] || !vis[p2])
                    && (check(v[i], v[j], p1, p2))) {
                    vis[v[i]] = 1;
                    vis[v[j]] = 1;
                    vis[p1] = 1;
                    vis[p2] = 1;
                    ans++;
                }
            }
        }
    }
    cout << ans;
    return ans;
}

// Function to Count the number of squares
void countOfSquares(vector<pair<int, int> > v, int n)
{
    map<pair<int, int>, int> hash;

    // Declaring iterator to a vector
    vector<pair<int, int> >::iterator ptr;

    // Adding the points to hash
    for (ptr = v.begin(); ptr < v.end(); ptr++)
        hash[*ptr] = 1;

    // Count the number of squares
    count(hash, v, n);
}

// Driver Code
int main()
{
    int n = 5;
    vector<pair<int, int> > v;
    v.push_back(make_pair(0, 0));
    v.push_back(make_pair(0, 2));
    v.push_back(make_pair(2, 0));
    v.push_back(make_pair(2, 2));
    v.push_back(make_pair(0, 1));

    // Function call
    countOfSquares(v, n);
    return 0;
}
1
Performance Analysis:
Time Complexity: The above approach uses two nested loops over the points, hence the time complexity is O(N²).
Auxiliary Space Complexity: The above approach uses extra hash maps over the points, hence the auxiliary space complexity is O(N).
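The same idea can be cross-checked with a compact Python sketch (my own re-implementation, not the article's code): treat every point pair as a candidate bottom-left/top-right diagonal of a square and look the other two corners up in a set.

```python
def count_axis_parallel_squares(points):
    """Count squares whose sides are parallel to the axes."""
    pts = set(points)
    count = 0
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            # A bottom-left/top-right diagonal is visited exactly once,
            # so each square contributes one matching pair here.
            if x1 < x2 and y1 < y2 and (x2 - x1) == (y2 - y1):
                if (x1, y2) in pts and (x2, y1) in pts:
                    count += 1
    return count

print(count_axis_parallel_squares(
    [(0, 0), (0, 2), (2, 0), (2, 2), (1, 1)]))
print(count_axis_parallel_squares(
    [(2, 0), (0, 2), (2, 2), (0, 0), (-2, 2), (-2, 0)]))
```

This reproduces the outputs of the two examples above (1 and 2) and also runs in O(N²) time.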
Finding square root of a number without using library functions - JavaScript | We are required to write a JavaScript function that takes in a number and calculates its square root without using the Math.sqrt() function.
Following is the code −
const square = (n, i, j) => {
let mid = (i + j) / 2;
let mul = mid * mid;
if ((mul === n) || (Math.abs(mul - n) < 0.00001)){
return mid;
}else if (mul < n){
return square(n, mid, j);
}else{
return square(n, i, mid);
}
}
// Function to find the square root of n
const findSqrt = num => {
let i = 1;
   while (true){
// If n is a perfect square
if (i * i === num){
return i;
}else if (i * i > num){
let res = square(num, i - 1, i);
return res;
};
i++;
}
}
console.log(findSqrt(33));
This will produce the following output in console −
5.744562149047852
We looped over from i = 1. If i * i === n, then we returned i, as n is a perfect square whose square root is i. Else, we find the smallest i for which i * i is just greater than n.
Now we know the square root of n lies in the interval (i - 1, i), and we then used the binary search algorithm on that interval to find the square root.
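For comparison, here is the same interval-halving idea as a short Python sketch (a re-implementation of the logic, not the article's code):

```python
def find_sqrt(n, eps=1e-5):
    """Approximate the square root of a non-negative number by binary search."""
    # For 0 <= n < 1 the root is larger than n, so widen the interval to [n, 1].
    lo, hi = (n, 1.0) if n < 1 else (1.0, float(n))
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < n:
            lo = mid      # root is in the upper half
        else:
            hi = mid      # root is in the lower half
    return (lo + hi) / 2

print(find_sqrt(33))  # close to 5.7445
```

Each iteration halves the search interval, so the loop runs in O(log((hi - lo) / eps)) steps.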
C library function - atexit() | The C library function int atexit(void (*func)(void)) causes the specified function func to be called when the program terminates. You can register your termination function anywhere you like, but it will be called at the time of the program termination.
Following is the declaration for atexit() function.
int atexit(void (*func)(void))
func − This is the function to be called at the termination of the program.
This function returns zero if the function is registered successfully; otherwise, a non-zero value is returned.
The following example shows the usage of atexit() function.
#include <stdio.h>
#include <stdlib.h>
void functionA () {
printf("This is functionA\n");
}
int main () {
/* register the termination function */
atexit(functionA );
printf("Starting main program...\n");
printf("Exiting main program...\n");
return(0);
}
Let us compile and run the above program that will produce the following result −
Starting main program...
Exiting main program...
This is functionA
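For comparison, Python's standard library offers the same registration pattern through its atexit module. The sketch below mirrors the C example, running it in a fresh interpreter so the exit-time output can be captured:

```python
import subprocess
import sys

# A small program mirroring the C example: register a handler and
# let it run when the interpreter terminates normally.
program = """
import atexit

def function_a():
    print("This is functionA")

atexit.register(function_a)  # analogous to C's atexit(functionA)
print("Starting main program...")
print("Exiting main program...")
"""

result = subprocess.run([sys.executable, "-c", program],
                        capture_output=True, text=True)
print(result.stdout, end="")
```

As in C, the registered handler fires after the main program finishes, so its line is printed last.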
Introduction to Anomaly Detection in Python with PyCaret | by Moez Ali | Towards Data Science | PyCaret is an open-source, low-code machine learning library in Python that automates machine learning workflows. It is an end-to-end machine learning and model management tool that speeds up the experiment cycle exponentially and makes you more productive.
In comparison with the other open-source machine learning libraries, PyCaret is an alternate low-code library that can be used to replace hundreds of lines of code with few lines only. This makes experiments exponentially fast and efficient. PyCaret is essentially a Python wrapper around several machine learning libraries and frameworks such as scikit-learn, XGBoost, LightGBM, CatBoost, spaCy, Optuna, Hyperopt, Ray, and a few more.
The design and simplicity of PyCaret are inspired by the emerging role of citizen data scientists, a term first used by Gartner. Citizen Data Scientists are power users who can perform both simple and moderately sophisticated analytical tasks that would previously have required more technical expertise.
To learn more about PyCaret, you can check the official website or GitHub.
In this tutorial we will learn:
Getting Data: How to import data from the PyCaret repository.
Setting up Environment: How to set up an unsupervised anomaly detection experiment in PyCaret.
Create Model: How to create a model and assign anomaly labels to the dataset for analysis.
Plot Model: How to analyze model performance using various plots.
Predict Model: How to assign anomaly labels to new/unseen dataset based on the trained model?
Save / Load Model: How to save/load model for future use?
Installation is easy and will only take a few minutes. PyCaret’s default installation from pip only installs hard dependencies as listed in the requirements.txt file.
pip install pycaret
To install the full version:
pip install pycaret[full]
Anomaly Detection is the task of identifying the rare items, events, or observations that raise suspicions by differing significantly from the majority of the data. Typically the anomalous items will translate to some kind of problems such as bank fraud, a structural defect, medical problems, or errors in a text. There are three broad categories of anomaly detection techniques that exist:
Unsupervised anomaly detection: Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the dataset are normal by looking for instances that seem to fit least to the remainder of the data set.
Supervised anomaly detection: This technique requires a dataset that has been labeled as “normal” and “abnormal” and involves training a classifier.
Semi-supervised anomaly detection: This technique constructs a model representing normal behavior from a given normal training dataset, and then tests the likelihood of a test instance to be generated by the learned model.
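Before turning to PyCaret itself, the unsupervised case can be made concrete with a tiny NumPy sketch (not PyCaret code): flag the observations that sit far from the bulk of the data, here with a simple z-score rule.

```python
import numpy as np

rng = np.random.default_rng(123)

# Mostly "normal" observations plus three injected outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=97)
data = np.concatenate([normal, [8.0, -9.0, 10.0]])

# Unsupervised rule: anything more than 3 standard deviations
# from the mean is treated as an anomaly.
z = np.abs((data - data.mean()) / data.std())
anomalies = data[z > 3]

print(sorted(anomalies))
```

Libraries like PyCaret wrap far more capable detectors (Isolation Forest, One-Class SVM, and others), but the underlying goal is the same: separate the rare observations from the majority without any labels.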
PyCaret’s anomaly detection module (pycaret.anomaly) is an unsupervised machine learning module that performs the task of identifying rare items, events, or observations that raise suspicions by differing significantly from the majority of the data.
PyCaret anomaly detection module provides several pre-processing features that can be configured when initializing the setup through setup function. It has over 12 algorithms and a few plots to analyze the results of anomaly detection. PyCaret's anomaly detection module also implements a unique function tune_model that allows you to tune the hyperparameters of the anomaly detection model to optimize the supervised learning objective such as AUC for classification or R2 for regression.
In this tutorial, we will use a dataset from UCI called Mice Protein Expression. The data set consists of the expression levels of 77 proteins that produced detectable signals in the nuclear fraction of the cortex. The dataset contains a total of 1080 measurements per protein. Each measurement can be considered as an independent sample (mouse).
Higuera C, Gardiner KJ, Cios KJ (2015) Self-Organizing Feature Maps Identify Proteins Critical to Learning in a Mouse Model of Down Syndrome. PLoS ONE 10(6): e0129126. [Web Link] journal.pone.0129126
You can download the data from the original source found here and load it using pandas (Learn How) or you can use PyCaret’s data repository to load the data using the get_data() function (This will require an internet connection).
This dataset is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
This allows for the sharing and adaptation of the datasets for any purpose, provided that the appropriate credit is given. (Source)
from pycaret.datasets import get_data
dataset = get_data('mice')
# check the shape of data
dataset.shape
>>> (1080, 82)
In order to demonstrate the use of the predict_model function on unseen data, a sample of 5% (54 records) has been withheld from the original dataset to be used for predictions at the end of the experiment.
data = dataset.sample(frac=0.95, random_state=786)
data_unseen = dataset.drop(data.index)
data.reset_index(drop=True, inplace=True)
data_unseen.reset_index(drop=True, inplace=True)
print('Data for Modeling: ' + str(data.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))
>>> Data for Modeling: (1026, 82)
>>> Unseen Data For Predictions: (54, 82)
The setup function in PyCaret initializes the environment and creates the transformation pipeline for modeling and deployment. setup must be called before executing any other function in pycaret. It takes only one mandatory parameter: a pandas dataframe. All other parameters are optional can be used to customize the preprocessing pipeline.
When setup is executed, PyCaret's inference algorithm will automatically infer the data types for all features based on certain properties. The data type should be inferred correctly but this is not always the case. To handle this, PyCaret displays a prompt, asking for data types confirmation, once you execute the setup. You can press enter if all data types are correct or type quit to exit the setup.
Ensuring that the data types are correct is really important in PyCaret as it automatically performs multiple type-specific preprocessing tasks which are imperative for machine learning models.
Alternatively, you can also use numeric_features and categorical_features parameters in the setup to pre-define the data types.
from pycaret.anomaly import *
exp_ano101 = setup(data, normalize = True, ignore_features = ['MouseID'], session_id = 123)
Once the setup has been successfully executed it displays the information grid which contains some important information about the experiment. Most of the information is related to the pre-processing pipeline which is constructed when setup is executed. The majority of these features are out of scope for this tutorial, however, a few important things to note are:
session_id: A pseudo-random number distributed as a seed in all functions for later reproducibility. If no session_id is passed, a random number is automatically generated that is distributed to all functions. In this experiment, session_id is set as 123 for later reproducibility.
Missing Values: When there are missing values in the original data, this shows as True. Notice that Missing Values in the information grid above is True, as the data contains missing values which are automatically imputed using the mean for numeric features and a constant for categorical features. The method of imputation can be changed using the numeric_imputation and categorical_imputation parameters in the setup.
Original Data: Displays the original shape of the dataset. In this experiment (1026, 82) means 1026 samples and 82 features.
Transformed Data: Displays the shape of the transformed dataset. Notice that the shape of the original dataset (1026, 82) is transformed into (1026, 91). The number of features has increased due to the encoding of categorical features in the dataset.
Numeric Features: Number of features inferred as numeric. In this dataset, 77 out of 82 features are inferred as numeric.
Categorical Features: Number of features inferred as categorical. In this dataset, 5 out of 82 features are inferred as categorical. Also, notice we have ignored one categorical feature i.e. MouseID using ignore_feature parameter.
Notice how a few tasks that are imperative to perform modeling are automatically handled such as missing value imputation, categorical encoding, etc. Most of the parameters in the setup function are optional and used for customizing the pre-processing pipeline. These parameters are out of scope for this tutorial but I will write more about them later.
Creating an anomaly detection model in PyCaret is simple and similar to how you would have created a model in supervised modules of PyCaret. The anomaly detection model is created using create_model function which takes one mandatory parameter i.e. name of the model as a string. This function returns a trained model object. See the example below:
iforest = create_model('iforest')
print(iforest)

>>> OUTPUT
IForest(behaviour='new', bootstrap=False, contamination=0.05, max_features=1.0, max_samples='auto', n_estimators=100, n_jobs=-1, random_state=123, verbose=0)
We have created an Isolation Forest model using create_model. Notice that the contamination parameter is set to 0.05, which is the default value when you do not pass the fraction parameter. The fraction parameter determines the proportion of outliers in the dataset. In the example below, we will create a One Class Support Vector Machine model with a fraction of 0.025.
svm = create_model('svm', fraction = 0.025)
print(svm)

>>> OUTPUT
OCSVM(cache_size=200, coef0=0.0, contamination=0.025, degree=3, gamma='auto', kernel='rbf', max_iter=-1, nu=0.5, shrinking=True, tol=0.001, verbose=False)
To see the complete list of models available in the model library, please check the documentation or use the models function.
models()
Now that we have created a model, we would like to assign the anomaly labels to the dataset used in setup (1026 samples) to analyze the results. We will achieve this by using the assign_model function.
iforest_results = assign_model(iforest)
iforest_results.head()
Notice that two columns, Anomaly and Anomaly_Score, are added towards the end. 0 stands for inliers and 1 for outliers/anomalies. Anomaly_Score holds the values computed by the algorithm; outliers are assigned larger anomaly scores. Notice that iforest_results also includes the MouseID column that we dropped during setup. It wasn't used by the model and is only appended to the dataset when you use assign_model.
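Conceptually, the fraction parameter (exposed as contamination on the estimator) just flags the top-scoring share of rows as outliers. A minimal numpy sketch of that labeling rule, as an illustration only, not PyCaret's internal code:

```python
import numpy as np

# Toy anomaly scores; higher means "more anomalous"
scores = np.array([0.10, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.20, 0.90])

def label_outliers(scores, fraction=0.05):
    """Flag the top `fraction` of scores as outliers (1), the rest as inliers (0)."""
    threshold = np.quantile(scores, 1 - fraction)
    return (scores > threshold).astype(int)

labels = label_outliers(scores, fraction=0.10)
print(labels.sum())  # only the highest score (0.90) is flagged
```

With fraction=0.10 on ten rows, exactly one row (the one scoring 0.90) receives the label 1.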
The plot_model function can be used to analyze the anomaly detection model from different aspects. This function takes a trained model object and returns a plot.
plot_model(iforest, plot = 'tsne')
plot_model(iforest, plot = 'umap')
The predict_model function is used to assign anomaly labels to a new unseen dataset. We will now use our trained iforest model to predict the data stored in data_unseen. This variable was created at the beginning of the tutorial and contains 54 samples from the original dataset that were never exposed to PyCaret.
unseen_predictions = predict_model(iforest, data=data_unseen)
unseen_predictions.head()
The Anomaly column indicates the outlier label (1 = outlier, 0 = inlier). Anomaly_Score holds the values computed by the algorithm; outliers are assigned larger anomaly scores. You can also use the predict_model function to label the training data.
data_predictions = predict_model(iforest, data = data)
data_predictions.head()
We have now finished the experiment by using our iforest model to predict labels on unseen data.
This brings us to the end of our experiment, but one question is still to be asked: What happens when you have more new data to predict? Do you have to go through the entire experiment again? The answer is no, PyCaret’s inbuilt function save_model allows you to save the model along with the entire transformation pipeline for later use.
save_model(iforest, 'Final IForest Model 25Nov2020')
To load a saved model at a future date in the same or an alternative environment, we would use PyCaret’s load_model function and then easily apply the saved model on new unseen data for prediction.
saved_iforest = load_model('Final IForest Model 25Nov2020')
new_prediction = predict_model(saved_iforest, data=data_unseen)
new_prediction.head()
We have only covered the basics of PyCaret’s Anomaly Detection Module. In the following tutorials, we will go deeper into advanced pre-processing techniques that allow you to fully customize your machine learning pipeline and are a must-know for any data scientist.
Thank you for reading 🙏
⭐ Tutorials: New to PyCaret? Check out our official notebooks!
📋 Example Notebooks created by the community.
📙 Blog: Tutorials and articles by contributors.
📚 Documentation: The detailed API docs of PyCaret.
📺 Video Tutorials: Our video tutorials from various events.
📢 Discussions: Have questions? Engage with community and contributors.
🛠️ Changelog: Changes and version history.
🌳 Roadmap: PyCaret's software and community development plan.
I write about PyCaret and its use-cases in the real world. If you would like to be notified automatically, you can follow me on Medium, LinkedIn, and Twitter.
10 TensorFlow Tricks Every ML Practitioner Must Know | by Rohan Jagtap | Towards Data Science

TensorFlow 2.x offers a lot of simplicity in building models and in overall TensorFlow usage. So what’s new in TF2?
Easy model building with Keras and eager execution.
Robust model deployment in production on any platform.
Powerful experimentation for research.
Simplifying the API by cleaning up deprecated APIs and reducing duplication.
In this article, we will explore 10 features of TF 2.0 that make working with TensorFlow smoother, reduce lines of code, and increase efficiency, since these functions/classes belong to the TensorFlow API.
The tf.data API offers functions for data pipelining and related operations. We can build pipelines, map preprocessing functions, shuffle or batch a dataset and much more.
>>> dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
>>> iter(dataset).next().numpy()
8
# Shuffle
>>> dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).shuffle(6)
>>> iter(dataset).next().numpy()
0

# Batch
>>> dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).batch(2)
>>> iter(dataset).next().numpy()
array([8, 3], dtype=int32)

# Shuffle and Batch
>>> dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).shuffle(6).batch(2)
>>> iter(dataset).next().numpy()
array([3, 0], dtype=int32)
>>> dataset0 = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
>>> dataset1 = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
>>> dataset = tf.data.Dataset.zip((dataset0, dataset1))
>>> iter(dataset).next()
(<tf.Tensor: shape=(), dtype=int32, numpy=8>, <tf.Tensor: shape=(), dtype=int32, numpy=1>)
def into_2(num):
    return num * 2

>>> dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).map(into_2)
>>> iter(dataset).next().numpy()
16
This is one of the best features of the tensorflow.keras API (in my opinion). The ImageDataGenerator is capable of generating dataset slices while batching and preprocessing along with data augmentation in real-time.
The Generator allows data flow directly from directories or from dataframes.
A common misconception about data augmentation in ImageDataGenerator is that it adds more data to the existing dataset. Although that is the literal definition of data augmentation, in ImageDataGenerator the images in the dataset are transformed dynamically at different steps of training, so that the model can be trained on noisy data it hasn’t seen.
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)
Here, the rescaling is done on all the samples (for normalizing), while the other parameters are for augmentation.
train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary'
)
We specify the directory for the real-time data flow. This can be done using dataframes as well.
train_generator = train_datagen.flow_from_dataframe(
    dataframe,
    x_col='filename',
    y_col='class',
    class_mode='categorical',
    batch_size=32
)
The x_col parameter defines the full path of the image whereas the y_col parameter defines the label column for classification.
The model can be directly fed with the generator, although the steps_per_epoch parameter needs to be specified, which is essentially number_of_samples // batch_size.
model.fit(
    train_generator,
    validation_data=val_generator,
    epochs=EPOCHS,
    steps_per_epoch=(num_samples // batch_size),
    validation_steps=(num_val_samples // batch_size)
)
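Plugging in concrete (hypothetical) sample counts makes the floor division behind steps_per_epoch clear:

```python
num_samples = 2000       # training images in the directory (assumed for illustration)
num_val_samples = 500    # validation images (assumed)
batch_size = 32

# floor division: number of FULL batches the generator yields per epoch
steps_per_epoch = num_samples // batch_size
validation_steps = num_val_samples // batch_size
print(steps_per_epoch, validation_steps)  # 62 15
```

Any leftover samples that do not fill a complete batch are simply not counted by this formula.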
Data augmentation is necessary. In case of insufficient data, making changes in the data and treating it as a separate datapoint is a very effective way of training under less data.
The tf.image API has tools for transforming images which can be later used for data augmentation with tf.data API discussed earlier.
flipped = tf.image.flip_left_right(image)
visualise(image, flipped)

saturated = tf.image.adjust_saturation(image, 5)
visualise(image, saturated)

rotated = tf.image.rot90(image)
visualise(image, rotated)

cropped = tf.image.central_crop(image, central_fraction=0.5)
visualise(image, cropped)
pip install tensorflow-datasets
This is a very useful library as it is a single go-to point for a dump of very well-known datasets from various domains collected by TensorFlow.
import tensorflow_datasets as tfds

mnist_data = tfds.load("mnist")
mnist_train, mnist_test = mnist_data["train"], mnist_data["test"]
assert isinstance(mnist_train, tf.data.Dataset)
A detailed list of the datasets available in tensorflow-datasets can be found on the Datasets page of the documentation.
Audio, Image, Image classification, Object Detection, Structured, Summarization, Text, Translate, Video are the genres offered by tfds.
Transfer Learning is the new cool in Machine Learning, and it is as important as it sounds. It is often infeasible and impractical to train from scratch a benchmark model that has already been trained by someone else with generous resources (e.g. multiple expensive GPUs that one may not be able to afford). Transfer Learning addresses this issue: a pretrained model can be reused for a given use case or extended for a different use case.
TensorFlow offers benchmark pretrained models that can be extended easily for the desired use case.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE,
    include_top=False,
    weights='imagenet'
)
This base_model can be easily extended with additional layers or with different models. For eg:
model = tf.keras.Sequential([
    base_model,
    global_average_layer,
    prediction_layer
])
For a detailed list of other models and/or modules under tf.keras.applications, refer the docs page.
An Estimator is TensorFlow’s high-level representation of a complete model, and it has been designed for easy scaling and asynchronous training
— TensorFlow Docs
Premade estimators offer a very high level abstraction of a model, so you can directly focus on training the model without worrying about the lower level intricacies. For example:
linear_est = tf.estimator.LinearClassifier(
    feature_columns=feature_columns
)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
This shows how easy it is to build and train an estimator using tf.estimator. Estimators can also be customised.
TensorFlow has many premade estimators including LinearRegressor, BoostedTreesClassifier, etc. A complete, detailed list of estimators can be found at the TensorFlow docs.
Neural Nets are known for many layer deep networks wherein the layers can be of different types. TensorFlow contains many predefined layers (like Dense, LSTM, etc.). But for more complex architectures, the logic of a layer is much more complex than a primary layer. For such instances, TensorFlow allows building custom layers. This can be done by subclassing the tf.keras.layers.Layer class.
class CustomDense(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(CustomDense, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_weight(
            "kernel",
            shape=[int(input_shape[-1]), self.num_outputs]
        )

    def call(self, input):
        return tf.matmul(input, self.kernel)
As stated in the documentation, the best way to implement your own layer is to extend the tf.keras.layers.Layer class and implement:
__init__ , where you can do all input-independent initialization.
build, where you know the shapes of the input tensors and can do the rest of the initialization.
call, where you do the forward computation.
Although the kernel initialization can be done in __init__ itself, it is considered better to be initialized in build as otherwise, you would have to explicitly specify the input_shape on every instance of a new layer creation.
The tf.keras Sequential and the Model API makes training models easier. However, most of the time while training complex models, custom loss functions are used. Moreover, the model training can also differ from the default (for eg. applying gradients separately to different model components).
TensorFlow’s automatic differentiation helps calculating gradients in an efficient way. These primitives are used in defining custom training loops.
def train(model, inputs, outputs, learning_rate):
    with tf.GradientTape() as t:
        # Computing losses from model prediction
        current_loss = loss(outputs, model(inputs))

    # Gradients for trainable variables with obtained losses
    dW, db = t.gradient(current_loss, [model.W, model.b])

    # Applying gradients to weights
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)
This loop can be repeated for multiple epochs and with a more customised setting as per the use case.
Saving a TensorFlow model can be of two types:
SavedModel: Saving the complete state of the model along with all the parameters. This is independent of source code.

model.save_weights('checkpoint')

Checkpoints
Checkpoints capture the exact values of all the parameters used by a model. Models built with the Sequential API or the Model API can simply be saved in the SavedModel format.
However, for custom models, checkpoints are required.
Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
checkpoint_path = "save_path"

# Defining a Checkpoint
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)

# Creating a CheckpointManager object
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# Saving a model
ckpt_manager.save()
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded.
if ckpt_manager.latest_checkpoint:
    ckpt.restore(ckpt_manager.latest_checkpoint)
This is a fairly new feature in TensorFlow.
!pip install keras-tuner
Hyper-parameter tuning or Hypertuning is the process of cherrypicking parameters that define the configuration of a ML model. These factors are the deciding factors for the performance of a model in the aftermath of feature engineering and preprocessing.
# model_builder is a function that builds a model and returns it
tuner = kt.Hyperband(
    model_builder,
    objective='val_accuracy',
    max_epochs=10,
    factor=3,
    directory='my_dir',
    project_name='intro_to_kt'
)
Along with HyperBand, BayesianOptimization and RandomSearch are also available for tuning.
tuner.search(
    img_train, label_train,
    epochs=10,
    validation_data=(img_test, label_test),
    callbacks=[ClearTrainingOutput()]
)

# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
Further, we train the model using the optimal hyper-parameters:
model = tuner.hypermodel.build(best_hps)
model.fit(
    img_train, label_train,
    epochs=10,
    validation_data=(img_test, label_test)
)
If you have multiple GPUs and wish to optimize training by distributing the training loop over them, TensorFlow's various distributed training strategies can optimize GPU usage and manage training across the GPUs for you.
tf.distribute.MirroredStrategy is the most common strategy used. How does it work anyway? The docs state:
All the variables and the model graph are replicated on the replicas.
Input is evenly distributed across the replicas.
Each replica calculates the loss and gradients for the input it received.
The gradients are synced across all the replicas by summing them.
After the sync, the same update is made to the copies of the variables on each replica.
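The sync-and-update steps amount to an all-reduce: per-replica gradients are summed element-wise, and the identical update is applied to every copy of the weights. A toy pure-Python sketch of the idea (not TensorFlow's actual implementation):

```python
# Hypothetical per-replica gradients for a 3-element weight vector
replica_grads = [
    [0.1, -0.2, 0.3],  # gradients computed on replica 0
    [0.3,  0.0, 0.1],  # gradients computed on replica 1
]

# Sync step: element-wise sum across replicas (an all-reduce)
summed = [sum(g) for g in zip(*replica_grads)]

# Every replica then applies the same update to its copy of the weights
lr = 0.1
weights = [1.0, 1.0, 1.0]
weights = [w - lr * g for w, g in zip(weights, summed)]
print(weights)
```

Because every replica sees the same summed gradient, the weight copies stay identical after each step.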
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(
            32, 3, activation='relu', input_shape=(28, 28, 1)
        ),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])

    model.compile(
        loss="sparse_categorical_crossentropy",
        optimizer="adam",
        metrics=['accuracy']
    )
For other strategies and custom training loops, refer the documentation.
TensorFlow is sufficient for building almost all the components of a ML pipeline. The takeaway from this tutorial is an introduction to the various APIs provided by TensorFlow and a quick guide on how to use them.
Here is a link to the GitHub repository of the code. Feel free to fork it.
The code used in this guide is referred from the following official TensorFlow documentation:
# and ## Operators in C?

In this section we will see what the Stringize operator (#) and the Token Pasting operator (##) are in C. The Stringize operator is a preprocessor operator: it instructs the compiler to convert a token into a string. We use this operator in a macro definition.
Using the stringize operator we can convert some text into a string without writing any quotes.
#include<stdio.h>
#define STR_PRINT(x) #x
int main() {
printf(STR_PRINT(This is a string without double quotes));
}
This is a string without double quotes
The Token Pasting operator is a preprocessor operator. It instructs the compiler to concatenate two tokens into one. We use this operator in a macro definition.
#include<stdio.h>
#define STR_CONCAT(x, y) x##y
int main() {
printf("%d", STR_CONCAT(20, 50));
}
2050
2D Histograms with Plotly. How to create more informative... | by Soner Yıldırım | Towards Data Science

Plotly Python (plotly.py) is an open-source plotting library built on plotly javascript (plotly.js). One of the things I like about plotly.py is that it offers a high-level API (plotly express) and a low level API (graph objects) to create visualizations. With plotly express, we can create a dynamic and informative plot with very few lines of code. On the other hand, we need to write more code with graph objects but have more control over what we create.
In this post, we will create 2D histograms, also called density plots, using plotly express.
Histograms are commonly used plots in data analyses to get an overview of the distribution of data. In histograms, the distribution of numerical or categorical data is shown with bars. Each bar represents a value range or category and the height of the bar is proportional to the number of values that fall into that range.
Let’s first create 1D histograms and then upgrade to 2D histograms (or density maps). We will use the famous titanic survival dataset which is available here on Kaggle.
We start with reading the data into a pandas dataframe:
import numpy as np
import pandas as pd

df = pd.read_csv("/content/titanic_train.csv")
print(df.shape)
df.head()
I only use the training dataset which includes data of 891 passengers. We first create histograms on “Age” and “Fare” columns.
# import plotly express
import plotly.express as px

fig = px.histogram(df, x="Age", title="Histogram of Age",
                   width=800, height=500)
fig.show()
We have a wide range of ages, but mostly between 20 and 30. Let's see what the distribution of the "Fare" column looks like.
There are some extreme values in the fare column. For demonstration purposes, I will drop the rows with a fare greater than 100. These rows comprise approximately 6% of the entire dataset.
len(df[df.Fare > 100]) / len(df)
0.05948372615039282

df = df[df.Fare < 100]
We can now plot the histogram of “Fare” column.
fig = px.histogram(df, x="Fare", title="Histogram of Fare",
                   width=800, height=500)
fig.show()
Most of the ticket prices are less than 20 and numbers decrease as we go up to 100.
It is time to introduce 2D histograms which combine 2 different histograms on x-axis and y-axis. Thus, we are able to visualize the density of overlaps or concurrence.
fig = px.density_heatmap(df, x="Age", y="Fare",
                         title="Density Map of Age and Fare",
                         width=800, height=500)
fig.show()
It seems like a scatter plot with regions instead of showing individual points. We have a grid that partitions fare-age combinations. Interactive plots of plotly allow you to see the range each partition represents as well as the number of points in those regions. The yellowish partitions contain the highest number of passengers. As the color gets darker, number of passengers that falls into the partitions decreases.
We can also visualize the histograms that constitute this density plot using marginal parameters.
fig = px.density_heatmap(df, x="Age", y="Fare",
                         title="Density Map of Age and Fare",
                         marginal_x="histogram",
                         marginal_y="histogram",
                         width=800, height=500)
fig.show()
We are able to see both age and fare histograms in addition to the density plot.
As you may already know, the purpose of the titanic survival dataset is to predict whether a passenger survived based on the data given in the dataset. The features (class, age, gender, fare...) are used as independent variables to predict the target (survived) variable. Before implementing a machine learning model, we can use data visualizations to have an idea if certain features affect the survival rate.
Let’s distinguish the density plot of “Age” and “Fare” based on “survived” column using the facet_col parameter.
fig = px.density_heatmap(df, x="Age", y="Fare",
                         title="Density Map of Age and Fare Based on Survival",
                         facet_col="Survived")
fig.show()
Density plots look similar, but we can conclude that being in the yellowish area on the left plot decreases the chance of survival. For these two grids, the ratio of not-survived to survived passengers (183/66) is higher than the same ratio in the entire dataset (535/303).
df.Survived.value_counts()
0    535
1    303
Name: Survived, dtype: int64
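That comparison is simple arithmetic on the counts quoted above:

```python
grid_ratio = 183 / 66       # not-survived / survived inside the yellowish grids
overall_ratio = 535 / 303   # same ratio over the entire (filtered) dataset

print(round(grid_ratio, 2), round(overall_ratio, 2))  # 2.77 1.77
```

The grid ratio is noticeably higher than the overall ratio, which supports the conclusion drawn from the plot.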
We have covered 2D histograms (density plots) with plotly. Of course, this is just a little of what can be done with this amazing library. There are many other plot types that we can dynamically create with plotly. Its syntax is easy to understand as well. I will try to cover more complex plots in the upcoming posts. You can also check the plotly documentation which I think is well-documented with many different examples. Just like any other topic, the best way to get familiar with plotly is to practice. Thus, I suggest creating lots of plots to sharpen your skills.
Thank you for reading. Please let me know if you have any feedback.
Finding remainder of a large number - GeeksforGeeks | 20 Jul, 2021
Number System is an important concept for solving GATE Aptitude questions and aptitude for entrance exams for different companies.
The below is an important question which has been asked in many exams.
Question: 7^126 is not exactly divisible by 48; find the remainder when 7^126 is divided by 48.
Normal approach: To calculate the remainder we would first compute the actual value of 7^126, divide it by 48, and take the remainder. This is a very long and time-consuming process and is not at all feasible. So we use some important mathematical concepts related to divisibility to solve this problem.
Speedy approach : Important concepts for solving problem,
(x^n − a^n) is divisible by (x − a) for every n (n belongs to integers)
(x^n − a^n) is divisible by (x + a) for every even number n (n belongs to integers)
(x^n + a^n) is divisible by (x + a) for every odd number n (n belongs to integers)
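These identities are easy to sanity-check numerically; note that the standard odd-n rule applies to the sum x^n + a^n:

```python
x, a = 5, 2

# (x^n - a^n) is divisible by (x - a) for every positive integer n
assert all((x**n - a**n) % (x - a) == 0 for n in range(1, 20))

# (x^n - a^n) is divisible by (x + a) for every even n
assert all((x**n - a**n) % (x + a) == 0 for n in range(2, 20, 2))

# (x^n + a^n) is divisible by (x + a) for every odd n
assert all((x**n + a**n) % (x + a) == 0 for n in range(1, 20, 2))

print("all three identities hold")
```

For instance, 5^3 + 2^3 = 133 = 7 × 19 is divisible by 5 + 2.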
And we also use another basic formula;
Dividend = divisor x quotient + remainder
The given number can be rewritten in a form such that the base is very near to 48.
This is done by using the formula a^(mn) = (a^m)^n:
7^126 = (7^2)^63 = 49^63
Now, by using our mathematical formulae, we should add or subtract a number to 49^63 such that the result is divisible by 48.
(49^63 − 1) = (49^63 − 1^63)

By comparing it with (x^n − a^n) we can write,

x = 49, n = 63 and a = 1

Therefore, from the above we get that (49^63 − 1^63) is divisible by (49 − 1). So (49^63 − 1) is divisible by 48. Let (49^63 − 1)/48 = q (where q is the quotient).
49^63 − 1 = 48 × q
49^63 = 48 × q + 1
7^126 = 48 × q + 1
Comparing with:
Dividend = divisor * quotient + remainder
So from the above, when the dividend = 7^126 and the divisor = 48, the remainder is 1. So when 7^126 is divided by 48, the remainder is 1.
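If you want to double-check the answer programmatically, Python's three-argument pow performs modular exponentiation without ever building the full number:

```python
# Computes (7**126) % 48 by fast modular exponentiation,
# never materializing the 107-digit value of 7**126
remainder = pow(7, 126, 48)
print(remainder)  # 1
```

This agrees with the hand derivation: 7^2 = 49 ≡ 1 (mod 48), so 7^126 = (7^2)^63 ≡ 1^63 = 1.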
In this way we can obtain the remainder for such large numbers. It takes very little time and is very useful in competitive exams.
BERT to the rescue!. A step-by-step tutorial on simple text... | by Dima Shulga | Towards Data Science

In this post, I want to show how to apply BERT to a simple text classification problem. I assume that you’re more or less familiar with what BERT is on a high level, and focus more on the practical side by showing you how to utilize it in your work. Roughly speaking, BERT is a model that knows to represent text. You give it some sequence as an input, it then looks left and right several times and produces a vector representation for each word as the output. In their paper, the authors describe two ways to work with BERT, one as with “feature extraction” mechanism. That is, we use the final output of BERT as an input to another model. This way we’re “extracting” features from text using BERT and then use it in a separate model for the actual task in hand. The other way is by “fine-tuning” BERT. That is, we add additional layer/s on top of BERT and then train the whole thing together. This way, we train our additional layer/s and also change (fine-tune) the BERTs weights. Here I want to show the second method and present a step-by-step solution to a very simple and popular text classification task — IMDB Movie reviews sentiment classification. This task may be not the hardest task to solve and applying BERT to it might be slightly overkilling, but most of the steps shown here are the same for almost every task, no matter how complex it is.
Before diving into the actual code, let’s understand the general structure of BERT and what we need to do to use it in a classification task. As mentioned before, generally, the input to BERT is a sequence of words, and the output is a sequence of vectors. BERT allows us to perform different tasks based on its output. So for different task type, we need to change the input and/or the output slightly. In the figure below, you can see 4 different task types, for each task type, we can see what should be the input and the output of the model.
You can see that for the input, there’s always a special [CLS] token (stands for classification) at the start of each sequence and a special [SEP] token that separates two parts of the input.
For the output, if we’re interested in classification, we need to use the output of the first token (the [CLS] token). For more complicated outputs, we can use all the other tokens output.
We are interested in “Single Sentence Classification” (top right), so we’ll add the special [CLS] token and use its output as an input to a linear layer followed by sigmoid activation, that performs the actual classification.
Now let’s understand the task in hand: given a movie review, predict whether it’s positive or negative. The dataset we use is 50,000 IMDB reviews (25K for train and 25K for test) from the PyTorch-NLP library. Each review is tagged pos or neg . There are 50% positive reviews and 50% negative reviews both in train and test sets.
You can find all the code in this notebook.
We load the data using the pytorch-nlp library:
train_data, test_data = imdb_dataset(train=True, test=True)
Each instance in this dataset is a dictionary with 2 fields: text and sentiment
{ 'sentiment': 'pos', 'text': 'Having enjoyed Joyces complex nove...'}
We create two variables for each set, one for texts and one for the labels:
train_texts, train_labels = list(zip(*map(lambda d: (d['text'], d['sentiment']), train_data)))
test_texts, test_labels = list(zip(*map(lambda d: (d['text'], d['sentiment']), test_data)))
Next, we need to tokenize our texts. BERT was trained using the WordPiece tokenization. It means that a word can be broken down into more than one sub-words. For example, if I tokenize the sentence “Hi, my name is Dima” I’ll get:
tokenizer.tokenize('Hi my name is Dima')

# OUTPUT
['hi', 'my', 'name', 'is', 'dim', '##a']
This kind of tokenization is beneficial when dealing with out-of-vocabulary words, and it may help better represent complicated words. The sub-words are constructed during training time and depend on the corpus the model was trained on. We could use any other tokenization technique of course, but we’ll get the best results if we tokenize with the same tokenizer the BERT model was trained on. The PyTorch-Pretrained-BERT library provides us with a tokenizer for each of the BERT models. Here we use the basic bert-base-uncased model; there are several other models, including much larger ones. The maximum sequence size for BERT is 512, so we’ll truncate any review that is longer than this.
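To make the mechanism concrete, here is a toy greedy longest-match-first tokenizer in plain Python. The vocabulary is made up for illustration; this is not WordPiece training or BERT's real vocabulary, only the matching rule:

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first sub-word split, WordPiece-style."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation marker
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no sub-word matches: the whole word is unknown
        pieces.append(piece)
        start = end
    return pieces

vocab = {"dim", "##a", "hi", "name"}
print(wordpiece_tokenize("dima", vocab))  # ['dim', '##a']
```

At each position it takes the longest vocabulary entry that matches, which is why "dima" splits into "dim" followed by the continuation piece "##a".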
The code below creates the tokenizer, tokenizes each review, adds the special [CLS] token, and then takes only the first 512 tokens for both train and test sets:
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

train_tokens = list(map(lambda t: ['[CLS]'] + tokenizer.tokenize(t)[:511], train_texts))
test_tokens = list(map(lambda t: ['[CLS]'] + tokenizer.tokenize(t)[:511], test_texts))
Next, we need to convert each token in each review to an id as present in the tokenizer vocabulary. If there’s a token that is not present in the vocabulary, the tokenizer will use the special [UNK] token and use its id:
train_tokens_ids = list(map(tokenizer.convert_tokens_to_ids, train_tokens))
test_tokens_ids = list(map(tokenizer.convert_tokens_to_ids, test_tokens))
Finally, we need to pad our input so it will have the same size of 512. It means that for any review that is shorter than 512 tokens, we’ll add zeros to reach 512 tokens:
train_tokens_ids = pad_sequences(train_tokens_ids, maxlen=512, truncating="post", padding="post", dtype="int")
test_tokens_ids = pad_sequences(test_tokens_ids, maxlen=512, truncating="post", padding="post", dtype="int")
Our target variable is currently a list of neg and pos strings. We’ll convert it to numpy arrays of booleans:
train_y = np.array(train_labels) == 'pos'
test_y = np.array(test_labels) == 'pos'
We’ll use PyTorch and the excellent PyTorch-Pretrained-BERT library for the model building. Actually, there’s a very similar model already implemented in this library and we could’ve used this one. For this post, I want to implement it myself so we can better understand what’s going on.
Before we create our model, let’s see how we can use the BERT model as implemented in the PyTorch-Pretrained-BERT library:
bert = BertModel.from_pretrained('bert-base-uncased')
x = torch.tensor(train_tokens_ids[:3])
y, pooled = bert(x, output_all_encoded_layers=False)
print('x shape:', x.shape)
print('y shape:', y.shape)
print('pooled shape:', pooled.shape)
# OUTPUT
x shape: (3, 512)
y shape: (3, 512, 768)
pooled shape: (3, 768)
First, we create the BERT model, then we create a PyTorch tensor with the first 3 reviews from our training set and pass it to the model. The output is two variables. Let's understand all the shapes: x is of size (3, 512) — we took only 3 reviews, 512 tokens each. y is of size (3, 512, 768) — this is BERT's final-layer output for each token (we could pass output_all_encoded_layers=True to get the output of all 12 layers). Each token in each review is represented by a vector of size 768. pooled is of size (3, 768) — this is the output for our [CLS] token, the first token in each sequence.
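The shape bookkeeping above can be mimicked with plain NumPy — a toy sketch where random arrays stand in for the real BERT outputs (and pooled is simplified to the [CLS] slice; in actual BERT the pooled output additionally passes through a dense layer with tanh):

```python
import numpy as np

batch, seq_len, hidden = 3, 512, 768  # 3 reviews, 512 tokens, BERT-base hidden size

# y: one 768-dim vector per token per review
y = np.random.rand(batch, seq_len, hidden)

# pooled: one 768-dim vector per review, derived from the first ([CLS]) token
# (simplification — real BERT also applies a dense + tanh layer here)
pooled = y[:, 0, :]

print(y.shape)       # (3, 512, 768)
print(pooled.shape)  # (3, 768)
```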
Our goal is to take BERT’s pooled output, then apply a linear layer and a sigmoid activation. Here’s what our model looks like:
class BertBinaryClassifier(nn.Module):
    def __init__(self, dropout=0.1):
        super(BertBinaryClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(768, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, tokens):
        _, pooled_output = self.bert(tokens, output_all_encoded_layers=False)
        dropout_output = self.dropout(pooled_output)
        linear_output = self.linear(dropout_output)
        proba = self.sigmoid(linear_output)
        return proba
Every model in PyTorch is an nn.Module object, which means that every model we build must provide two methods. The __init__ method declares all the different parts the model will use. In our case, we create the BERT model that we’ll fine-tune, the Linear layer, and the Sigmoid activation. The forward method is the actual code that runs during the forward pass (like the predict method in sklearn or Keras). Here we take the tokens input and pass it to the BERT model. The output of BERT is two variables, as we have seen before; we use only the second one (the _ name is used to emphasize that the first variable is not used). We take the pooled output and pass it to the linear layer. Finally, we use the Sigmoid activation to produce the actual probability.
The training is pretty standard. First, we prepare our tensors and data loaders:
train_tokens_tensor = torch.tensor(train_tokens_ids)
train_y_tensor = torch.tensor(train_y.reshape(-1, 1)).float()
test_tokens_tensor = torch.tensor(test_tokens_ids)
test_y_tensor = torch.tensor(test_y.reshape(-1, 1)).float()

train_dataset = TensorDataset(train_tokens_tensor, train_y_tensor)
train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=BATCH_SIZE)

test_dataset = TensorDataset(test_tokens_tensor, test_y_tensor)
test_sampler = SequentialSampler(test_dataset)
test_dataloader = DataLoader(test_dataset, sampler=test_sampler, batch_size=BATCH_SIZE)
We’ll use the Adam optimizer and the binary cross-entropy loss (BCELoss), and train the model for 10 epochs:
bert_clf = BertBinaryClassifier()
bert_clf = bert_clf.cuda()
optimizer = Adam(bert_clf.parameters(), lr=3e-6)
bert_clf.train()

for epoch_num in range(EPOCHS):
    for step_num, batch_data in enumerate(train_dataloader):
        token_ids, labels = tuple(t.to(device) for t in batch_data)
        probas = bert_clf(token_ids)
        loss_func = nn.BCELoss()
        batch_loss = loss_func(probas, labels)
        bert_clf.zero_grad()
        batch_loss.backward()
        optimizer.step()
For those who’re not familiar with PyTorch, let’s go over the code step by step.
First, we create the BertBinaryClassifier as we defined above. We move it to the GPU by applying bert_clf.cuda() . We create the Adam optimizer with our model parameters (that the optimizer will update) and a learning rate I found worked well.
For each step in each epoch, we do the following:
Move our tensors to GPU by applying .to(device)
bert_clf(token_ids) gives us the probabilities (forward pass)
Calculate the loss with loss_func(probas, labels)
Zero the gradients from the previous step
Calculate and propagate the new gradients by batch_loss.backward()
Update the model parameters with respect to the gradients by optimizer.step()
After 10 epochs, I got pretty good results.
Conclusion
BERT is a very powerful model and can be applied to many tasks. For me, it provided some very good results on tasks that I work on. I hope that this post helped you better understand the practical aspects of working with BERT. As mentioned before, you can find the code in this notebook.
HTML - Blocks

All the HTML elements can be categorized into two categories: (a) Block Level Elements and (b) Inline Elements.
Block elements appear on the screen as if they have a line break before and after them. For example, the <p>, <h1>, <h2>, <h3>, <h4>, <h5>, <h6>, <ul>, <ol>, <dl>, <pre>, <hr />, <blockquote>, and <address> elements are all block level elements. They all start on their own new line, and anything that follows them appears on its own new line.
Inline elements, on the other hand, can appear within sentences and do not have to appear on a new line of their own. The <b>, <i>, <u>, <em>, <strong>, <sup>, <sub>, <big>, <small>, <li>, <ins>, <del>, <code>, <cite>, <dfn>, <kbd>, and <var> elements are all inline elements.
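The two categories can be expressed as simple lookup sets — a hypothetical Python helper, not part of the original tutorial; the sets contain exactly the tags listed in the two paragraphs above:

```python
# Tags from the block-level and inline lists above
BLOCK_ELEMENTS = {"p", "h1", "h2", "h3", "h4", "h5", "h6", "ul", "ol",
                  "dl", "pre", "hr", "blockquote", "address"}
INLINE_ELEMENTS = {"b", "i", "u", "em", "strong", "sup", "sub", "big", "small",
                   "li", "ins", "del", "code", "cite", "dfn", "kbd", "var"}

def display_type(tag: str) -> str:
    """Classify a tag name as 'block', 'inline', or 'unknown' (not in either list)."""
    name = tag.lower().strip("<>/ ")
    if name in BLOCK_ELEMENTS:
        return "block"
    if name in INLINE_ELEMENTS:
        return "inline"
    return "unknown"

print(display_type("<p>"))   # block
print(display_type("em"))    # inline
```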
There are two important tags which we use very frequently to group various other HTML tags (i) <div> tag and (ii) <span> tag
This is a very important block-level tag which plays a big role in grouping various other HTML tags and applying CSS to a group of elements. Even now, the <div> tag can be used to create a webpage layout where we define different parts (left, right, top, etc.) of the page using <div> tags. This tag does not provide any visual change on the block, but it has more meaning when used with CSS.
Following is a simple example of <div> tag. We will learn Cascading Style Sheet (CSS) in a separate chapter but we used it here to show the usage of <div> tag −
<!DOCTYPE html>
<html>
<head>
<title>HTML div Tag</title>
</head>
<body>
<!-- First group of tags -->
<div style = "color:red">
<h4>This is first group</h4>
<p>Following is a list of vegetables</p>
<ul>
<li>Beetroot</li>
<li>Ginger</li>
<li>Potato</li>
<li>Radish</li>
</ul>
</div>
<!-- Second group of tags -->
<div style = "color:green">
<h4>This is second group</h4>
<p>Following is a list of fruits</p>
<ul>
<li>Apple</li>
<li>Banana</li>
<li>Mango</li>
<li>Strawberry</li>
</ul>
</div>
</body>
</html>
This will produce the following result −
Following is a list of vegetables
Beetroot
Ginger
Potato
Radish
Following is a list of fruits
Apple
Banana
Mango
Strawberry
The HTML <span> is an inline element and it can be used to group inline elements in an HTML document. This tag also does not provide any visual change on the block, but it has more meaning when used with CSS.
The difference between the <span> tag and the <div> tag is that the <span> tag is used with inline elements whereas the <div> tag is used with block-level elements.
Following is a simple example of <span> tag. We will learn Cascading Style Sheet (CSS) in a separate chapter but we used it here to show the usage of <span> tag −
<!DOCTYPE html>
<html>
<head>
<title>HTML span Tag</title>
</head>
<body>
<p>This is <span style = "color:red">red</span> and this is
<span style = "color:green">green</span></p>
</body>
</html>
This will produce the following result −
This is red and this is green
3 Types of Contextualized Word Embeddings Using BERT | by Arushi Prakash | Medium | Towards Data Science

Since Google launched the BERT model in 2018, the model and its capabilities have captured the imagination of data scientists in many areas. The model has been adapted to different domains, like SciBERT for scientific texts, bioBERT for biomedical texts, and clinicalBERT for clinical texts. The lofty model, with 110 million parameters, has also been compressed for easier use as ALBERT (90% compression) and DistilBERT (40% compression). The original BERT model and its adaptations have been used for improving the performance of search engines, content moderation, sentiment analysis, named entity recognition, and more.
In this article, I will demonstrate three ways to get contextualized word embeddings from BERT using Python, PyTorch, and transformers.
The article is split into these sections:
What is transfer learning?
How have BERT embeddings been used for transfer learning?
Setting up PyTorch to get BERT embeddings
Extracting word embeddings (“Context-free” pre-trained embedding, “Context-based” pre-trained embedding, “Context-averaged” pre-trained embedding)
Conclusion
In transfer learning, knowledge embedded in a pre-trained machine learning model is used as a starting point to build models for a different task. Transfer learning applications have exploded in the fields of computer vision and natural language processing because it requires significantly less data and computational resources to develop useful models. It has been termed the next frontier in machine learning.
BERT has been used for transfer learning in several natural language processing applications. Recent examples include detecting hate speech, classifying health-related tweets, and sentiment analysis in the Bengali language.
Check out my Jupyter notebook for the full code
# Importing the relevant modules
from transformers import BertTokenizer, BertModel
import pandas as pd
import numpy as np
import torch

# Loading the pre-trained BERT model
# Embeddings will be derived from
# the outputs of this model
model = BertModel.from_pretrained('bert-base-uncased',
                                  output_hidden_states=True,)

# Setting up the tokenizer
# This is the same tokenizer that
# was used in the model to generate
# embeddings to ensure consistency
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
We also need some functions to massage the input into the right form
def bert_text_preparation(text, tokenizer):
    """Preparing the input for BERT

    Takes a string argument and performs
    pre-processing like adding special tokens,
    tokenization, tokens to ids, and tokens to
    segment ids. All tokens are mapped to segment id = 1.

    Args:
        text (str): Text to be converted
        tokenizer (obj): Tokenizer object to convert text
            into BERT-readable tokens and ids

    Returns:
        list: List of BERT-readable tokens
        obj: Torch tensor with token ids
        obj: Torch tensor segment ids
    """
    marked_text = "[CLS] " + text + " [SEP]"
    tokenized_text = tokenizer.tokenize(marked_text)
    indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
    segments_ids = [1] * len(indexed_tokens)

    # Convert inputs to PyTorch tensors
    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensors = torch.tensor([segments_ids])

    return tokenized_text, tokens_tensor, segments_tensors
And another function to convert the input into embeddings
def get_bert_embeddings(tokens_tensor, segments_tensors, model):
    """Get embeddings from an embedding model

    Args:
        tokens_tensor (obj): Torch tensor size [n_tokens]
            with token ids for each token in text
        segments_tensors (obj): Torch tensor size [n_tokens]
            with segment ids for each token in text
        model (obj): Embedding model to generate embeddings
            from token and segment ids

    Returns:
        list: List of list of floats of size
            [n_tokens, n_embedding_dimensions]
            containing embeddings for each token
    """
    # Gradient calculation is disabled
    # Model is in inference mode
    with torch.no_grad():
        outputs = model(tokens_tensor, segments_tensors)
        # Removing the first hidden state
        # The first state is the input state
        hidden_states = outputs[2][1:]

    # Getting embeddings from the final BERT layer
    token_embeddings = hidden_states[-1]
    # Collapsing the tensor into 1 dimension
    token_embeddings = torch.squeeze(token_embeddings, dim=0)
    # Converting torch tensors to lists
    list_token_embeddings = [token_embed.tolist() for token_embed in token_embeddings]

    return list_token_embeddings
We are going to generate embeddings for the following texts
# Text corpus
# These sentences show the different
# forms of the word 'bank' to show the
# value of contextualized embeddings
texts = ["bank",
         "The river bank was flooded.",
         "The bank vault was robust.",
         "He had to bank on her for support.",
         "The bank was out of money.",
         "The bank teller was a man."]
Embeddings are generated in the following manner
# Getting embeddings for the target
# word in all given contexts
target_word_embeddings = []

for text in texts:
    tokenized_text, tokens_tensor, segments_tensors = bert_text_preparation(text, tokenizer)
    list_token_embeddings = get_bert_embeddings(tokens_tensor, segments_tensors, model)

    # Find the position of 'bank' in the list of tokens
    word_index = tokenized_text.index('bank')
    # Get the embedding for 'bank'
    word_embedding = list_token_embeddings[word_index]

    target_word_embeddings.append(word_embedding)
Finally, distances between the embeddings for the word bank in different contexts are calculated using this code
from scipy.spatial.distance import cosine

# Calculating the distance between the
# embeddings of 'bank' in all the
# given contexts of the word
list_of_distances = []
for text1, embed1 in zip(texts, target_word_embeddings):
    for text2, embed2 in zip(texts, target_word_embeddings):
        cos_dist = 1 - cosine(embed1, embed2)
        list_of_distances.append([text1, text2, cos_dist])

distances_df = pd.DataFrame(list_of_distances, columns=['text1', 'text2', 'distance'])
We create a Pandas DataFrame to store all the distances.
Check out my Jupyter notebook for the full code
The first text (“bank”) generates a context-free text embedding. This is context-free since there are no accompanying words to provide context to the meaning of “bank”. In a way, this is the average across all embeddings of the word “bank”.
Understandably, this context-free embedding does not look like one usage of the word “bank”. This is evident in the cosine distance between the context-free embedding and all other versions of the word.
Embeddings generated for the word “bank” from each sentence with the word create a context-based embedding. These embeddings are the most common form of transfer learning and show the true power of the method.
In this example, the embeddings for the word “bank” when it means a financial institution are far from the embeddings for it when it means a riverbank or the verb form of the word.
When all the embeddings are averaged together, they create a context-averaged embedding. This style of embedding might be useful in some applications where one needs to get the average meaning of the word.
Surprisingly, the context-free and context-averaged versions of the word are not the same as shown by the cosine distance of 0.65 between them.
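The averaging step behind the context-averaged embedding can be sketched with NumPy. This is a toy illustration, not the article's code: 3-dimensional made-up vectors stand in for the 768-dimensional BERT embeddings of "bank":

```python
import numpy as np
from numpy.linalg import norm

# Toy stand-ins for per-sentence embeddings of "bank" (hypothetical values)
context_embeddings = np.array([
    [0.9, 0.1, 0.0],   # e.g. river bank
    [0.1, 0.9, 0.0],   # e.g. financial bank
    [0.0, 0.2, 0.8],   # e.g. the verb "to bank on"
])

# Context-averaged embedding: mean over all context-based embeddings
context_averaged = context_embeddings.mean(axis=0)

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (norm(a) * norm(b)))

# Each individual context is only partially similar to the average
for vec in context_embeddings:
    print(round(cosine_similarity(vec, context_averaged), 3))
```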
Transfer learning methods can bring value to natural language processing projects. In this article, I demonstrated a version of transfer learning by generating contextualized BERT embeddings for the word “bank” in varying contexts. I also showed how to extract three types of word embeddings — context-free, context-based, and context-averaged. It is important to understand the distinction between these embeddings and use the right one for your application.
Check out my Jupyter notebook for the full code.
Dart Programming - Boolean

Dart provides built-in support for the Boolean data type. The Boolean data type in Dart supports only two values – true and false. The keyword bool is used to represent a Boolean literal in Dart.
The syntax for declaring a Boolean variable in DART is as given below −
bool var_name = true;
OR
bool var_name = false
void main() {
bool test;
test = 12 > 5;
print(test);
}
It will produce the following output −
true
Unlike JavaScript, the Boolean data type recognizes only the literal true as true. Any other value is considered as false. Consider the following example −
var str = 'abc';
if(str) {
print('String is not empty');
} else {
print('Empty String');
}
The above snippet, if run in JavaScript, will print the message ‘String is not empty’ as the if construct will return true if the string is not empty.
However, in Dart, str is converted to false as str != true. Hence the snippet will print the message ‘Empty String’ (when run in unchecked mode).
The above snippet if run in checked mode will throw an exception. The same is illustrated below −
void main() {
var str = 'abc';
if(str) {
print('String is not empty');
} else {
print('Empty String');
}
}
It will produce the following output, in Checked Mode −
Unhandled exception:
type 'String' is not a subtype of type 'bool' of 'boolean expression' where
String is from dart:core
bool is from dart:core
#0 main (file:///D:/Demos/Boolean.dart:5:6)
#1 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:261)
#2 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:148)
It will produce the following output, in Unchecked Mode −
Empty String
Note − The WebStorm IDE runs in checked mode, by default.
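For comparison, Python's truthiness rules follow the JavaScript convention rather than Dart's: any non-empty string counts as true in a condition. A quick sketch (Python, not Dart):

```python
s = 'abc'

# In Python, as in JavaScript, a non-empty string is truthy,
# so this takes the first branch — unlike Dart, which accepts
# only the literal true (or throws in checked mode).
if s:
    print('String is not empty')
else:
    print('Empty String')
# → String is not empty
```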
How to set country code to column values with phone numbers in MySQL?

Setting a country code on phone numbers means concatenating it with each value; you can use CONCAT() for this.
Let us first create a table −
mysql> create table DemoTable769 (MobileNumber varchar(100));
Query OK, 0 rows affected (0.54 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable769 values('8799432434');
Query OK, 1 row affected (0.24 sec)
mysql> insert into DemoTable769 values('9899996778');
Query OK, 1 row affected (0.15 sec)
mysql> insert into DemoTable769 values('7890908989');
Query OK, 1 row affected (0.21 sec)
mysql> insert into DemoTable769 values('9090898987');
Query OK, 1 row affected (0.20 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable769;
This will produce the following output -
+--------------+
| MobileNumber |
+--------------+
| 8799432434 |
| 9899996778 |
| 7890908989 |
| 9090898987 |
+--------------+
4 rows in set (0.00 sec)
Following is the query to set country code to column values with phone numbers in MySQL −
mysql> update DemoTable769
set MobileNumber=concat('+91',MobileNumber);
Query OK, 4 rows affected (0.20 sec)
Rows matched: 4 Changed: 4 Warnings: 0
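The same prefixing logic, sketched in Python for readers who want to sanity-check the result off-line (a hypothetical helper, not part of the MySQL session):

```python
def add_country_code(numbers, code="+91"):
    """Prefix each phone number with a country code, mirroring CONCAT('+91', MobileNumber)."""
    return [code + n for n in numbers]

numbers = ["8799432434", "9899996778", "7890908989", "9090898987"]
print(add_country_code(numbers))
# → ['+918799432434', '+919899996778', '+917890908989', '+919090898987']
```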
Let us display the updated records −
mysql> select *from DemoTable769;
This will produce the following output -
+---------------+
| MobileNumber |
+---------------+
| +918799432434 |
| +919899996778 |
| +917890908989 |
| +919090898987 |
+---------------+
4 rows in set (0.00 sec)
Flutter - Screenshot Package - GeeksforGeeks

12 Jan, 2022
Flutter is a popular framework by Google which is growing fast along with its community. Flutter has created a buzz through its libraries, making development fast-paced.
Nowadays, everyone loves to take screenshots. If your application involves the use of screenshots, Flutter has a package for it. It is also very helpful in the testing and debugging process: fast-changing data screens can be captured through screenshots, and doing this manually is a boring, time-wasting task. The screenshot package automates capturing the widgets you want and storing them somewhere. If you want your user to capture only certain widgets of the screen rather than the entire screen, this package helps with that too. In this article, we will implement the screenshot package in Flutter.
Follow the article to see how to do screenshot work in Flutter.
Step 1: Add the following dependency in your pubspec.yaml file.
Add the given dependency in pubspec.yaml file.
YAML

dependencies:
  screenshot: ^1.2.3
Now click on the pub get to configure it. Or add dependency directly to the pubspec.yaml from the terminal by writing the below code in the terminal.
flutter pub add screenshot
Step 2: Import the library.
Dart
import 'package:screenshot/screenshot.dart';
Step 3: Navigate to main.dart.
First, move to the main.dart and modify the main function. When you create a Flutter app, some lines of code are already written for you. We are going to keep it. Remove the stateless widget MyHomePage from the code, and keep only the below-shown code. Then give our first screen of the app in the home: HomePage().
Dart
import 'package:flutter/material.dart';
import 'home.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.green,
      ),
      home: HomePage(),
    );
  }
}
Step 4: Declare StatefulWidget() for HomePage() class
Create another dart file home.dart where we will create a stateful HomePage() class. In that HomePage() class, we have given screenshotController. After that, we have declared Scaffold() in which we have declared appbar that consists of the title of the app – “Screenshot Demo App”. In the body section, we have declared the Screenshot widget that takes screenshotController as a parameter wrapped with the center widget.
We have created two ElevatedButton, one shows the decreasing timer and the other is an increasing timer. We can take a screenshot of both the buttons by pressing another button that shows Capture above Widget. This will show the captured widget on a different screen. Remember, we need to wrap all the widgets inside the Screenshot widget whose screenshot you want. We have wrapped both the timers as well as their respective texts inside the Screenside widget. At the respective particular values, both timers will stop, and by clicking the refresh button their values will be reset.
Sometimes, it takes time to load the widgets on the screen, and they are invisible until they are not on the screen. But with this library, we can even capture them. To show that, we have created an invisible widget, that is captured when another button that shows Capture An Invisible Widget is pressed. This will show the invisible widget on another screen.
Dart
import 'dart:async';
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:screenshot/screenshot.dart';

class HomePage extends StatefulWidget {
  @override
  _HomePageState createState() => _HomePageState();
}

class _HomePageState extends State<HomePage> {
  // Create an instance of ScreenshotController
  ScreenshotController screenshotController = ScreenshotController();

  @override
  void initState() {
    super.initState();
  }

  // create a variable of type Timer
  late Timer _timer;
  int _start = 0;
  int _startTwo = 61;

  // function to increment the timer until
  // 61 and set the state
  void increasingStartTimer() {
    const oneSec = const Duration(seconds: 1);
    _timer = new Timer.periodic(
      oneSec,
      (Timer timer) => setState(
        () {
          if (_start > 60) {
            timer.cancel();
          } else {
            _start = _start + 1;
          }
        },
      ),
    );
  }

  // function to decrease the timer
  // until 1 and set the state
  void decreasingStartTimer() {
    const oneSec = const Duration(seconds: 1);
    _timer = new Timer.periodic(
      oneSec,
      (Timer timer) => setState(
        () {
          if (_startTwo < 0) {
            timer.cancel();
          } else {
            _startTwo = _startTwo - 1;
          }
        },
      ),
    );
  }

  @override
  void dispose() {
    _timer.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("GeeksForGeeks"),
        centerTitle: true,
      ),
      body: Center(
        child: Column(
          children: [
            SizedBox(height: 30),
            Screenshot(
              controller: screenshotController,
              child: Column(
                children: [
                  Text("Decreasing Timer : "),
                  SizedBox(height: 10),
                  Container(
                      padding: const EdgeInsets.all(30.0),
                      decoration: BoxDecoration(
                        border: Border.all(color: Colors.blueAccent, width: 5.0),
                        color: Colors.amberAccent,
                      ),
                      child: Text(_startTwo.toString())),
                  SizedBox(height: 25),
                  Text("Increasing Timer : "),
                  SizedBox(height: 10),
                  Container(
                      padding: const EdgeInsets.all(30.0),
                      decoration: BoxDecoration(
                        border: Border.all(color: Colors.blueAccent, width: 5.0),
                        color: Colors.amberAccent,
                      ),
                      child: Text("$_start")),
                ],
              ),
            ),
            ElevatedButton(
              onPressed: () {
                // invoking both functions for timer to start
                increasingStartTimer();
                decreasingStartTimer();
              },
              child: Text("start"),
            ),
            ElevatedButton(
                onPressed: () {
                  setState(() {
                    _start = 0;
                    _startTwo = 61;
                  });
                },
                child: Text("Refresh")),
            ElevatedButton(
              child: Text('Capture Above Widget'),
              onPressed: () {
                // invoking capture on screenshotController
                screenshotController
                    .capture(delay: Duration(milliseconds: 10))
                    .then((capturedImage) async {
                  // showing the captured widget
                  // through ShowCapturedWidget
                  ShowCapturedWidget(context, capturedImage!);
                }).catchError((onError) {
                  print(onError);
                });
              },
            ),
            ElevatedButton(
              child: Text('Capture An Invisible Widget'),
              onPressed: () {
                var container = Container(
                    padding: const EdgeInsets.all(30.0),
                    decoration: BoxDecoration(
                      border: Border.all(color: Colors.blueAccent, width: 5.0),
                      color: Colors.pink,
                    ),
                    child: Text(
                      "This is an invisible widget",
                      style: Theme.of(context).textTheme.headline6,
                    ));
                // capturing all the widgets
                // that are invisible
                screenshotController
                    .captureFromWidget(
                        InheritedTheme.captureAll(
                            context, Material(child: container)),
                        delay: Duration(seconds: 1))
                    .then((capturedImage) {
                  // showing the captured invisible widgets
                  ShowCapturedWidget(context, capturedImage);
                });
              },
            ),
          ],
        ),
      ),
    );
  }

  // function to show captured widget
  Future<dynamic> ShowCapturedWidget(
      BuildContext context, Uint8List capturedImage) {
    return showDialog(
      useSafeArea: false,
      context: context,
      builder: (context) => Scaffold(
        appBar: AppBar(
          title: Text("Captured widget screenshot"),
        ),
        body: Center(
            child: capturedImage != null
                ? Image.memory(capturedImage)
                : Container()),
      ),
    );
  }
}
The captured screenshots:
To save screenshots in the gallery, we need to write additional code for them in the previously shown code. We will be using a package – image_gallery_saver for this purpose. Add below dependency in pubspec.yaml file.
YAML

dependencies:
  image_gallery_saver: '^1.7.1'
Now, run pub get to configure it, and we need to import the library in our home.dart file.
Dart
import 'package:image_gallery_saver/image_gallery_saver.dart';
Now, we need to create a function to which we will pass captured images to save to the Gallery.
Dart
_saved(Uint8List image) async {
  final result = await ImageGallerySaver.saveImage(image);
  print("File Saved to Gallery");
}
Here, we created an asynchronous function that takes Uint8List type data as input. We can save images as files or bytes but we need to convert them to a specific type. Since the screenshots captured are in bytes format, we are using the saveImage function to save the screenshot. Now, we need to call the function, we will be calling this function each time we capture a screenshot, both of a visible widget and an invisible widget. See the complete code of the home.dart below.
Complete Source Code:
Dart
import 'dart:async';import 'dart:typed_data';import 'package:image_gallery_saver/image_gallery_saver.dart';import 'package:flutter/material.dart';import 'package:screenshot/screenshot.dart'; class HomePage extends StatefulWidget { @override _HomePageState createState() => _HomePageState();} class _HomePageState extends State<HomePage> { // Create an instance of ScreenshotController ScreenshotController screenshotController = ScreenshotController(); @override void initState() { super.initState(); } late Timer _timer; int _start = 0; int _startTwo = 61; void increasingStartTimer() { const oneSec = const Duration(seconds: 1); _timer = new Timer.periodic( oneSec, (Timer timer) => setState( () { if (_start > 60) { timer.cancel(); } else { _start = _start + 1; } }, ), ); } void decreasingStartTimer() { const oneSec = const Duration(seconds: 1); _timer = new Timer.periodic( oneSec, (Timer timer) => setState( () { if (_startTwo < 0) { timer.cancel(); } else { _startTwo = _startTwo - 1; } }, ), ); } @override void dispose() { _timer.cancel(); super.dispose(); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text("Screenshot Demo App"), ), body: Center( child: Column( children: [ SizedBox(height: 30), Screenshot( controller: screenshotController, child: Column( children: [ Text("Decreasing Timer : "), SizedBox( height: 10, ), Container( padding: const EdgeInsets.all(30.0), decoration: BoxDecoration( border: Border.all(color: Colors.blueAccent, width: 5.0), color: Colors.amberAccent, ), child: Text(_startTwo.toString())), SizedBox( height: 25, ), Text("Increasing Timer : "), SizedBox( height: 10, ), Container( padding: const EdgeInsets.all(30.0), decoration: BoxDecoration( border: Border.all(color: Colors.blueAccent, width: 5.0), color: Colors.amberAccent, ), child: Text("$_start")), ], ), ), ElevatedButton( onPressed: () { increasingStartTimer(); decreasingStartTimer(); }, child: Text("start"), ), ElevatedButton( onPressed: () { 
setState(() { _start = 0; _startTwo = 61; }); }, child: Text("Refresh")), ElevatedButton( child: Text( 'Capture Above Widget', ), onPressed: () { screenshotController .capture(delay: Duration(milliseconds: 10)) .then((capturedImage) async { ShowCapturedWidget(context, capturedImage!); _saved(capturedImage); }).catchError((onError) { print(onError); }); }, ), ElevatedButton( child: Text( 'Capture An Invisible Widget', ), onPressed: () { var container = Container( padding: const EdgeInsets.all(30.0), decoration: BoxDecoration( border: Border.all(color: Colors.blueAccent, width: 5.0), color: Colors.pink, ), child: Text( "This is an invisible widget", style: Theme.of(context).textTheme.headline6, )); screenshotController .captureFromWidget( InheritedTheme.captureAll( context, Material(child: container)), delay: Duration(seconds: 1)) .then((capturedImage) { ShowCapturedWidget(context, capturedImage); _saved(capturedImage); }); }, ), ], ), ), ); } Future<dynamic> ShowCapturedWidget( BuildContext context, Uint8List capturedImage) { return showDialog( useSafeArea: false, context: context, builder: (context) => Scaffold( appBar: AppBar( title: Text("Captured widget screenshot"), ), body: Center( child: capturedImage != null ? Image.memory(capturedImage) : Container()), ), ); } _saved(image) async { final result = await ImageGallerySaver.saveImage(image); print("File Saved to Gallery"); } }
C# | Check if two StringCollection objects are equal - GeeksforGeeks | 01 Feb, 2019
The Equals(Object) method, which is inherited from the Object class, is used to check whether a specified StringCollection object is equal to another StringCollection object.
Syntax:
public virtual bool Equals (object obj);
Here, obj is the object which is to be compared with the current object.
Return Value: This method returns true if the specified object is equal to the current object; otherwise it returns false.
Below programs illustrate the use of above-discussed method:
Example 1:
// C# code to check if two
// StringCollections are equal or not
using System;
using System.Collections.Specialized;

class GFG {

    // Driver code
    public static void Main()
    {
        // creating a StringCollection named myCol
        StringCollection myCol = new StringCollection();

        // Adding elements in StringCollection
        myCol.Add("A");
        myCol.Add("B");
        myCol.Add("C");
        myCol.Add("D");
        myCol.Add("E");

        // Checking whether myCol is
        // equal to itself or not
        Console.WriteLine(myCol.Equals(myCol));
    }
}
True
Example 2:
// C# code to check if two
// StringCollections are equal or not
using System;
using System.Collections.Specialized;

class GFG {

    // Driver code
    public static void Main()
    {
        // creating a StringCollection named my1
        StringCollection my1 = new StringCollection();

        // Adding elements in StringCollection
        my1.Add("GFG");
        my1.Add("Noida");
        my1.Add("DS");
        my1.Add("Geeks");
        my1.Add("Classes");

        // Creating a StringCollection named my2
        StringCollection my2 = new StringCollection();
        my2.Add("Australia");
        my2.Add("Belgium");
        my2.Add("Netherlands");
        my2.Add("China");
        my2.Add("Russia");
        my2.Add("India");

        // Checking whether my1 is
        // equal to my2 or not
        Console.WriteLine(my1.Equals(my2));

        // Creating a new StringCollection
        StringCollection my3 = new StringCollection();

        // Assigning my2 to my3
        my3 = my2;

        // Checking whether my3 is
        // equal to my2 or not
        Console.WriteLine(my3.Equals(my2));
    }
}
False
True
Note: If the current instance is a reference type, the Equals(Object) method checks for reference equality.
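This reference-equality default is not unique to C#. As a cross-language aside, a Python class that defines no __eq__ compares by identity in exactly the same way (the StringCollection class below is a hypothetical stand-in for illustration, not the .NET type):

```python
class StringCollection:
    # No __eq__ is defined, so == falls back to identity comparison,
    # mirroring the reference equality of Equals(Object) described above
    def __init__(self, items):
        self.items = list(items)

a = StringCollection(["GFG", "Noida"])
b = StringCollection(["GFG", "Noida"])
c = a  # c references the same object as a

print(a == b)  # → False (distinct objects, despite equal contents)
print(a == c)  # → True  (same object)
```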
Colorize Images Using Deoldify - GeeksforGeeks | 10 Jan, 2022
Deoldify is a project used to colorize and restore old black-and-white images. It was developed by Jason Antic. Deoldify uses a GAN architecture to colorize the image: it contains a generator that adds the color and a critic (discriminator) whose goal is to criticize the coloring produced by the generator. It is trained with a special method the author calls NoGAN.
The author uses the following deep learning concepts in these models. These concepts are:
Self-attention: The authors use U-Net architecture for the generator, they also modified the architecture to use Spectral Normalization and self-attention in the model.
Two-Time Scale Update Rule: a way of training the GAN architecture: a one-to-one generator/critic setup with a higher learning rate for the critic. This is modified to incorporate a threshold critic loss that makes sure the critic is “caught up” before moving on to generator training, which is particularly useful for NoGAN training.
No-GAN: This method of GAN training is developed by the authors of the model. The main idea behind that model that you get the benefits of GAN training while spending minimal time doing direct GAN training. We will discuss NoGAN in more detail.
Generator Loss: There are two types of loss used during NoGAN learning in the generator:
Perceptual Loss: This loss is used in the generator to report and minimize the losses generated due to bias in the model.
Critic Loss: This is the loss used in the discriminator/critic.
This is a new type of GAN training developed by the authors of Deoldify. It provides the benefits of GAN training while spending minimal time doing direct GAN training; instead, most of the time is spent training the generator and critic separately with more straightforward, fast, and reliable conventional methods.
The steps are as follows:
First, we train the generator in a conventional way by itself with just the feature loss.
Next, we generate images from the trained generator and train the critic on distinguishing between those outputs and real images as a basic binary classifier.
Finally, train the generator and critic together in a GAN setting (starting right at the target size of 192px in this case).
All the important GAN training takes place in a very small fraction of the total time. There is an inflection point where the critic appears to have transferred all its useful knowledge to the generator, and there appears to be no productive training after that point. The hard part is finding the inflection point, and since the model is quite unstable, the author had to create a lot of checkpoints. Another key property of NoGAN is that you can repeat pre-training the critic on generated images after the initial GAN training, then repeat the GAN training itself in the same fashion.
There are 3 types of models that are trained by Deoldify:
Artistic: This model achieves the best results in terms of image coloration, details, and vibrancy. It uses a ResNet34 backbone in a U-Net architecture, with an emphasis on the depth of layers on the decoder side. The model has some drawbacks: it does not provide stability for common tasks such as natural scenes and portraits, and it takes a lot of time and parameter tuning to obtain the best results.
Stable: This model achieves the best results on landscapes and portraits. It gives human faces proper coloring instead of gray. It uses a ResNet101 backbone in a U-Net architecture, with an emphasis on the depth of layers on the decoder side. This model generally shows less odd miscoloration than the artistic model but is also less colorful.
Video: This model is optimized for smooth, consistent, and flicker-free video. It is the least colorful of the three models. The architecture is the same as that of the 'stable' model, but the training differs.
Python3
# Clone deoldify Repository
! git clone https://github.com/jantic/DeOldify.git DeOldify

# change directory to DeOldify Repo
cd DeOldify

# For Colab
! pip install -r colab_requirements.txt
# For Local Script
! pip install -r requirements.txt

# import pytorch library
import torch

# check for GPU
if not torch.cuda.is_available():
    print('GPU not available.')

# necessary imports
import fastai
from deoldify.visualize import *
import warnings
warnings.filterwarnings("ignore", category=UserWarning,
                        message=".*?Your .*? set is empty.*?")

# download the artistic model
!mkdir 'models'
!wget https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth -O ./models/ColorizeArtistic_gen.pth

# use the get image colorizer function with artistic model
colorizer = get_image_colorizer(artistic=True)

# Here, we provide the parameters such as source URL, render factor etc.
source_url = 'https://preview.redd.it/a702q2585j961.jpg?width=640' + '&crop=smart&auto=webp&s=a5f2523513bb24648737760369d2864eb1f57118'  #@param {type:"string"}
render_factor = 39  #@param {type: "slider", min: 7, max: 40}
watermarked = False  #@param {type:"boolean"}

if source_url is not None and source_url != '':
    image_path = colorizer.plot_transformed_image_from_url(url=source_url,
                                                           render_factor=render_factor,
                                                           compare=True,
                                                           watermarked=watermarked)
    show_image_in_notebook(image_path)
else:
    print('Provide the valid image URL.')
DeOldify Results (Original B/W Image Credit here)
DeOldify Stable Results
Deoldify GitHub
GS Lab interview Experience | Set 3 - GeeksforGeeks | 17 Apr, 2019
First Round
The first round was a written round conducted on HackerEarth. The first 5 questions were aptitude, the 2nd section was SQL queries, the 3rd section covered OS and data structures, and the final section had 2 coding questions. The 1st coding question was the “very cool number” problem:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    long int r, k;
    cin >> r >> k;

    long int count2 = 0;
    for (long int hh = 5; hh <= r; hh++) {
        // collect the binary digits of hh (least significant first)
        vector<int> bits;
        long int h = hh;
        while (h != 0) {
            bits.push_back(h % 2);
            h = h / 2;
        }

        // count occurrences of the pattern 1,0,1
        long int count = 0;
        for (size_t i = 0; i + 2 < bits.size(); i++) {
            if (bits[i] == 1 && bits[i + 1] == 0 && bits[i + 2] == 1)
                count++;
        }

        if (count >= k)
            count2++;
    }
    cout << count2 << endl;
    return 0;
}
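The write-up does not give the full problem statement, so this is a hedged restatement: assuming the task is to count how many numbers in [5, r] contain at least k occurrences of the bit pattern 101 in their binary representation (the pattern is a palindrome, so scanning the bits in either order gives the same count), a compact Python sketch would be:

```python
# Hypothetical restatement of the "very cool number" task: count how many
# numbers in [5, r] have at least k (possibly overlapping) occurrences of
# the substring "101" in their binary representation.
def count_cool_numbers(r, k):
    total = 0
    for n in range(5, r + 1):
        b = bin(n)[2:]
        hits = sum(1 for i in range(len(b) - 2) if b[i:i + 3] == "101")
        if hits >= k:
            total += 1
    return total

print(count_cool_numbers(25, 1))  # → 8
```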
The 2nd coding question was a pattern.
Second Round
In the 1st technical round they asked me about my project, which I had done using Tableau, and asked how I would visualize the data if I had to do it in SQL. They told me to draw the ER model of the tables, then asked some join queries and told me to explain how they work. Then they gave an instance: how can I get information about a family from the table, and how can I optimize it by adding a new table? Too many SQL queries.
NOTE: You have to Qualify each round to go to the next round.
Third Round
They asked me about my next project, which was based on home automation, and real-life case scenarios where it can be implemented. Then came questions about the operating system, more SQL and advanced SQL, and finally Java questions.
Note: You must be strong in SQL, OS, and networking.
HR Round
Be ready for some math and English questions through an Android app where you have to cross a score. Also be ready for technical questions in HR, plus questions about family and some case scenarios. HR was very friendly. The final results were declared after a month: 3 people were selected out of the 5 shortlisted, and I was one of them.
Write data structures in your CV only if you are really strong in them. Some questions they asked my friends were:
1. Merge point of a linked list
2. Balanced parentheses
3. Difference between JVM and JRE
4. BST
5. Searching for a repeated value in an array without sorting (can use any data structure)
6. TED-Ed puzzle questions
For cracking this company you need to practice from the Cracking the Coding Interview book and start practicing on GeeksforGeeks.
Print all leaf nodes of a binary tree from right to left in C++
In this problem, we are given a binary tree and we have to print all leaf nodes of the binary tree from right to left.
Let’s take an example to understand the problem
Input −
Output − 7 4 1
To solve this problem, we will have to traverse the binary tree. This traversal can be done in two ways −
Preorder traversal − This traversal uses recursion. Here, we visit the root, then the right subtree, and then the left subtree, so that leaves are reached from right to left. If we encounter a leaf node we print it; otherwise we explore the node's children to find the leaf nodes.
Program to show the implementation of our solution −
#include <iostream>
using namespace std;
struct Node {
int data;
struct Node *left, *right;
};
Node* insertNode(int data) {
Node* temp = new Node;
temp->data = data;
temp->left = temp->right = NULL;
return temp;
}
void findLeafNode(Node* root) {
if (!root)
return;
if (!root->left && !root->right) {
cout<<root->data<<"\t";
return;
}
if (root->right)
findLeafNode(root->right);
if (root->left)
findLeafNode(root->left);
}
int main() {
Node* root = insertNode(21);
root->left = insertNode(5);
root->right = insertNode(11);
root->left->left = insertNode(8);
root->left->right = insertNode(98);
root->right->left = insertNode(2);
   root->right->right = insertNode(18);
cout<<"Leaf nodes of the tree from right to left are:\n";
findLeafNode(root);
return 0;
}
Leaf nodes of the tree from right to left are −
18 2 98 8
Postorder Traversal − This approach finds the leaf nodes iteratively. We use a stack to store nodes and traverse the tree in a postorder-like manner (right subtree first, then left subtree, then root), printing the leaf nodes as they are reached.
Program to show the implementation of our solution −
#include<bits/stdc++.h>
using namespace std;
struct Node {
Node* left;
Node* right;
int data;
};
Node* insertNode(int key) {
Node* node = new Node();
node->left = node->right = NULL;
node->data = key;
return node;
}
void findLeafNode(Node* tree) {
stack<Node*> treeStack;
while (1) {
if (tree) {
treeStack.push(tree);
tree = tree->right;
} else {
if (treeStack.empty())
break;
else {
if (treeStack.top()->left == NULL) {
tree = treeStack.top();
treeStack.pop();
if (tree->right == NULL)
cout<<tree->data<<"\t";
}
while (tree == treeStack.top()->left) {
tree = treeStack.top();
treeStack.pop();
if (treeStack.empty())
break;
}
if (!treeStack.empty())
tree = treeStack.top()->left;
else
tree = NULL;
}
}
}
}
int main(){
Node* root = insertNode(21);
root->left = insertNode(5);
root->right = insertNode(11);
root->left->left = insertNode(8);
root->left->right = insertNode(98);
root->right->left = insertNode(2);
root->right->right = insertNode(18);
cout<<"Leaf nodes of the tree from right to left are:\n";
findLeafNode(root);
return 0;
}
Leaf nodes of the tree from right to left are −
18 2 98 8
Passing two dimensional array to a C++ function
C++ does not allow passing an entire array as an argument to a function. However, you can pass a pointer to an array by specifying the array's name without an index. There are three ways to pass a 2D array to a function −
Specify the size of columns of 2D array
void processArr(int a[][10]) {
// Do something
}
Pass array containing pointers
void processArr(int *a[10]) {
// Do Something
}
// When calling
int *array[10];
for(int i = 0; i < 10; i++)
array[i] = new int[10];
processArr(array);
Pass a pointer to a pointer
void processArr(int **a) {
// Do Something
}
// When calling:
int **array;
array = new int *[10];
for(int i = 0; i <10; i++)
array[i] = new int[10];
processArr(array);
#include<iostream>
using namespace std;
void processArr(int a[][2]) {
cout << "element at index 1,1 is " << a[1][1];
}
int main() {
int arr[2][2];
arr[0][0] = 0;
arr[0][1] = 1;
arr[1][0] = 2;
arr[1][1] = 3;
processArr(arr);
return 0;
}
This will give the output −
element at index 1,1 is 3
n'th Pentagonal Number - GeeksforGeeks | 25 Jan, 2022
Given an integer n, find the nth Pentagonal number. The first three pentagonal numbers are 1, 5, and 12 (please see the diagram below). The n’th pentagonal number Pn is the number of distinct dots in a pattern of dots consisting of the outlines of regular pentagons with sides up to n dots, when the pentagons are overlaid so that they share one vertex [Source: Wikipedia].
Examples:
Input: n = 1
Output: 1
Input: n = 2
Output: 5
Input: n = 3
Output: 12
In general, a polygonal number (triangular number, square number, etc) is a number represented as dots or pebbles arranged in the shape of a regular polygon. The first few pentagonal numbers are: 1, 5, 12, etc. If s is the number of sides in a polygon, the formula for the nth s-gonal number P (s, n) is
nth s-gonal number P(s, n) = (s - 2)n(n-1)/2 + n
If we put s = 5, we get
n'th Pentagonal number Pn = 3*n*(n-1)/2 + n
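The substitution can be sanity-checked with a few lines of Python (the function names here are just for illustration):

```python
# General s-gonal number: P(s, n) = (s - 2) * n * (n - 1) / 2 + n
def polygonal(s, n):
    return (s - 2) * n * (n - 1) // 2 + n

# Setting s = 5 must agree with the pentagonal closed form 3*n*(n-1)/2 + n,
# which simplifies to (3*n*n - n) / 2 used in the programs below.
def pentagonal(n):
    return 3 * n * (n - 1) // 2 + n

print([pentagonal(n) for n in range(1, 6)])  # → [1, 5, 12, 22, 35]
assert all(polygonal(5, n) == pentagonal(n) for n in range(1, 100))
```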
Examples:
Pentagonal Number
Below are the implementations of the above idea in different programming languages.
C++

// C++ program for above approach
#include <bits/stdc++.h>
using namespace std;

// Finding the nth pentagonal number
int pentagonalNum(int n)
{
    return (3 * n * n - n) / 2;
}

// Driver code
int main()
{
    int n = 10;
    cout << "10th Pentagonal Number is = "
         << pentagonalNum(n);
    return 0;
}
// This code is contributed by Code_Mech

C

// C program for above approach
#include <stdio.h>
#include <stdlib.h>

// Finding the nth Pentagonal Number
int pentagonalNum(int n)
{
    return (3 * n * n - n) / 2;
}

// Driver program to test above function
int main()
{
    int n = 10;
    printf("10th Pentagonal Number is = %d \n", pentagonalNum(n));
    return 0;
}

Java

// Java program for above approach
class Pentagonal {
    int pentagonalNum(int n)
    {
        return (3 * n * n - n) / 2;
    }
}

public class GeeksCode {
    public static void main(String[] args)
    {
        Pentagonal obj = new Pentagonal();
        int n = 10;
        System.out.printf("10th pentagonal number is = "
                          + obj.pentagonalNum(n));
    }
}

Python3

# Python program for finding pentagonal numbers
def pentagonalNum(n):
    # integer division keeps the result an int
    return (3 * n * n - n) // 2

n = 10
print("10th Pentagonal Number is = ", pentagonalNum(n))

C#

// C# program for above approach
using System;

class GFG {

    static int pentagonalNum(int n)
    {
        return (3 * n * n - n) / 2;
    }

    public static void Main()
    {
        int n = 10;
        Console.WriteLine("10th pentagonal"
                          + " number is = "
                          + pentagonalNum(n));
    }
}
// This code is contributed by vt_m.

PHP

<?php
// PHP program for above approach

// Finding the nth Pentagonal Number
function pentagonalNum($n)
{
    return (3 * $n * $n - $n) / 2;
}

// Driver Code
$n = 10;
echo "10th Pentagonal Number is = ", pentagonalNum($n);

// This code is contributed by ajit
?>

Javascript

<script>
// Javascript program for above approach
function pentagonalNum(n)
{
    return (3 * n * n - n) / 2;
}

// Driver code to test above methods
let n = 10;
document.write("10th pentagonal"
               + " number is = "
               + pentagonalNum(n));

// This code is contributed by avijitmondal1998.
</script>
Output :
10th Pentagonal Number is = 145
Time Complexity: O(1)
Auxiliary Space: O(1)
Reference: https://en.wikipedia.org/wiki/Polygonal_number
This article is contributed by Mazhar Imam Khan.
PHP - json_encode() Function
The json_encode() function returns the JSON representation of a value.
string json_encode( mixed $value [, int $options = 0 [, int $depth = 512 ]] )
The json_encode() function returns a string containing the JSON representation of the supplied value. The encoding is affected by the supplied options; additionally, the encoding of float values depends on the value of serialize_precision.
The json_encode() function returns a JSON encoded string on success or FALSE on failure.
<?php
$post_data = array(
"item" => array(
"item_type_id" => 1,
"tring_key" => "AA",
"string_value" => "Hello",
"string_extra" => "App",
"is_public" => 1,
"is_public_for_contacts" => 0
)
);
echo json_encode($post_data)."\n";
?>
{"item":{"item_type_id":1,"tring_key":"AA","string_value":"Hello","string_extra":"App","is_public":1,"is_public_for_contacts":0}}
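As a cross-language comparison (not part of PHP itself), Python's standard-library json.dumps produces the same compact output when told to drop the spaces it inserts by default:

```python
import json

post_data = {
    "item": {
        "item_type_id": 1,
        "tring_key": "AA",
        "string_value": "Hello",
        "string_extra": "App",
        "is_public": 1,
        "is_public_for_contacts": 0,
    }
}

# separators=(",", ":") removes the spaces json.dumps adds by default,
# matching PHP's compact json_encode() output shown above
print(json.dumps(post_data, separators=(",", ":")))
```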
<?php
$array = array("Coffee", "Chocolate", "Tea");
// The JSON string created from the array
$json = json_encode($array, JSON_PRETTY_PRINT);
echo $json;
?>
[
"Coffee",
"Chocolate",
"Tea"
]
<?php
class Book {
public $title = "";
public $author = "";
public $yearofpublication = "";
}
$book = new Book();
$book->title = "Java";
$book->author = "James Gosling";
$book->yearofpublication = "1995";
$result = json_encode($book);
echo "The JSON representation is:".$result."\n";
echo "************************". "\n";
echo "Decoding the JSON data format into an PHP object:"."\n";
$decoded = json_decode($result);
var_dump($decoded);
echo $decoded->title."\n";
echo $decoded->author."\n";
echo $decoded->yearofpublication."\n";
echo "************************"."\n";
echo "Decoding the JSON data format into an PHP array:"."\n";
$json = json_decode($result,true);
// listing the array
foreach($json as $prop => $value)
echo $prop ." : ". $value;
?>
The JSON representation is:{"title":"Java","author":"James Gosling","yearofpublication":"1995"}
************************
Decoding the JSON data format into an PHP object:
object(stdClass)#2 (3) {
["title"]=>
string(4) "Java"
["author"]=>
string(13) "James Gosling"
["yearofpublication"]=>
string(4) "1995"
}
Java
James Gosling
1995
************************
Decoding the JSON data format into an PHP array:
title : Javaauthor : James Goslingyearofpublication : 1995
Code Division Multiplexing
Code division multiplexing (CDM) is a multiplexing technique that uses spread spectrum communication. In spread spectrum communications, a narrowband signal is spread over a larger band of frequency or across multiple channels via division. It does not constrict bandwidth’s digital signals or frequencies. It is less susceptible to interference, thus providing better data communication capability and a more secure private line.
When CDM is used to allow multiple signals from multiple users to share a common communication channel, the technology is called Code Division Multiple Access (CDMA). Each group of users is given a shared code and individual conversations are encoded in a digital sequence. Data is available on the shared channel, but only those users associated with a particular code can access the data.
Each communicating station is assigned a unique code. The codes assigned to the stations have the following properties −
If code of one station is multiplied by code of another station, it yields 0.
If code of one station is multiplied by itself, it yields a positive number equal to the number of stations.
The communication technique can be explained by the following example −
Consider that there are four stations w, x, y and z that have been assigned the codes cw , cx, cy and cz and need to transmit data dw , dx, dy and dz respectively. Each station multiplies its code with its data and the sum of all the terms is transmitted in the communication channel.
Thus, the data in the communication channel is dw.cw + dx.cx + dy.cy + dz.cz.
Suppose that at the receiving end, station z wants to receive the data sent by station y. In order to retrieve it, station z multiplies the received data by the code of station y, which is cy:
data = (dw.cw + dx.cx + dy.cy + dz.cz) . cy
     = dw.cw.cy + dx.cx.cy + dy.cy.cy + dz.cz.cy
     = 0 + 0 + 4.dy + 0
     = 4.dy
Dividing the result by the number of stations (4) gives the original data dy.
Thus, it can be seen that station z has received data from only station y while neglecting the other codes.
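The recovery arithmetic described above is easy to verify with a short Python sketch (the variable names are illustrative; the codes are the four-station Walsh-style sequences used later in this article):

```python
# Chip sequences (orthogonal codes) for four stations w, x, y, z
codes = {
    "w": [+1, -1, -1, +1],
    "x": [+1, +1, -1, -1],
    "y": [+1, -1, +1, -1],
    "z": [+1, +1, +1, +1],
}
data = {"w": 3, "x": -1, "y": 5, "z": 2}  # data values to send

# Channel carries the sum dw.cw + dx.cx + dy.cy + dz.cz
channel = [sum(data[s] * codes[s][i] for s in codes) for i in range(4)]

# To recover station y's data, multiply by cy and divide by the
# number of stations (a code multiplied by itself yields m = 4)
recovered = sum(c * cy for c, cy in zip(channel, codes["y"])) // 4
print(recovered)  # → 5
```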
The codes assigned to the stations are carefully generated codes called chip sequences or, more popularly, orthogonal sequences. The sequences consist of +1s and –1s and hold certain properties that enable communication.
The properties are −
A sequence has m elements, where m is the number of stations.
If a sequence is multiplied by a number, all elements are multiplied by that number.
For multiplying two sequences, the corresponding positional elements are multiplied and summed to give the result.
If a sequence is multiplied by itself, the result is m, i.e. the number of stations.
If a sequence is multiplied by another sequence, the result is 0.
For adding two sequences, we add the corresponding positional elements.
Let us ascertain the above properties through an example.
Consider the following chip sequences for the four stations w, x, y and z −
[+1 -1 -1 +1], [+1 +1 -1 -1], [+1 -1 +1 -1] and [+1 +1 +1 +1]
Each sequence has four elements.
If [+1 -1 -1 +1] is multiplied by 6, we get [+6 -6 -6 +6].
If [+1 -1 -1 +1] is multiplied by itself, i.e. [+1 -1 -1 +1]. [+1 -1 -1 +1], we get +1+1+1+1 = 4, which is equal to the number of stations.
If [+1 -1 -1 +1] is multiplied by [+1 +1 -1 -1], we get +1-1+1-1 = 0
If [+1 -1 -1 +1] is added to [+1 +1 -1 -1], we get [+2 0 -2 0].
The commonly used orthogonal codes are Walsh codes.
Algorithm Specification-Introduction in Data Structure
An algorithm is defined as a finite set of instructions that, if followed, performs a particular task. All algorithms must satisfy the following criteria:
Input. An algorithm has zero or more inputs, taken or collected from a specified set of objects.
Output. An algorithm has one or more outputs having a specific relation to the inputs.
Definiteness. Each step must be clearly defined; Each instruction must be clear and unambiguous.
Finiteness. The algorithm must always finish or terminate after a finite number of steps.
Effectiveness. All operations to be accomplished must be sufficiently basic that they can be done exactly and in a finite length of time.
We can depict an algorithm in many ways.
Natural language: use a natural language like English.
Flow charts: graphic representations called flowcharts, suitable only if the algorithm is small and simple.
Pseudo code: pseudo code avoids most issues of ambiguity without being tied to the syntax of any particular programming language.
Example 1: Algorithm for calculating factorial value of a number
Step 1: a number n is inputted
Step 2: variable final is set as 1
Step 3: final <= final * n
Step 4: decrease n
Step 5: verify if n is equal to 0
Step 6: if n is equal to zero, goto step 8 (break out of loop)
Step 7: else goto step 3
Step 8: the result final is printed
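The eight steps above translate directly into a short Python function (a sketch; the variable name `final` follows the steps):

```python
def factorial(n):
    final = 1              # Step 2: variable final is set as 1
    while n != 0:          # Steps 5-7: loop until n is 0
        final = final * n  # Step 3
        n = n - 1          # Step 4
    return final           # Step 8

print(factorial(5))  # 120
```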
A recursive algorithm calls itself, generally passing the return value of one call as a parameter to the next call. The parameter represents the input while the return value represents the output.
A recursive algorithm is defined as a method of simplification that divides the problem into sub-problems of the same nature. The result of one recursion is treated as the input for the next recursion. The repetition proceeds in a self-similar fashion. The algorithm calls itself with smaller input values and obtains the results by performing the operations on these smaller values. Generation of the factorial and of the Fibonacci number series are examples of recursive algorithms.
Example: Writing factorial function using recursion
int factorialA(int n)
{
    if (n <= 1)  // base case: without it, the recursion never terminates
        return 1;
    return n * factorialA(n - 1);
} | [
{
"code": null,
"e": 1216,
"s": 1062,
"text": "An algorithm is defined as a finite set of instructions that, if followed, performs a particular task. All algorithms must satisfy the following criteria"
},
{
"code": null,
"e": 1313,
"s": 1216,
"text": "Input. An algorithm has zero... |
Kotlin Variables | Variables are containers for storing data values.
To create a variable, use var or val, and assign a value to it with the equal sign (=):
var variableName = value
val variableName = value
var name = "John"
val birthyear = 1975
println(name) // Print the value of name
println(birthyear) // Print the value of birthyear
The difference between var and val is that variables declared
with the var keyword
can be changed/modified, while val variables
cannot.
Unlike many other programming languages, variables in Kotlin do not need to be declared with a specified
type (like "String" for text or "Int" for numbers, if you are familiar with those).
To create a variable in Kotlin that should store text and another that should store a number, look at the following example:
var name = "John" // String (text)
val birthyear = 1975 // Int (number)
println(name) // Print the value of name
println(birthyear) // Print the value of birthyear
Kotlin is smart enough to understand that "John" is a String (text), and that 1975 is an Int
(number) variable.
However, it is possible to specify the type if you insist:
var name: String = "John" // String
val birthyear: Int = 1975 // Int
println(name)
println(birthyear)
You can also declare a variable without assigning the value, and assign the
value later. However, this is only possible when you specify the type:
This works fine:
var name: String
name = "John"
println(name)
This will generate an error:
var name
name = "John"
println(name)
Note: You will learn more about Data Types in the next chapter.
When you create a variable with the val keyword, the value
cannot be changed/reassigned.
The following example will generate an error:
val name = "John"
name = "Robert" // Error (Val cannot be reassigned)
println(name)
When using var, you can change the value whenever you want:
var name = "John"
name = "Robert"
println(name)
The val keyword is useful when you want a variable to always store the same value, like PI (3.14159...):
val pi = 3.14159265359
println(pi)
Like you have seen with the examples above, the println() method is often used to display variables.
To combine both text and a variable, use the + character:
val name = "John"
println("Hello " + name)
You can also use the + character to add a variable to another variable:
val firstName = "John "
val lastName = "Doe"
val fullName = firstName + lastName
println(fullName)
For numeric values, the + character works as
a mathematical operator:
val x = 5
val y = 6
println(x + y) // Print the value of x + y
From the example above, you can expect:
x stores the value 5
y stores the value 6
Then we use the println() method to display the value of x + y,
which is 11
A variable can have a short name (like x and y) or more descriptive names (age, sum, totalVolume).
The general rules for Kotlin variable names are:
Names can contain letters, digits, underscores, and dollar signs
Names should start with a letter
Names can also begin with $ and _ (but we will not use it in this tutorial)
Names are case sensitive ("myVar" and "myvar" are different variables)
Names should start with a lowercase letter and cannot contain whitespace
Reserved words (like Kotlin keywords, such as
var or
String) cannot be used as names
You might notice that we used firstName and lastName as variable names in the example above, instead of firstname and lastname. This is called "camelCase", and it is considered as good practice as it makes it easier to read when you have a variable name with different words in it, for example "myFavoriteFood", "rateActionMovies" etc.
| [
{
"code": null,
"e": 50,
"s": 0,
"text": "Variables are containers for storing data values."
},
{
"code": null,
"e": 138,
"s": 50,
"text": "To create a variable, use var or val, and assign a value to it with the equal sign (=):"
},
{
"code": null,
"e": 188,
"s": 1... |
How to send a notification from a service in Android using Kotlin? | This example demonstrates how to send a notification from a service in Android using Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".MainActivity">
<EditText
android:id="@+id/editText"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:hint="Input" />
<Button
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:onClick="startService"
android:text="Start Service" />
<Button
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:onClick="stopService"
android:text="Stop Service" />
</LinearLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.content.Intent
import android.os.Bundle
import android.view.View
import android.widget.EditText
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat
class MainActivity : AppCompatActivity() {
lateinit var editText: EditText
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
editText = findViewById(R.id.editText)
}
fun startService(view: View) {
val input: String = editText.text.toString()
val serviceIntent = Intent(this, ExampleService::class.java)
serviceIntent.putExtra("inputExtra", input)
ContextCompat.startForegroundService(this, serviceIntent)
}
fun stopService(view: View) {
val serviceIntent = Intent(this, ExampleService::class.java)
stopService(serviceIntent)
}
}
Step 4 − Create a new class for service (ExampleService.kt) and add the following −
import android.app.*
import android.content.Intent
import android.os.Build
import android.os.IBinder
import androidx.annotation.RequiresApi
import androidx.core.app.NotificationCompat
class ExampleService : Service() {
private val channelId = "Notification from Service"
@RequiresApi(Build.VERSION_CODES.O)
override fun onCreate() {
super.onCreate()
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
val channel = NotificationChannel(
channelId,
"Channel human readable title",
NotificationManager.IMPORTANCE_DEFAULT
)
(getSystemService(NOTIFICATION_SERVICE) as NotificationManager).createNotificationChannel(
channel
)
}
}
override fun onStartCommand(intent: Intent, flags: Int, startId: Int): Int {
val input = intent.getStringExtra("inputExtra")
val notificationIntent = Intent(this, MainActivity::class.java)
val pendingIntent = PendingIntent.getActivity(
this,
0, notificationIntent, 0
)
val notification: Notification = NotificationCompat.Builder(this, channelId)
.setContentTitle("Example Service")
.setContentText(input)
.setSmallIcon(R.drawable.notification)
.setContentIntent(pendingIntent)
.build()
startForeground(1, notification)
return START_NOT_STICKY
}
override fun onBind(p0: Intent?): IBinder? {
return null
}
}
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen.
Click here to download the project code. | [
{
"code": null,
"e": 1155,
"s": 1062,
"text": "This example demonstrates how to send a notification from a service in Android using Kotlin."
},
{
"code": null,
"e": 1284,
"s": 1155,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all re... |
C# | How to check whether a List contains a specified element - GeeksforGeeks | 01 Feb, 2019
List<T>.Contains(T) Method is used to check whether an element is in the List<T> or not.
Properties of List:
It is different from an array: a list can be resized dynamically, but an array cannot.
List class can accept null as a valid value for reference types and it also allows duplicate elements.
If the Count becomes equals to Capacity then the capacity of the List increases automatically by reallocating the internal array. The existing elements will be copied to the new array before the addition of the new element.
Syntax:
public bool Contains (T item);
Here, item is the object which is to be located in the List<T>. The value can be null for reference types.
Return Value: This method returns True if the item is found in the List<T> otherwise returns False.
Below programs illustrate the use of List<T>.Contains(T) Method:
Example 1:
// C# Program to check whether the
// element is present in the List
// or not
using System;
using System.Collections;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List<T> of Integers
        List<int> firstlist = new List<int>();

        // Adding elements to List
        firstlist.Add(1);
        firstlist.Add(2);
        firstlist.Add(3);
        firstlist.Add(4);
        firstlist.Add(5);
        firstlist.Add(6);
        firstlist.Add(7);

        // Checking whether 4 is present
        // in List or not
        Console.Write(firstlist.Contains(4));
    }
}
Output:
True
Example 2:
// C# Program to check whether the
// element is present in the List
// or not
using System;
using System.Collections;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List<T> of String
        List<String> firstlist = new List<String>();

        // Adding elements to List
        firstlist.Add("Geeks");
        firstlist.Add("For");
        firstlist.Add("Geeks");
        firstlist.Add("GFG");
        firstlist.Add("C#");
        firstlist.Add("Tutorials");
        firstlist.Add("GeeksforGeeks");

        // Checking whether Java is present
        // in List or not
        Console.Write(firstlist.Contains("Java"));
    }
}
Output:
False
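For comparison only (not part of the C# API), Python expresses the same membership check with the `in` operator:

```python
firstlist = ["Geeks", "For", "Geeks", "GFG", "C#",
             "Tutorials", "GeeksforGeeks"]

# Analogous to firstlist.Contains("Java") in C#
print("Java" in firstlist)   # False
print("GFG" in firstlist)    # True
```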
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.contains?view=netframework-4.7.2
CSharp-Collections-Namespace
CSharp-Generic-List
CSharp-Generic-Namespace
CSharp-method
C#
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
| [
{
"code": null,
"e": 24518,
"s": 24490,
"text": "\n01 Feb, 2019"
},
{
"code": null,
"e": 24607,
"s": 24518,
"text": "List<T>.Contains(T) Method is used to check whether an element is in the List<T> or not."
},
{
"code": null,
"e": 24627,
"s": 24607,
"text": "P... |
Insert values in two tables with a single stored procedure call in MySQL | Following is the syntax to insert values in two tables with a stored procedure −
DELIMITER //
CREATE PROCEDURE yourProcedureName(anyVariableName int)
BEGIN
insert into yourTableName1(yourColumnName1) values(yourVariableName);
insert into yourTableName2(yourColumnName2) values(yourVariableName);
END
//
Let us first create a table −
mysql> create table DemoTable1
-> (
-> StudentScore int
-> );
Query OK, 0 rows affected (0.58 sec)
Following is the second table −
mysql> create table DemoTable2
-> (
-> PlayerScore int
-> );
Query OK, 0 rows affected (0.52 sec)
Here is the query to create a stored procedure and insert values in two tables −
mysql> DELIMITER //
mysql> CREATE PROCEDURE insert_proc(value int )
-> BEGIN
-> insert into DemoTable1(StudentScore) values(value);
-> insert into DemoTable2(PlayerScore) values(value);
-> END
-> //
Query OK, 0 rows affected (0.16 sec)
mysql> DELIMITER ;
Now you can call the stored procedure using CALL command −
mysql> call insert_proc(89);
Query OK, 1 row affected (0.29 sec)
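For comparison only (this is not MySQL), the same pattern — one call that inserts into two tables — can be sketched in Python with the standard-library sqlite3 module, where an ordinary function plays the stored procedure's role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table DemoTable1 (StudentScore int)")
conn.execute("create table DemoTable2 (PlayerScore int)")

def insert_proc(value):
    # one call, two inserts -- like the stored procedure above
    conn.execute("insert into DemoTable1(StudentScore) values (?)", (value,))
    conn.execute("insert into DemoTable2(PlayerScore) values (?)", (value,))
    conn.commit()

insert_proc(89)
print(conn.execute("select * from DemoTable1").fetchall())  # [(89,)]
print(conn.execute("select * from DemoTable2").fetchall())  # [(89,)]
```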
Display all records from both the tables using select statement −
mysql> select * from DemoTable1;
+--------------+
| StudentScore |
+--------------+
| 89 |
+--------------+
1 row in set (0.00 sec)
mysql> select * from DemoTable2;
+-------------+
| PlayerScore |
+-------------+
| 89 |
+-------------+
1 row in set (0.00 sec) | [
{
"code": null,
"e": 1143,
"s": 1062,
"text": "Following is the syntax to insert values in two tables with a stored procedure −"
},
{
"code": null,
"e": 1377,
"s": 1143,
"text": "DELIMITER //\nCREATE PROCEDURE yourProcedureName(anyVariableName int)\n BEGIN\n insert into yourT... |
How to Learn Julia When You Already Know Python | by DJ Passey | Towards Data Science | Julia is a newer, award-winning programming language that is simple to learn like Python but executes as fast as C. Don’t believe it? It’s really true. (Click here for a multiple language speed comparison.)
Julia offers more than just syntax and speed. To explain why they developed the language, the creators of Julia said:
“We want the speed of C with the dynamism of Ruby. We want a language that’s homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. [1]”
The jury is still out, but it feels like they delivered. When Julia 1.0 was released, the bones of a language with the potential to reach most, if not all, of their goals, was born.
At the same time, Julia has a long way to go before it reaches the maturity of mainstream programming languages. Julia’s packages need work and its documentation and learning resources can be improved. Luckily, an active (even zealous) developer community is working on these issues.
Even though the language is growing, there are a lot of reasons to learn Julia, especially if you are interested in machine learning, data science or scientific computing.
Python users will typically be able to pick up Julia syntax very quickly. The syntax is similar to Python and has many conventions that will be familiar to Python users.
However, programming in Julia is fundamentally different than programming Python. Chances are, the first Julia code written by Python users will look and act a lot like Python. While there are no huge problems with this approach, Julia that looks like Python will probably be inefficient and miss out on important aspects of the language.
Julia operates under different paradigms — generic functions, clever dispatch, and thoughtful typing (to name a few) and many of these ideas do not appear in Python at all. Therefore, the goal of this article is to teach three simple and important Julia concepts: The type hierarchy, multiple dispatch and user defined types. These concepts were chosen to help speed up Pythonic Julia programs, illustrate the ways that Julia is different than Python and introduce Python users to new programming ideas.
Therefore, because I am focusing on concepts, I won’t cover installing Julia and learning basic syntax here. For installation and syntax, I recommend the following resources:
The first chapter of Julia Programming for Operations Research by Changhyun Kwon contains an excellent installation and setup guide.
A comprehensive written guide to installation and syntax by J. Fernandez-Villaverde.
Learn syntax from Julia by Example.
Learn syntax from a video by Derek Banas.
Many of Julia’s speed, versatility and composability advantages are due, in part, to the typing system. In Python, types can be an afterthought, so thinking through typing may seem tedious. However, Julia keeps it simple and rewards careful thinking with a speed boost.
Concrete/Primitive Types
Julia uses two different kinds of types: concrete types and abstract types. Each one has different uses. Concrete types include the typical String, Bool, Int64, etc. and are used for standard computations. The type Float64 is a concrete type meaning that Float64 can be instantiated and used in computations.
Abstract Types
The types Union{}, AbstractFloat, Real, and Any are all abstract types. An abstract type cannot be instantiated. Instead, abstract types are containers for grouping together similar kinds of data. They are often used to indicate to the compiler that a function may be called on any subtype of the abstract type.
""" This function accepts Float16, Float32, Float64 because they are all subtypes of AbstractFloat"""function g(a::AbstractFloat) return floor(Int, a) end
The types Any and Union{} are special. Union{} is predefined to be the subtype of all types. It is the bottom of the type hierarchy. Similarly, every type is a subtype of Any, making it the top of the type hierarchy.
Why Use Abstract Types?
Abstract types are useful because functions defined to act on an abstract type, are able to act on all subtypes of the abstract type.
As an example, suppose a developer needs an array-like data structure. In Julia they can define their own application specific structure and make sure it satisfies the requirements of an AbstractArray type. Then, all functions in the Julia ecosystem, defined to operate on AbstractArray data will work on the developer’s array-like data structure. Because of this feature, many of Julia’s packages work together smoothly, even though they were not designed together.
Contrast this with the Python packages. Almost every package that uses arrays is designed to work with numpy arrays. This creates a huge dependency on numpy. If a programmer wants to create their own array and call numpy functions on it, it will probably raise errors. Very few Python libraries will work with a self defined object. In contrast, abstract types in Julia give developers more flexibility and help make packages more composable.
Operators
The binary operator :: is used to assert that a variable is a certain type. More specifically, the operator can initialize a variable as a specific type, signify that a function argument must be a certain type, or assert that a predefined variable is a specific type. Each of these uses is demonstrated below.
# Initialize a Float64
x::Float64 = 100

# Argument z must be an Int64
function f(z::Int64)
    return z^2
end

# Assert x is a Float64
x::Float64 # (Does nothing)

# Assert that x is an Int64
x::Int64 # (Raises error)
It is worth mentioning that we can declare variables or define functions without a type assert, e.g. x = 100. (The variable x will be an Int64 in this case.)
The subtype operator, <:, determines if one type is a subtype of another. If we want to compare the type of two variables x and y, evaluating the expression typeof(x) <: typeof(y) will return true if the type of variable x is a subtype of variable y’s type. As another example, consider the expression:
Union{} <: Float64 <: AbstractFloat <: Real <: Any
This evaluates to true signifying that we have ordered the types correctly in the type hierarchy. (The <: operator can compare these objects because they are types and not variables.)
More Reading On Types:
Julia documentation on types
A great tutorial on types from “Learn Julia the Hard Way”
One of the creators of Julia, (Stephan Karpinski) explaining the type system on Stack Overflow.
This concept is probably the most important concept to understand about Julia. From a development perspective, many of the advantages offered by Julia stem from multiple dispatch.
Multiple dispatch is when a function behaves differently depending on the types of its arguments. It is similar to function overloading but not exactly the same.
Multiple dispatch occurs when a programmer adds type annotations to a function definition. Consider the following example:
We need a function f to square its input and then compute its value mod 4. In Julia there are 3 equivalent ways to define f:
# Verbose definition
function f(x)
    return x^2 % 4
end

# Mathematical notation
f(x) = x^2 % 4

# Like a Python lambda
f = x -> x^2 % 4
Assume that we always need f to output an integer, but its input, x, can be a String, Float64, or Int64, and we won’t know the type of x until runtime. In Python, this is solved with:
from math import ceil

def f_py(x):
    if type(x) == str:
        x = float(x)
    if type(x) == float:
        x = ceil(x)
    return x**2 % 4
In Julia, we could write a function that looked like the Python function above:
function f_py(x)
    if isa(x, String)
        x = parse(Float64, x)
    end
    if isa(x, Float64)
        x = ceil(Int64, x)
    end
    x^2 % 4
end
However, we would be better off writing:
f(x::Int64) = x^2 % 4
f(x::Float64) = f(ceil(Int64, x))
f(x::String) = f(parse(Float64, x))
This collection of definitions does the same thing as the Python function f_py. However, the action of f on x depends on the type of x. Each of the three definitions specifies what f will do with a particular type.
If f is passed an Int64, it will square it and mod by four.
If f is passed a Float64, it will compute the integer ceiling above the float and call f on that integer. This invokes the integer version of f described in 1.
If f is passed a String, it will convert it to a Float64, then call f on the float, which will invoke the float version of f, described in 2. As we already saw, the Float64 version converts to an Int64, and calls the Int64 version of f.
When these functions are broadcasted over an array with 3 million elements of mixed types, the dispatched function finishes in 0.039 seconds. The Python version of f_py is 50 times slower than f. Furthermore, the dispatched function f is twice as fast as the pythonic Julia.
On one hand, Julia is fundamentally faster than Python, but we also see that multiple dispatch is faster than pythonic Julia. This is because in Julia, the correct version of f is determined at runtime with a lookup table, and this avoids multiple if statement evaluations.
As you can see, multiple dispatch is fast, and can be an effective solution to a variety of programming challenges, making it one of the most useful tools of the Julia language.
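Python offers a limited, single-argument analogue of this idea via functools.singledispatch. The sketch below mirrors the three Julia methods for f (my own illustration, not from the article — Python dispatches on the first argument only):

```python
from functools import singledispatch
from math import ceil

@singledispatch
def f(x):
    # fallback when no registered type matches
    raise TypeError(f"unsupported type: {type(x)}")

@f.register
def _(x: int):     # like f(x::Int64)
    return x ** 2 % 4

@f.register
def _(x: float):   # like f(x::Float64)
    return f(ceil(x))

@f.register
def _(x: str):     # like f(x::String)
    return f(float(x))

print(f(3), f(2.2), f("2.2"))  # 1 1 1
```

As in Julia, the correct version is selected by a type lookup at call time instead of a chain of if statements.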
More on Multiple Dispatch
How Julia Uses Multiple Dispatch To Beat Python. More examples of multiple dispatch from DJ Passey.
The Unreasonable Effectiveness of Multiple Dispatch. A presentation by Stephan Karpinski.
Some thoughts on generic programming blog post by Erik Schnetter.
It may be a shock, but as it turns out, Julia isn’t object oriented. There are no classes, and no objects with member functions. However, by leveraging multiple dispatch and the type system, Julia gains the advantages of object oriented programming languages plus added flexibility.
Instead of objects, Julia uses structs, which are user defined composite types. Structs have no internal functions. They are just collections of named types. In the example below, we define a NBA player struct:
struct NBAPlayer
    name::String
    height::Int
    points::Float64
    rebounds::Float64
    assists::Float64
end
The type NBAPlayer has a default constructor:
doncic = NBAPlayer("Luka Doncic", 79, 24.4, 8.5, 7.1)
Each field can be accessed with familiar dot notation: doncic.name, doncic.height, doncic.points, doncic.rebounds, and doncic.assists.
You can define additional constructors as long as they accept a different combination of types from the default. This is multiple dispatch at work:
# Constructor with no arguments
function NBAPlayer()
    # Make an empty player (all five fields must be supplied)
    return NBAPlayer("", 0, 0.0, 0.0, 0.0)
end
With a struct defined, we can give new definitions to functions in the Julia base:
function Base.show(io::IO, player::NBAPlayer)
    print(io, player.name)
    print(io, ": ")
    print(io, (player.points, player.rebounds, player.assists))
end
This defines how the struct NBAPlayer is displayed when it is printed. It is similar to defining a __repr__() function for a class in Python. However, instead of defining internal functions like we would in Python, in Julia we provide new definitions for how external functions should act on the struct.
Python allows developers to determine how certain operators should act on a class with magic methods. Programmers can write their own definition for how +, -, += and others should act on a class. However, this is fundamentally limited by the list of operators with magic methods. In Julia, any function can be given a definition for any combination of types or structs.
Though Julia is easy to pick up, it can be tricky to master. Learning these concepts puts developers on the road to mastering Julia. By practicing and experimenting with these ideas, you can develop the skills necessary to write high quality Julia programs.
[1] J. Bezanson, S. Karpinski V. Shah, A. Edelman, Why We Created Julia (2012), JuliaLang.org | [
{
"code": null,
"e": 379,
"s": 172,
"text": "Julia is a newer, award-winning programming language that is simple to learn like Python but executes as fast as C. Don’t believe it? It’s really true. (Click here for a multiple language speed comparison.)"
},
{
"code": null,
"e": 497,
"s... |
Exporting DTA File Using pandas.DataFrame.to_stata() function in Python - GeeksforGeeks | 14 Sep, 2021
This method is used to write the DataFrame to a Stata dataset file. “dta” files contain a Stata dataset. A DTA file is a database file and it is used by IWIS Chain Engineering.
Syntax : DataFrame.to_stata(path, convert_dates=None, write_index=True, time_stamp=None)
Parameters :
path : str, buffer or path object
convert_dates : dict
write_index : bool
time_stamp : datetime
Returns : writes the DataFrame object to Stata dta format, i.e. it produces a .dta file.
Example 1: Create DTA file
Here we will create dataframe and then saving into the DTA format using DataFrame.to_stata().
Python3
# importing package
import numpy
import pandas as pd

# create and view data
df = pd.DataFrame({
    'person': ["Rakesh", "Kishan", "Adesh", "Nitish"],
    'weight': [50, 60, 70, 80]
})
display(df)

# use pandas.DataFrame.to_stata method
# to extract .dta file
df.to_stata('person.dta')
Output :
Example 2:
Python3
# importing package
import pandas as pd

# create and view data
df = pd.DataFrame({
    'mobiles': ["Apple", "MI", "Karban", "JIO"],
    'prizes': [75000, 9999, 6999, 5999]
})
display(df)

# use pandas.DataFrame.to_stata method
# to extract .dta file
df.to_stata('mobiles.dta')
Output :
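To verify the export, the file can be read back with pandas.read_stata (a round-trip sketch using the DataFrame from Example 1):

```python
import pandas as pd

df = pd.DataFrame({
    'person': ["Rakesh", "Kishan", "Adesh", "Nitish"],
    'weight': [50, 60, 70, 80]
})
df.to_stata('person.dta', write_index=False)

# read the Stata file back into a DataFrame
back = pd.read_stata('person.dta')
print(back['person'].tolist())
```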
Python pandas-dataFrame-methods
Python-pandas
Python
| [
{
"code": null,
"e": 24292,
"s": 24264,
"text": "\n14 Sep, 2021"
},
{
"code": null,
"e": 24469,
"s": 24292,
"text": "This method is used to writes the DataFrame to a Stata dataset file. “dta” files contain a Stata dataset. DTA file is a database file and it is used by IWIS Chain ... |
Generate random alpha-numeric string in JavaScript - GeeksforGeeks | 24 Jun, 2019
The task is to generate a random alpha-numeric string of specified length using JavaScript. We are going to discuss a few techniques.
Approach 1:
Create a function which takes 2 arguments: one is the length of the string that we want to generate and the other is the set of characters that we want to be present in the string.
Declare a new variable ans = ''.
Traverse the string in reverse order using for loop.
Use the JavaScript Math.random() method to generate a random number and multiply it with the length of the string to get a random index.
Use JavaScript Math.floor( ) to round off it and add into the ans.
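The same index-sampling approach, sketched in Python for comparison (the function name is my own; random.randrange handles the floor-of-random-index step):

```python
import random

def random_str(length, chars):
    ans = ''
    for _ in range(length):
        # pick a character at a random index into the pool
        ans += chars[random.randrange(len(chars))]
    return ans

print(random_str(20, '12345abcde'))
```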
Example 1: This example uses the Math.random() method to generate the random index and then appends the character from the string we passed.
<!DOCTYPE HTML>
<html>

<head>
    <title>
        Generate random alpha-numeric string in JavaScript
    </title>
</head>

<body style="text-align:center;" id="body">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP" style="font-size: 19px; font-weight: bold;">
    </p>
    <button onClick="GFG_Fun()">
        click here
    </button>
    <p id="GFG_DOWN" style="color: green; font-size: 24px; font-weight: bold;">
    </p>
    <script>
        var up = document.getElementById('GFG_UP');
        var down = document.getElementById('GFG_DOWN');
        up.innerHTML = 'Click on the button to generate alpha-numeric string';

        function randomStr(len, arr) {
            var ans = '';
            for (var i = len; i > 0; i--) {
                ans += arr[Math.floor(Math.random() * arr.length)];
            }
            return ans;
        }

        function GFG_Fun() {
            down.innerHTML = randomStr(20, '12345abcde');
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
Approach 2:
First generate a random number using the Math.random() method.
Use JavaScript toString(36) to convert it into base 36 (26 characters + 0 to 9), which is also an alpha-numeric string.
Use the JavaScript String.slice() method to get the part of the string starting from position 2.
Example 2: This example first generates a random number (0-1) and then converts it to base 36, which is also an alpha-numeric string, by using the toString(36) method.
<!DOCTYPE HTML>
<html>

<head>
    <title>
        Generate random alpha-numeric string in JavaScript
    </title>
</head>

<body style="text-align:center;" id="body">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP" style="font-size: 19px; font-weight: bold;">
    </p>
    <button onClick="GFG_Fun()">
        click here
    </button>
    <p id="GFG_DOWN" style="color: green; font-size: 24px; font-weight: bold;">
    </p>
    <script>
        var up = document.getElementById('GFG_UP');
        var down = document.getElementById('GFG_DOWN');
        up.innerHTML = 'Click on the button to generate alpha-numeric string';

        function GFG_Fun() {
            down.innerHTML = Math.random().toString(36).slice(2);
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
Predict Bitcoin prices by using Signature time series modelling | by Rattana Pukdee | Towards Data Science

First, I would like to give a short introduction to the Signature method. According to Wikipedia, a rough path is a generalization of the notion of a smooth path that allows the construction of a robust solution theory for controlled differential equations driven by classically irregular signals, for example, a Wiener process. The theory was developed in the 1990s by Terry Lyons. The aim of the mathematics is to describe a smooth but potentially highly oscillatory and multidimensional path X effectively.
The Signature is a homomorphism from the monoid of paths (under concatenation) into the group-like elements of the free tensor algebra. It provides a graduated summary of path X. Here is a formal maths definition of the Signature transformation from A Primer on the Signature Method in Machine Learning.
For a path $X : [a, b] \to \mathbb{R}^d$, define the signature of the path as the collection of coordinate iterated integrals

$$S(X)_{a,b} = \big(1,\; S(X)^{1}_{a,b}, \ldots, S(X)^{d}_{a,b},\; S(X)^{1,1}_{a,b},\; S(X)^{1,2}_{a,b}, \ldots\big),$$

where each coordinate iterated integral is

$$S(X)^{i_1,\ldots,i_k}_{a,b} = \int_{a < t_1 < \cdots < t_k < b} \mathrm{d}X^{i_1}_{t_1} \cdots \mathrm{d}X^{i_k}_{t_k}.$$
To make a long story short, Signature is a transformation of a path into a sequence that encapsulates summaries of the path.
These graduated summaries or features of a path are at the heart of the definition of a rough path; locally they remove the need to look at the fine structure of the path. Taylor’s theorem explains how any smooth function can, locally, be expressed as a linear combination of certain special functions (monomials based at that point).
Coordinate iterated integrals (terms of the signature) form a more subtle algebra of features that can describe a stream or path in an analogous way; they allow a definition of rough path and form a natural linear “basis” for continuous functions on paths.
There are many advantages of using Signature as a basis for functions on paths. First, Signature features are more robust for a rough path. Second, although the signature of a path is an infinitely long sequence, we can use a truncated version of it as a basis to approximate a continuous function without losing much information.
Moreover, Signature features are scalable. To give an illustration, if we want to model a future crude oil price as a function of historical crude oil prices.
We can use past oil prices as features. The more prices we use, the more information about its behaviour and the more accurate the model is, but with more computationally expensive. That is, 1,000 historical prices lead to 1,000 features and 1,000,000 historical prices lead to 1,000,000 features.
Alternatively, we can construct a path from past oil prices and use the truncated signature of that path as features. Even when we use more past prices for our path, the number of features will be the same. For example, let X be a path of dimension 2, the level 2 truncated signature of X includes only 7 elements. That is using 1,000 past prices or 1,000,000 past prices will give us exactly 7 signature features. This would help with computation time (if we take into account the time we spend calculating the signature of a path) and potentially give us insight about the data (one signature feature may turn out to be important). However, we need to be careful about using a too low level of truncation which may lead to an underfitting.
If you want more information about maths behind the Signature, I recommend you to read
Rough paths, Signatures and the modelling of functions on streams
A Primer on the Signature Method in Machine Learning
Here are examples of an application of the Signature features
Extracting information from the signature of a financial data stream
Sparse arrays of signatures for online character recognition
Application of the Signature Method to Pattern Recognition in the CEQUEL Clinical Trial
QuantStart — Rough Path Theory and Signatures Applied To Quantitative Finance
Derivatives pricing using signature payoffs
Recently, we have seen a remarkable rise of cryptocurrency trading, where the most popular currency, Bitcoin, reached its peak at almost $20,000 USD/BTC at the end of 2017, while there was a big crash in November 2018 to around $3,000 USD/BTC. The digital money is quite new in the financial market, and we would say that its behaviour is almost unpredictable. Knowing that the signature can capture meaningful properties of a path and can be used as a linear basis to approximate a continuous function of the path, the author wants to explore how this promising method performs when used to predict the Bitcoin price. We will compare the result with the XGBoost algorithm, a current state-of-the-art machine learning algorithm.
We will use daily Bitcoin to USD prices data from https://www.cryptodatadownload.com/. We will use data from Gemini which is one of the biggest cryptocurrency trading platforms in the US. Our goal is to use a window of size 30 days to predict the mean price of the next 10 days. For the signature method, we will use a truncated signature of the 30-day prices as features + Lasso linear regression while for the XGBoost, we will use the 30-day prices as features.
We explore the dataset using pandas.
We need to remove the first row of the dataframe, reverse the dataframe and use Date as an index.
So now we are ready to work with the dataframe. Let’s visualise the price by plotting the Close price first.
# Plot to visualise data
import matplotlib.pyplot as plt

ax = BTC_price.plot(y='Close', figsize=(12, 6), legend=True, grid=True, use_index=True)
plt.show()
There are a few interesting periods that we may be interested in, a boom on Oct 2017 along with crashes on Jan 2018 and Oct 2018. First, we will use data from Jan 2017 to Nov 2017 to see if the model could predict the boom period.
# select duration
initial_date = '2017-01-01'
finish_date = '2017-12-01'
BTC_price_time = BTC_price[initial_date:finish_date]
Next, we will construct features for our machine learning algorithm. Firstly, we will write a function that produces a window of historical prices of size h and means of the next future f prices.
We will test our function with a sequence of Close prices of length 10
BTC_price_time['Close'].head(10)
GetWindow(BTC_price_time.loc[:,'Close'].head(10), h_window = 5, f_window =2)
We get a dataframe that contains a rolling window of size 5 as shown below.
GetNextMean(BTC_price_time.loc[:,'Close'].head(10), h_window = 5, f_window =2)
GetNextMean provides us with a dataframe that contains means of 2 consecutive prices start at the 6th price. For example, 896.12 = (893.49+898.75)/2.
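The bodies of these two helpers are not shown in the post, so here is a minimal sketch consistent with the 10-price example above; the names GetWindow and GetNextMean are the author's, but the implementations are my assumption:

```python
# Hypothetical reconstruction: windows of size h_window, targets = mean of
# the next f_window prices, aligned so every row has a target.
import pandas as pd

def GetWindow(prices, h_window, f_window):
    # one row per position that still has f_window future prices available,
    # so the rows line up with the targets produced by GetNextMean
    rows = [prices.iloc[i:i + h_window].tolist()
            for i in range(len(prices) - h_window - f_window + 1)]
    return pd.DataFrame(rows)

def GetNextMean(prices, h_window, f_window):
    # mean of the f_window prices following each history window
    means = [prices.iloc[i + h_window:i + h_window + f_window].mean()
             for i in range(len(prices) - h_window - f_window + 1)]
    return pd.DataFrame({"next_mean": means})
```

With 10 prices, h_window = 5 and f_window = 2, this yields 4 aligned window/target rows, matching the example output described above.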
In addition to the prices window, we add a time column for features as well.
Now, let's construct the signature features. As mentioned above, signatures take the form of iterated integrals of a continuous path, but we only have discrete data points. There are various ways to transform the discrete data points we have into a continuous path. For example,
piece-wise linear interpolation
rectilinear interpolation
Here are illustrations of each transformation from A Primer on the Signature Method in Machine Learning. For two one-dimensional sequence of length 4,
However, there is another interesting transformation, the Lead-Lag transformation, which transforms a one-dimensional path into a two-dimensional path.
Here, we will use this Lead-Lag transform on our discrete data points and use the signature of that path for our features.
We test with a sequence (1,1),(2,4),(3,2),(4,6).
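The construction can be sketched in a few lines; this is a hypothetical helper for the price coordinate only (the article's final path also carries a time coordinate):

```python
def lead_lag(values):
    # Lead-lag transform: turn a 1-D sequence into a 2-D path whose first
    # coordinate (the "lead") moves one step ahead of the second (the "lag").
    path = []
    for i in range(len(values)):
        path.append((values[i], values[i]))          # lead and lag agree
        if i + 1 < len(values):
            path.append((values[i + 1], values[i]))  # lead jumps ahead first
    return path
```

For the prices 1, 4, 2, 6 this gives the seven points (1,1), (4,1), (4,4), (2,4), (2,2), (6,2), (6,6).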
Next, it’s time to calculate the truncated signature of a path! We are lucky enough and don’t have to write a function to calculate the iterated integrals. Apparently, there is a package ESig that will do the (dirty) works for us though it is still in the active stages of development.
pip install esig

import esig.tosig as ts
Following is the documentation of ts.stream2sig(...):

stream2sig(array(no_of_ticks x signal_dimension), signature_degree) reads a 2-dimensional numpy array of floats, "the data in stream space", and returns a numpy vector containing the signature of the vector series up to the given signature_degree.
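As a sanity check on what stream2sig returns at depth 2, the level-2 iterated integrals of a piecewise-linear stream can be computed in pure Python with Chen's identity; this is an illustrative cross-check, not the ESig implementation (which also handles higher truncation levels):

```python
# Level-2 signature of a piecewise-linear stream via Chen's identity.
def sig_level2(stream):
    d = len(stream[0])
    s1 = [0.0] * d                       # level 1: S^i, the total increments
    s2 = [[0.0] * d for _ in range(d)]   # level 2: S^{ij}
    for p, q in zip(stream, stream[1:]):
        dx = [q[i] - p[i] for i in range(d)]
        for i in range(d):
            for j in range(d):
                # Chen: S^{ij}(X * seg) = S^{ij}(X) + S^i(X)*dx^j + dx^i*dx^j/2
                s2[i][j] += s1[i] * dx[j] + dx[i] * dx[j] / 2.0
        for i in range(d):
            s1[i] += dx[i]
    return s1, s2
```

For the path (0,0) -> (1,1) -> (2,0) this gives S^1 = 2, S^2 = 0, S^{12} = -1 and S^{21} = 1, which satisfies the shuffle identity S^{12} + S^{21} = S^1 S^2.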
We will use this function to calculate the signature of a path (Time_lead, Price_lead, Price_lag).
Now, we are ready to calculate normal features and signature features. For simplicity, we will only use the close price of the Bitcoin. The following code gets the normal window features with a time column, gets the prediction target (the mean of the future prices) and calculates the signature features. For the signature features, we use the ESig package to find the level 2 truncated signature of the path (time_lead, price_lead, price_lag).
We can check the resulted features.
y.head()
pd.DataFrame(X_window).head()
pd.DataFrame(X_sig).head()
We split X_sig, y into train and test set for the model and we will predict 10 future prices.
Firstly, we will train a Lasso linear regression on the signature features. We use time-series-split cross-validation and GridSearchCV to tune the hyperparameter alpha.
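The training code is embedded as a gist in the original post; the following is a sketch of the setup it describes, using synthetic stand-in data (the X_sig and y from the article are not reproduced, and the alpha grid is an assumption):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

X = np.arange(40, dtype=float).reshape(-1, 2)  # stand-in for X_sig: 20 rows x 2 features
y = 0.5 * X[:, 0] + 1.0                        # stand-in target

tscv = TimeSeriesSplit(n_splits=5)             # folds never train on the future
grid = GridSearchCV(
    Lasso(max_iter=10000),
    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},
    cv=tscv,
    scoring="neg_mean_absolute_error",
)
grid.fit(X, y)
best_alpha = grid.best_params_["alpha"]
```

TimeSeriesSplit matters here: an ordinary shuffled k-fold would let the model peek at future prices during validation.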
Here is the error
and the indispensable part of a time series prediction is to visualise the result.
PlotResult(y_train, y_test, y_train_predict, y_test_predict, test_len, 'Lasso + Signature features')
The model can predict that there will be a boom in the future, with an error of around 15%. Note that here we only use the level 2 truncated signature features. In theory, if we increase the level of truncation, we expect the model to be more accurate as we have more information.
Now we will try our XGBoost model, again using time-series-split cross-validation and GridSearchCV to tune the hyperparameters.
We observe that the mean absolute error for the train set is quite low which is a sign of overfitting and the value for the test set is about the same as the Signature method. Let’s visualise the graph
PlotResult(y_train, y_test, y_train_predict, y_test_predict, test_len, 'XGBoost')
We can see that the model can’t predict the boom period at all and it guessed that the mean future prices are stable. One possible reason for this to happen is that the model hasn’t experienced such a sharp increase before.
We will use a period from Jan 2018 to Dec 2018 to test whether the model can predict a crash. We test with a duration where the price was quite stable.
# select duration
initial_date = '2018-01-01'
finish_date = '2018-11-01'
The following is the result of the signature features.
We got a ConvergenceWarning saying that the model did not converge, and the resulting graph is quite weird. Note that here we use only the level 2 truncated signature, which has only 12 features, so it may not capture important information about the path.
We experiment with the truncated signature level 3.
The model did worse in terms of error, but we can see from the graph that it produces a more realistic path than the level 2 truncated signature features.
On the other hand here is the result of the XGBoost algorithm.
The model did extremely well and made a prediction with an error of up to 0.5%.
Now, let's challenge whether these models can predict the crash in Dec 2018.
We used level 3 truncated signature features and here is the result.
compared with the XGBoost model
It’s clear that none of the models can predict the crash in advance. We may reach the limit of using only historical prices here. In the real world, there are many factors that affect the cryptocurrency price such as news, the attractiveness of other financial products. However, technical analysis like this could give us a quick picture of the data.
We have learnt how to use signature features to model a time series and compared them with the XGBoost algorithm, a cutting-edge algorithm at the moment. XGBoost did extremely well in a period of stable prices, while it could not give us much information when there is a boom or a bust. On the other hand, the Signature method did okay in a stable price period but can potentially give some information about a significant change. Nonetheless, the dataset the author picked can be biased because we know beforehand when the boom and bust periods are. To conclude, the method is quite new and still needs to prove itself, and I encourage readers to have a play with it.
Note from Towards Data Science's editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author's contribution. You should not rely on an author's works without seeking professional advice. See our Reader Terms for details.
Data Processing In Rust With DataFusion (Arrow) | by Chengzhi Zhao | Towards Data Science

Rust is the most beloved language; according to StackOverflow, it has been at the top of the list for four years! Data processing is getting simpler and faster with frameworks like Apache Spark. However, the field of data processing is competitive. DataFusion (part of Arrow now) is one of the initial attempts at bringing data processing to Rust. If you are interested in learning some aspects of data processing in Rust with DataFusion, I will show some code examples in Rust with DataFusion, as well as compare the query performance between DataFusion and Pandas.
Update: I initially wrote this article in DataFusion version 0.15.0. With release 1.0.0 of DataFusion and Arrow, I have added both the code and benchmark so we can see the improvement.
Andy Grove created DataFusion, and he had some great articles about building modern distributed computing, for example, How To Build A Modern Distributed Compute Platform. The DataFusion project is not for the production environment yet, as Andy mentioned,
“This project is a great way to learn about building a query engine, but this is quite early and not usable for any real-world work just yet.”
The project was donated to the Apache Arrow project in February 2019, and more people start to contribute to the Arrow version of DataFusion.
DataFusion is an in-memory query engine that uses Apache Arrow as the memory model. It supports executing SQL queries against CSV and Parquet files as well as querying directly against in-memory data.
The project description may not deliver too much excitement here, but since the entire project is done in Rust, it provides you ideas about writing your analytics SQL in Rust. Additionally, you can bring DataFusion as a library to your Cargo file for your Rust project easily.
To test run some code with DataFusion, first, we need to create a new Rust package
cargo new datafusion_test --bin
Then bring DataFusion as a dependency in Cargo.toml file
[dependencies]
arrow = "1.0.0"
datafusion = "1.0.0"
There are many datasets available online, Kaggle is one of the places that I usually go to and explore the new dataset. We are going to use The Movies Dataset, and the complete version of this dataset is about 676.68 MB. The movie dataset has the following schema
userId: int
movieId: int
rating: double
timestamp: long
To work with the CSV file format, DataFusion used to require us to provide the schema; since version 1.0.0 introduced schema inference, this is no longer needed.
let schema = Arc::new(Schema::new(vec![
    Field::new("userId", DataType::UInt32, false),
    Field::new("movieId", DataType::UInt32, false),
    Field::new("rating", DataType::Float64, false),
    Field::new("timestamp", DataType::Int16, false),
]));
With the new features introduced in version 1.0.0, the DataFusion API brings many exciting enhancements. You'll see improvements like schema inference, easier result printing and more. The code is much more concise and easier to read.
It won't be a fair comparison since DataFusion is entirely new and lacks many optimizations. But it's still interesting to see the current state of DataFusion and compare it with a mature data processing package like pandas.
Disclaimer: I am running this on my personal Mac 13 (2 GHz Quad-Core Intel Core i5) to perform the benchmark, so the result could be biased. Since the initial benchmark was run in debug mode, note that the performance is significantly different in release mode.
Query: "SELECT userId, movieId, rating FROM ratings LIMIT 10"
DataFusion: 0.7s
Pandas: 6.15s
DataFusion is very fast at fetching ten rows here. On the other hand, pandas is about 6s slower.
Query: "SELECT userId, AVG(rating) FROM ratings GROUP BY userId"
DataFusion: 18.57s
Pandas: 6.24s
As Pandas uses NumPy under the hood, it is not surprising to see good performance on the Pandas side. On DataFusion side, though it is slower than Pandas, the performance is also reasonable to perform those types of aggregations.
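For reference, the pandas side of this benchmark presumably boils down to a groupby; a sketch on a tiny stand-in frame, not the author's benchmark script:

```python
import pandas as pd

ratings = pd.DataFrame({            # tiny stand-in for the 676 MB ratings file
    "userId": [1, 1, 2, 2, 2],
    "movieId": [10, 11, 10, 12, 13],
    "rating": [4.0, 3.0, 5.0, 2.0, 5.0],
})

# pandas equivalent of: SELECT userId, AVG(rating) FROM ratings GROUP BY userId
avg_by_user = ratings.groupby("userId")["rating"].mean()
```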
Query: "SELECT MAX(rating) FROM ratings"
DataFusion: 15.28s
Pandas: 5.97s
As the previous aggregation query was slow, this query is also slow on the DataFusion side.
As we discussed first, DataFusion is an exciting attempt in Rust to get into the competitive data compute market. As the DataFusion project is still an early stage and requires more contributions, it is not surprising to see some slow performance on certain types of queries. Also, as described in the project README, some key features are missing, so you have to be careful about the SQL command you write and double-check to see if it is currently supported or not.
Overall, DataFusion is an attractive beginning for Rust in the data world, and especially since it is now part of Apache Arrow, DataFusion can easily leverage features from the Arrow ecosystem. I'd expect to see considerable performance improvements and more supported SQL features in future versions of DataFusion.
How and why to Standardize your data: A python tutorial | Towards Data Science

Hi there.
This is my first Medium post. I am an electrical & computer engineer currently finishing my PhD studies in the biomedical engineering and computational neuroscience field. I have been working on machine learning problems for the past 4 years. A very common question that I see all around the web is how, and why, to standardize the data before fitting a machine learning model.
How does scikit-learn's StandardScaler work?
The first question that comes to one’s mind is:
Why to standardize in the first place?
Well, the idea is simple. Variables that are measured at different scales do not contribute equally to the model fitting and learned function, and might end up creating a bias. Thus, to deal with this potential problem, feature-wise standardization (μ=0, σ=1) is usually applied prior to model fitting.
To do that using scikit-learn, we first need to construct an input array X containing the features and samples, with X.shape being [number_of_samples, number_of_features].
Keep in mind that all scikit-learn machine learning (ML) functions expect as input an numpy array X with that shape i.e. the rows are the samples and the columns are the features/variables. Having said that, let’s assume that we have a matrix X where each row/line is a sample/observation and each column is a variable/feature.
Note: Tree-based models are usually not dependent on scaling, but non-tree models such as SVM, LDA, etc. are often hugely dependent on it.
The main idea is to normalize/standardize i.e. μ = 0 and σ = 1 your features/variables/columns of X, individually, before applying any machine learning model. Thus, StandardScaler() will normalize the features i.e. each column of X, INDIVIDUALLY so that each column/feature/variable will have μ = 0 and σ = 1.
from sklearn.preprocessing import StandardScaler
import numpy as np

# 4 samples/observations and 2 variables/features
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])

# the scaler object (model)
scaler = StandardScaler()

# fit and transform the data
scaled_data = scaler.fit_transform(X)

print(X)
# [[0 0]
#  [1 0]
#  [0 1]
#  [1 1]]

print(scaled_data)
# [[-1. -1.]
#  [ 1. -1.]
#  [-1.  1.]
#  [ 1.  1.]]
Verify that the mean of each feature (column) is 0:
scaled_data.mean(axis=0)
# array([0., 0.])
Verify that the std of each feature (column) is 1:
scaled_data.std(axis=0)
# array([1., 1.])
StandardScaler removes the mean and scales each feature/variable to unit variance. This operation is performed feature-wise in an independent way.
StandardScaler can be influenced by outliers (if they exist in the dataset) since it involves the estimation of the empirical mean and standard deviation of each feature.
Manual way (not recommended): Visually inspect the data and remove outliers using outlier removal statistical methods such as the Interquartile Range (IQR) threshold method.
Recommended way: Use the RobustScaler that will just scale the features but in this case using statistics that are robust to outliers. This scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).
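To see what RobustScaler is doing conceptually, here is a pure-Python sketch of median/IQR scaling. Note that quantile conventions differ slightly between Python's statistics module and scikit-learn, so the exact numbers will not match RobustScaler's output:

```python
import statistics

def robust_scale(values):
    # centre on the median and scale by the IQR, the idea behind RobustScaler
    med = statistics.median(values)
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return [(v - med) / iqr for v in values]
```

On [1, 2, 3, 4, 100] the median maps to 0 and the outlier 100 is pulled in to roughly 1.9 instead of dominating the scale, which is exactly why this scaler is the recommended choice when outliers are present.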
That’s all for today! Hope you liked this first post! Next story coming next week. Stay tuned & safe.
If you liked and found this article useful, follow me and applaud my story to support me!
[1] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
[2] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html
LinkedIn: https://www.linkedin.com/in/serafeim-loukas/
ResearchGate: https://www.researchgate.net/profile/Serafeim_Loukas
EPFL profile: https://people.epfl.ch/serafeim.loukas
Stack Overflow: https://stackoverflow.com/users/5025009/seralouk
File Upload Vulnerability of Web Applications - GeeksforGeeks | 26 Nov, 2021
In this article, we are going to learn about one more attack vector in detail, which is very important to understand in this world full of Web and Mobile Apps.
In almost every web application there is functionality for uploading files. These files may be text, video, image, etc. However, many web applications do not have proper security checks on uploaded files, and this results in a vulnerability called File Upload Vulnerability. This one simple vulnerability can lead to server-side scripting, arbitrary code execution, cross-site scripting and CSRF attacks.
Even though some applications have proper checks on uploaded files, these security checks can often be bypassed to exploit this vulnerability. The bypass techniques are as follows:
1. Case sensitive extension bypass: A web/mobile application developer may add a blacklist of certain extensions which are considered harmful. But sometimes developers forget to check whether their extension check is case sensitive, and anyone can bypass it by writing the file extension as a combination of lowercase and uppercase characters. As a developer, it is good practice to make extension verification case insensitive. Example: .PDf, .XmL, .Sh, .php.
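A case-insensitive allowlist check closes this bypass. Here is a minimal sketch, in Python for illustration (the helper is assumed, not from the article); the same idea applies in PHP with strtolower() and pathinfo():

```python
import os

ALLOWED = {".jpg", ".jpeg", ".png", ".gif"}  # an allowlist beats a blacklist

def is_allowed(filename):
    # normalise the extension to lower case so mixed-case names such as
    # "shell.PhP" cannot slip past the check
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED
```

Note that this only validates the extension; content verification and safe storage of the uploaded file are still required.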
2. Image content verification bypass: As a security measure, developers often check the content of an uploaded image to confirm it matches a valid file type. In PHP there are many functions to validate a file; one of them is getimagesize(), which reads the file and returns its size, or returns an error for an invalid file. There are techniques which can bypass this protection. Consider the following code, which uploads a file.
PHP
<?php
if (isset($_FILES['image'])) {
    $filename = $_FILES['image']['name'];
    $tmp = $_FILES['image']['tmp_name'];

    if (!getimagesize($_FILES['image']['tmp_name'])) {
        echo "Invalid Image File";
        exit(0);
    }

    move_uploaded_file($tmp, "images/" . $filename);
    echo "SUCCESS";
    exit(0);
}
?>
An attacker can bypass such checks by embedding PHP code inside the comment section of a JPG file and then uploading the file with a .php extension; this easily bypasses the checks in the above code. There are more techniques available for file verification bypass; as a developer, always take care of all these bypasses when implementing a file upload feature.
This is a sub-attack under the File Upload Vulnerability which mainly exploits the image parsing step. To perform this attack, a malicious user takes a valid JPG or JPEG file with its original dimensions, then changes the dimensions in the image header to a very large scale, like 1000000 × 1000000, using an automated tool. When such a file is uploaded, the image parser allocates a very large amount of memory for it, resulting in a server crash or out-of-memory situation.
The PNG file format contains a section, called zTXT, that allows zlib-compressed data to be added to a PNG file. The technique here is that a large amount of repeated data, such as a series of zeros, weighing over 70MB, is DEFLATE-compressed through zlib, resulting in compressed data of a few KBs. This is then added to the zTXT section of any regular PNG file. Sending repeated requests of this kind causes memory exhaustion similar to what we've seen in the previous examples. This issue affected the Paperclip gem as well.
This technique is similar to the previous one: a malicious GIF is used to allocate a large amount of memory, eventually exhausting the server's memory. A GIF file contains a set of animations in the form of image frames. Instead of flipping the pixels, we add a very large number of GIF frames, say 45,000-90,000. When parsing each frame, memory is allocated and eventually chokes up the server.
Always check the extension of the file, taking case sensitivity into account.
Filter the content of the file before it is uploaded to the server.
Don't give execute permission to uploaded files.
Always store uploaded files in a non-public directory.
How to make a phone call from your Android App? - GeeksforGeeks | 17 Jan, 2020
In this article, you will make a basic android application which can be used to call some number through your android application.
You can do so with the help of an Intent with action ACTION_CALL. Basically, Intent is a simple message object that is used to communicate between Android components such as activities, content providers, broadcast receivers and services; here it is used to make a phone call. This application basically contains one activity with an edit text to write the phone number you want to call and a button to call that number.
Step 1. Permission code in Android-Manifest.xml file

You need to take permission from the user for the phone call, and for that the CALL_PHONE permission is added in the manifest file. Here is the code of the manifest file:
Android-Manifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.geeksforgeeks.phonecall"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="8"
        android:targetSdkVersion="16" />

    <!-- permission for phone call -->
    <uses-permission android:name="android.permission.CALL_PHONE" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/gfg"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.geeksforgeeks.phonecall.MainActivity"
            android:label="@string/gfg" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
Step 2. activity_main.xml
activity_main.xml contains a Relative Layout holding an edit text, where the user types the phone number to call, and a button that starts the intent to place the call:
activity_main.xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Relative Layout -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!-- Edit text for phone number -->
    <EditText
        android:id="@+id/editText"
        android:layout_marginTop="30dp"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true" />

    <!-- Button to make call -->
    <Button
        android:id="@+id/button"
        android:layout_marginTop="115dp"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Make Call!!"
        android:padding="5dp"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true" />

</RelativeLayout>
Step 3. MainActivity.java
In the main activity, an Intent object is created with its action attribute set to ACTION_CALL, which hands control to the system's call manager. The phone number entered by the user is parsed through Uri and passed as the data of the Intent object, which is then used to call that number. A setOnClickListener is attached to the button so that clicking it fires the ACTION_CALL intent and places the call. (Note: on Android 6.0 and above, the CALL_PHONE permission must also be requested at runtime, not just declared in the manifest.) Here is the complete code:
MainActivity.java
package com.geeksforgeeks.phonecall;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.content.Intent;
import android.widget.EditText;
import android.view.View;
import android.view.View.OnClickListener;
import android.net.Uri;
import android.widget.Button;

public class MainActivity extends AppCompatActivity {

    // define objects for edit text and button
    EditText edittext;
    Button button;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Getting instance of edittext and button
        button = findViewById(R.id.button);
        edittext = findViewById(R.id.editText);

        // Attach set on click listener to the button
        // for initiating intent
        button.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View arg) {

                // getting phone number from edit text
                // and changing it to String
                String phone_number = edittext.getText().toString();

                // Getting instance of Intent
                // with action as ACTION_CALL
                Intent phone_intent = new Intent(Intent.ACTION_CALL);

                // Set data of Intent through Uri
                // by parsing phone number
                phone_intent.setData(Uri.parse("tel:" + phone_number));

                // start Intent
                startActivity(phone_intent);
            }
        });
    }
}
Output:
Python - List XOR - GeeksforGeeks | 29 Dec, 2019
Sometimes, while programming, we have a problem in which we might need to perform certain bitwise operations among list elements. This is an essential utility as we come across bitwise operations many times. Let’s discuss certain ways in which XOR can be performed.
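As a quick illustration (an addition, not part of the original article), folding XOR over a list proceeds one pair at a time, since `^` is associative; the sketch below traces the accumulator through the article's sample list:

```python
# Walk-through: cumulative XOR over the sample list.
# Computes (((((4 ^ 6) ^ 2) ^ 3) ^ 8) ^ 9) step by step.
values = [4, 6, 2, 3, 8, 9]

acc = 0  # 0 is the identity for XOR: 0 ^ x == x
for v in values:
    acc = acc ^ v
    print("after %d: acc = %d" % (v, acc))

print("final:", acc)  # final: 2
```

Starting from the XOR identity 0 makes the loop correct even for an empty list.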
Method #1 : Using reduce() + lambda + “^” operator
The above functions can be combined to perform this task. We can employ reduce() to accumulate the result of the XOR logic specified by the lambda function. In Python 2, reduce() is a built-in; in Python 3 it must be imported from functools.
# Python code to demonstrate working of
# List XOR
# Using reduce() + lambda + "^" operator
from functools import reduce  # needed in Python 3; built-in in Python 2

# initializing list
test_list = [4, 6, 2, 3, 8, 9]

# printing original list
print("The original list is : " + str(test_list))

# List XOR
# Using reduce() + lambda + "^" operator
res = reduce(lambda x, y: x ^ y, test_list)

# printing result
print("The Bitwise XOR of list elements are : " + str(res))
The original list is : [4, 6, 2, 3, 8, 9]
The Bitwise XOR of list elements are : 2
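A useful side note (an addition, not from the article): because XOR is commutative and associative, the folded result does not depend on element order, and XOR-ing a value in twice cancels it out. That second property is why this fold is often used to find the lone unpaired element in a list:

```python
from functools import reduce

test_list = [4, 6, 2, 3, 8, 9]

# Order does not matter: folding the reversed list gives the same result.
forward = reduce(lambda x, y: x ^ y, test_list)
backward = reduce(lambda x, y: x ^ y, test_list[::-1])
assert forward == backward

# Classic application: every value below appears twice except one,
# so the duplicates cancel and only the lone value survives.
pairs = [7, 1, 5, 1, 7]
lone = reduce(lambda x, y: x ^ y, pairs)
print(lone)  # 5
```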
Method #2 : Using reduce() + operator.ixor
This task can also be performed using this method. Here the job done by the lambda function in the above method is performed by the ixor function for the cumulative XOR operation. Again, in Python 3 reduce() must be imported from functools.
# Python code to demonstrate working of
# List XOR
# Using reduce() + operator.ixor
from functools import reduce  # needed in Python 3; built-in in Python 2
from operator import ixor

# initializing list
test_list = [4, 6, 2, 3, 8, 9]

# printing original list
print("The original list is : " + str(test_list))

# List XOR
# Using reduce() + operator.ixor
res = reduce(ixor, test_list)

# printing result
print("The Bitwise XOR of list elements are : " + str(res))
The original list is : [4, 6, 2, 3, 8, 9]
The Bitwise XOR of list elements are : 2
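For completeness — this variant is an addition, not one of the article's methods — the same cumulative XOR can be written with a plain loop, which works unchanged in both Python 2 and Python 3 and needs no imports:

```python
# Plain-loop XOR fold; no reduce() or operator module needed.
test_list = [4, 6, 2, 3, 8, 9]

res = 0  # 0 is the XOR identity
for num in test_list:
    res ^= num

print("The Bitwise XOR of list elements are : " + str(res))
```

Output:

The Bitwise XOR of list elements are : 2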