Python - Get sum of tuples having same first value
Tuples are Python collections which are ordered but unchangeable. Given a number of tuples where some first elements are the same, we may need to add up the second elements of those tuples whose first elements are equal.

In this method we first take a list of tuples. We then convert it to a dictionary so that the first element of each tuple becomes a key. Next we loop over the list, summing the value for each key of the dictionary. Finally we use the map function to get back a list of tuples with the summed-up values.

List = [(3, 19), (7, 31), (7, 50), (1, 25.5), (1, 12)]

# Converting it to a dictionary
tup = {i: 0 for i, v in List}
for key, value in List:
    tup[key] = tup[key] + value

# using map
result = list(map(tuple, tup.items()))
print(result)

Running the above code gives us the following result:

[(3, 19), (7, 81), (1, 37.5)]

Here we take a similar approach as above but use the defaultdict class from the collections module. Instead of using the map function, we access the dictionary items and convert them to a list. (Note: the variable is renamed to d here to avoid shadowing the built-in dict.)

from collections import defaultdict

# list of tuples
List = [(3, 19), (7, 31), (7, 50), (1, 25.5), (1, 12)]

d = defaultdict(int)
for key, value in List:
    d[key] = d[key] + value

# Printing output
print(list(d.items()))

Running the above code gives us the following result:

[(3, 19), (7, 81), (1, 37.5)]
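A third option, sketched here as an addition to the two methods above, is itertools.groupby. It works the same way, with the caveat that the list must first be sorted by the first element, so the keys come out in sorted order rather than first-seen order:

```python
from itertools import groupby
from operator import itemgetter

pairs = [(3, 19), (7, 31), (7, 50), (1, 25.5), (1, 12)]

# groupby only groups consecutive items, so sort by the key first
pairs.sort(key=itemgetter(0))
result = [(k, sum(v for _, v in grp))
          for k, grp in groupby(pairs, key=itemgetter(0))]
print(result)  # [(1, 37.5), (3, 19), (7, 81)]
```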
All you need to know about RNNs. A beginner’s guide into the... | by Suleka Helmini | Towards Data Science
Researchers came up with neural networks to model the behaviour of the human brain. But if you actually think about it, normal neural networks don't do much justice to that original intention. The reason is that feedforward vanilla neural networks cannot remember the things they learn. Each iteration you train the network, it starts fresh: it doesn't remember what it saw in the previous iteration when processing the current set of data. This is a big disadvantage when identifying correlations and data patterns. This is where Recurrent Neural Networks (RNNs) come into the picture. RNNs have a very unique architecture that lets them model memory units (hidden state), which enable them to persist data and thus model short-term dependencies. For this reason, RNNs are extensively used in time-series forecasting to identify data correlations and patterns. Even though RNNs have been around for some time, everyone seems to have their own confusing way of explaining the architecture, and no one really explains what happens behind the scenes. So let's bridge the gap, shall we? This post aims to explain the RNN architecture at a more granular level by walking through its functionality. If you have blindly built simple RNN models in TensorFlow before, and have found it hard to understand what the inner workings of an RNN look like, then this article is just for you. We will essentially explain what happens behind the curtains when these two lines of TensorFlow code, responsible for declaring the RNN and initiating its execution, are run:

cell = tf.contrib.rnn.BasicRNNCell(rnn_size, activation=tf.nn.tanh)
val1, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

If you ever searched for architectural information about RNNs, the architecture diagrams you find can be rather confusing if you look at them as a beginner.
I will use an example-based approach to explain the RNN architecture. Before we get down to business, an important thing to note is that the RNN input needs to have 3 dimensions: typically batch size, number of steps and number of features. The number of steps depicts the number of time steps/segments you will be feeding in one line of input of a batch of data that will be fed into the RNN. The RNN unit in TensorFlow is called the "RNN cell". This name itself has created a lot of confusion among people. There are many questions on Stack Overflow asking whether "RNN cell" refers to one single cell or the whole layer. Well, it's more like the whole layer. The reason for this is that the connections in RNNs are recurrent, thus following a "feeding to itself" approach. Basically, the RNN layer is comprised of a single rolled RNN cell that unrolls according to the "number of steps" value (number of time steps/segments) you provide. As we mentioned earlier, the main speciality of RNNs is the ability to model short-term dependencies. This is due to the hidden state in the RNN. It retains information from one time step to another, flowing through the unrolled RNN units. Each unrolled RNN unit has a hidden state. The current time step's hidden state is calculated using information from the previous time step's hidden state and the current input. This helps retain information on what the model saw in the previous time step when processing the current time step's information. Also, note that all the connections in an RNN have weights and biases. The biases can be optional in some architectures. This process will be explained further in later parts of the article. Since you now have a basic idea, let's break down the execution process with an example. Say your batch size is 6, your RNN size is 7, the number of time steps/segments you include in one input line is 5, and the number of features in one time step is 3.
If this is the case, your input tensor (matrix) shape for one batch would look something like this: Tensor shape of one batch = (6,5,3). Note: the data segmentation method used here is called the sliding window approach and is mostly used when doing time series analysis. You don't have to worry about the data pre-processing here. When first feeding the data into the RNN, it has a rolled architecture. But when the RNN starts to process the data, it unrolls and produces outputs as shown below. When you feed a batch of data into the RNN cell, it starts processing from the 1st line of input. The RNN cell then sequentially processes all the input lines in the batch of data that was fed, and gives one output at the end which includes the outputs of all the input lines. In order to process a line of input, the RNN cell unrolls "number of steps" times. You can see this in the above figure (Fig 03). Since we defined "number of steps" as 5, the RNN cell has been unrolled 5 times. The execution process is as follows: First, the initial hidden state (S), which is typically a vector of zeros, is multiplied by the hidden state weight (h), and then the hidden state bias is added to the result. In the meantime, the input at time step t ([1,2,3]) is multiplied by the input weight (i), and the input bias is added to that result. We obtain the hidden state at time step t by sending the sum of the above two results through an activation function, typically tanh (f). Then, to obtain the output at time step t, the hidden state (S) at time step t is multiplied by the output weight (O) at time step t, and then the output bias is added to the result. When calculating the hidden state at time step t+1, the hidden state (S) at time step t is multiplied by the hidden state weight (h) and the hidden state bias is added to the result.
Then, as mentioned before, the input at time step t+1 ([4,5,6]) gets multiplied by the input weight (i), and the input bias is added to the result. These two results are then sent through an activation function, typically tanh (f). Then, to obtain the output at time step t+1, the hidden state (S) at time step t+1 is multiplied by the output weight (O) at time step t+1, and then the output bias is added to the result. As you can see, when producing the output of time step t+1, it uses not only the input data of time step t+1 but also information about the data at time step t, via the hidden state at time step t+1. This process repeats for all the time steps. After processing all time steps in one line of input in the batch, we have 5 outputs of shape (1,7). When all these outputs are concatenated together, the shape becomes (1,5,7). When all the input lines of the batch are done processing, we get 6 outputs of size (1,5,7). Thus, the final output of the whole batch is (6,5,7). Note: all the hidden state weights, output weights and input weights have the same value throughout all the connections in an RNN. Coming back to the 2 lines of code we stated earlier:

cell = tf.contrib.rnn.BasicRNNCell(rnn_size, activation=tf.nn.tanh)
val1, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

The 1st line basically defines the activation function and the RNN size of the RNN cell that we want to create. The 2nd line executes the processing of the input data by feeding it into the RNN. The processing happens according to what we discussed earlier. Finally, the output of that batch (a value with shape (6,5,7)) is assigned to the "val1" variable, and the final value of the hidden state is assigned to the "state" variable. We have now come to the end of the article. In this article, we discussed the data manipulation and representation process inside an RNN in TensorFlow.
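The shape bookkeeping above can be sketched in plain Python. This is a toy forward pass with hypothetical zero-valued weights and no biases, purely to trace the shapes; it is not the TensorFlow implementation:

```python
import math

batch_size, num_steps, num_features, rnn_size = 6, 5, 3, 7

# Hypothetical parameters -- all zeros just to trace the shapes.
Wi = [[0.0] * rnn_size for _ in range(num_features)]  # input weights (i)
Wh = [[0.0] * rnn_size for _ in range(rnn_size)]      # hidden state weights (h)
Wo = [[0.0] * rnn_size for _ in range(rnn_size)]      # output weights (O)

def matvec(W, x):
    # vector x (length m) times matrix W (m x n) -> vector of length n
    return [sum(x[j] * W[j][k] for j in range(len(x))) for k in range(len(W[0]))]

def rnn_forward(batch):
    outputs = []
    for line in batch:               # one input line: (num_steps, num_features)
        state = [0.0] * rnn_size     # initial hidden state S (vector of zeros)
        line_out = []
        for x_t in line:             # unroll "number of steps" times
            pre = [a + b for a, b in zip(matvec(Wi, x_t), matvec(Wh, state))]
            state = [math.tanh(v) for v in pre]  # hidden state at time t
            line_out.append(matvec(Wo, state))   # output at time t
        outputs.append(line_out)
    return outputs                   # shape: (batch, num_steps, rnn_size)

batch = [[[1.0, 2.0, 3.0]] * num_steps for _ in range(batch_size)]
out = rnn_forward(batch)
print(len(out), len(out[0]), len(out[0][0]))  # 6 5 7
```

The final print confirms the (6,5,7) output shape discussed in the article.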
With all the provided information, I hope that you now have a good understanding of how RNNs work in TensorFlow. For general information about the RNN architecture, you can refer to this article. For a deep dive into the RNN architecture, refer to this article or the Coursera course on RNNs.
MySQL error 1452 - Cannot add or update a child row: a foreign key constraint fails?
This error occurs when we have a foreign key constraint between two tables and insert into the child table a value that has no match in the parent table. Let us see an example.

Creating the child table:

mysql> create table ChildDemo
    -> (
    -> id int,
    -> FKPK int
    -> );
Query OK, 0 rows affected (0.86 sec)

Creating the second table:

mysql> create table ParentDemo
    -> (
    -> FKPK int,
    -> Name varchar(100),
    -> primary key(FKPK)
    -> );
Query OK, 0 rows affected (0.57 sec)

Adding the foreign key constraint:

mysql> alter table ChildDemo add constraint ConstChild foreign key(FKPK) references ParentDemo(FKPK);
Query OK, 0 rows affected (1.97 sec)
Records: 0 Duplicates: 0 Warnings: 0

After creating the foreign key constraint, whenever we insert into the child table a FKPK value that does not exist in the parent table, we get the above error:

mysql> insert into ChildDemo values(1,3);
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`business`.`childdemo`, CONSTRAINT `ConstChild` FOREIGN KEY (`FKPK`) REFERENCES `parentdemo` (`fkpk`))

The error comes when you try to add a row for which there is no matching row in the other table. As stated in the documentation: "Foreign key relationships involve a parent table that holds the central data values, and a child table with identical values pointing back to its parent. The FOREIGN KEY clause is specified in the child table. It will reject any INSERT or UPDATE operation that attempts to create a foreign key value in a child table if there is no matching candidate key value in the parent table."
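The same failure mode can be reproduced outside MySQL. Here is a sketch using Python's built-in sqlite3 module, reusing the table names from the example above (SQLite's behaviour is assumed analogous, not identical, to MySQL's, and it enforces foreign keys only when the pragma is enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE ParentDemo (FKPK INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("CREATE TABLE ChildDemo (id INTEGER, "
             "FKPK INTEGER REFERENCES ParentDemo(FKPK))")

try:
    conn.execute("INSERT INTO ChildDemo VALUES (1, 3)")  # no parent row with FKPK=3
except sqlite3.IntegrityError as e:
    print("constraint fails:", e)

# Inserting the matching parent row first makes the child insert succeed.
conn.execute("INSERT INTO ParentDemo VALUES (3, 'x')")
conn.execute("INSERT INTO ChildDemo VALUES (1, 3)")
```

The fix is the same in MySQL: insert (or correct) the parent row before inserting the child row that references it.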
Docker for absolute beginners — what is Docker and how to use it (+ examples) | by Mike Huls | Towards Data Science
Docker is a fantastic tool that makes our lives much easier, offering us standardization, productivity, efficiency, maintainability and compatibility of our code. It allows us to continuously and rapidly deploy and test our code, and it is platform-independent. If you are unsure of what Docker is, what to use it for or how to use it: this is the article for you! I'll try to give an explanation that's as clear as possible for people who are new to Docker. At the end of this article you:

See a lot of advantages of using Docker
Understand what Docker is and how it works
Understand why to use Docker: what it offers you and how it can make your life easier
Understand when to use Docker
Are able to use Docker to pull an image and spin up your first container

If you feel that any part of this article needs a little more explanation, if you have questions or if you think this article can be improved in any other way: please comment and let me know! We'll start with the advantages Docker brings you; why you should use it in the first place. I've chopped this up into several parts, some of which may overlap a little. This is because I'm trying to explain as simply as I can in order to make Docker accessible to as many people as possible. After part 1 you'll be totally convinced that you need Docker in your life; then we're getting our feet in the mud. With a simple example application we'll perform all the basic steps that'll get a Docker container up and running. Let's go! Below are listed the main reasons to use Docker. I'll try to explain each point as clearly as possible. Data centers are full of servers. These are powerful computers that you can access over the internet and that can be used for all kinds of things. Data centers offer clients the option to rent part of a server; this is called a virtual machine: not the entire computer but a part of it that acts like a full machine. This is called virtualization because your main machine (the host) acts like it is, e.g.,
3 separate, independent machines (the guests), like in the image below. You'll see that the server hosts 3 virtual machines: one for our API, one for a webserver and one for a database. It also has some infrastructure and some services to control all the virtual machines. The most important takeaway here is that each virtual machine has its own guest OS. This is totally redundant and takes up a lot of memory. When you use Docker you don't need virtual machines. You package your applications in a container which runs on a machine. This can be a server but also your own laptop. Notice that we save a lot of memory; our apps share an OS (the kernel at least), making everything much more lightweight. Check out this article for a great, practical example of how to containerize a Postgres database. The Dockerfile allows us to ship not only our application code but also our environment. We can push not only the source code of the app to git but also include the Dockerfile. When someone else pulls our repository they can build the source code in the Docker container we can create from the Dockerfile. As described under portability, we can keep track of changes in our Dockerfile. This way we can experiment with newer versions of our software. We can create a branch in which we experiment with the newest version of Python, for example. When your code runs in a container it cannot affect other pieces of code. It is completely isolated. You might recognize this problem if you've ever had unexpected errors in one of your scripts after updating a globally installed library. Applications rarely consist of one part: most of the time multiple containers have to work together to provide all functionality. An example: a website, an API and a database have to be connected together. This is what Docker Compose allows us to do. We can create a file that defines how containers are connected with one another.
We can use this file to instantiate all of the Dockerfiles for all of our containers at once! Let's get our hands dirty and code something already! You can see Docker as a way to pack your code in a nice little container that contains everything it needs to run; it containerizes your code. The benefits are numerous: containers are scalable, cost-effective and isolated from each other. This part focuses on the Docker elements:

Dockerfile: Specifications for how the image should be built
Image: Like a CD: it contains all code but it doesn't do anything yet.
Container: A running image. Think of this as the CD that you've just put in the CD player. It's executing the image.

All of these will be explained below and examples will be given. We'll create a set of instructions that tells our machine how to build our image. In our case we want to create a simple website in Flask, a Python web framework. Check out the Dockerfile below. Let's go through it line by line.

Line 1. This tells Docker to install an OS (Debian Slim Buster) with Python 3.8 installed
Line 3. Creates a folder in the Docker container called 'app'. In here all of our code will be housed
Line 5. Copies the requirements.txt file on our machine to the WORKDIR in the Docker container
Line 6. This downloads and installs all the Python dependencies we need for our app. In our case this will install Flask
Line 8. Copy everything from our current directory to the WORKDIR. This moves all of our source code
Line 10. This starts our app by calling the installed Flask module and running our app on localhost

Our Dockerfile is defined; let's use it to create an image. Run:

docker build --tag python-docker .

This command will take the Dockerfile and build it into an image. We'll also give it a tag called python-docker. When the image is built we can execute docker images to find it. We've just made an image. Think of this as a CD-ROM of a game; it contains all the assets, graphics and code to make it work.
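The Dockerfile itself did not survive extraction; reconstructed from the line-by-line walkthrough above, it presumably looked something like this (the exact base image tag and flask run flags are assumptions):

```dockerfile
# Line 1: Python 3.8 on Debian Slim Buster
FROM python:3.8-slim-buster

# Line 3: create /app in the container and work from there
WORKDIR /app

# Line 5: copy the dependency list into the WORKDIR
COPY requirements.txt requirements.txt

# Line 6: install Flask and any other dependencies
RUN pip3 install -r requirements.txt

# Line 8: copy the rest of the source code
COPY . .

# Line 10: start the Flask app
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0"]
```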
We can spin up the image in a container now. A container is a running instance of our image. If the image is like a CD-ROM, now we put that CD into our computer and run our game. The running game is our container in this analogy. In the same way we can run our image with the following command:

docker run --publish 5000:5000 python-docker

In this command we tell Docker to run an image called python-docker. This is what we tagged the image with in the previous part. We also specify --publish 5000:5000. This details how we want to connect ports between our laptop (the host machine) and the Docker container. Since Flask runs on port 5000 by default, the second part of this flag needs to be 5000. We've chosen to make the container accessible on our host machine on port 5000. To see this in action, navigate to localhost:5000 and see our Flask website working! Try to run docker run --publish 4331:5000 python-docker and you'll see that you have to navigate to localhost:4331 instead. Having learnt how to pull images and spin up containers opens a lot of doors in automating, deploying and testing your software. Still, we've only scratched the surface when it comes to all the benefits Docker has to offer. In the next part, we'll get into how Docker Compose can orchestrate multiple containers to work together and automate spinning up containers with a configuration file. Later we'll look into incorporating Docker in a CI/CD process and how to manage clusters of Docker engines in a Docker Swarm. Interested? Follow me to stay posted. I hope this article was clear, but if you have suggestions/clarifications please comment so I can make improvements.
In the meantime, check out my other articles on all kinds of programming-related topics like these:

Docker Compose for absolute beginners
Turn Your Code into a Real Program: Packaging, Running and Distributing Scripts using Docker
Why Python is slow and how to speed it up
Advanced multi-tasking in Python: applying and benchmarking threadpools and processpools
Write your own C extension to speed up Python x100
Getting started with Cython: how to perform >1.7 billion calculations per second in Python
Create a fast auto-documented, maintainable and easy-to-use Python API in 5 lines of code with FastAPI

Happy coding! - Mike

P.s: like what I'm doing? Follow me!
Sum Of Prime | Practice | GeeksforGeeks
Given a number N, find if N can be expressed as a + b such that a and b are prime.
Note: If [a, b] is one solution with a <= b, and [c, d] is another solution with c <= d, and a < c, then [a, b] is considered as our answer.

Example 1:
Input: N = 8
Output: 3 5
Explanation: 3 and 5 are both prime and they add up to 8.

Example 2:
Input: N = 3
Output: -1 -1
Explanation: There are no solutions to the number 3.

Your Task:
You don't need to read input or print anything. Your task is to complete the function getPrimes() which takes an integer n as input and returns (a, b) as an array of size 2.
Note: If no value of (a, b) satisfies the condition, return (-1, -1) as an array of size 2.

Expected Time Complexity: O(N*loglog(N))
Expected Auxiliary Space: O(N)

Constraints: 3 <= N <= 10^6

0  raunakgiri21  1 day ago
C++ solution (0.75/3.4 s)

bool isPrime(int n) {
    if (n <= 1) return 0;
    for (int i = 2; i <= sqrt(n); i++)
        if (n % i == 0) return 0;
    return 1;
}

vector<int> getPrimes(int N) {
    // code here
    vector<int> vec;
    for (int i = 1; i <= N / 2; i++) {
        if (isPrime(i) && isPrime(N - i)) {
            vec.push_back(i);
            vec.push_back(N - i);
            return vec;
        }
    }
    vec.push_back(-1);
    vec.push_back(-1);
    return vec;
}

0  anutiger  5 months ago

vector<int> res(N + 1, 1);
for (int i = 2; i <= N; i++) {
    if (res[i] == 1) {
        int j = 2;
        while (i * j <= N) {
            res[i * j] = -1;
            j++;
        }
    }
}
vector<int> tmp;
for (int i = 2; i <= N; i++) {
    if (res[i] != -1) tmp.push_back(i);
}
int i = 0;
int j = tmp.size() - 1;
while (i < tmp.size() - 1) {
    while (i < j) {
        if (tmp[i] + tmp[j] == N) {
            return {tmp[i], tmp[j]};
        }
        else if (tmp[i] + tmp[j] > N) j--;
        else break;
    }
    i++;
}
return {-1, -1};

0  Onkar Kadam  10 months ago
Sieve of Eratosthenes: https://uploads.disquscdn.c...
0  Chirag Soni  1 year ago

class Solution {
public:
    vector<int> getPrimes(int N) {
        // code here
        vector<int> res1;
        res1.push_back(-1);
        res1.push_back(-1);
        bool prime[N + 1];
        memset(prime, true, sizeof(prime));
        for (int i = 2; i * i <= N; i++) {
            if (prime[i]) {
                for (int j = i * i; j <= N; j += i) {
                    prime[j] = false;
                }
            }
        }
        set<int> s;
        for (int i = 2; i <= N; i++) {
            if (prime[i] == true) s.insert(i);
        }
        for (auto i : s) {
            if (s.find(N - i) != s.end()) {
                vector<int> res;
                res.push_back(i);
                res.push_back(N - i);
                return res;
            }
        }
        return res1;
    }
};

+1  ghost  1 year ago
Time of execution: 0.25 sec, using the Sieve of Eratosthenes

0  ghost  1 year ago
https://uploads.disquscdn.c...

0  Jdragon jds  1 year ago
Even is always easy. For odd we just check (n-2) for prime.
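For comparison, here is a Python sketch of the same sieve-based idea (my own illustration, not one of the submitted solutions; the function name get_primes is just a Pythonic rendering of getPrimes):

```python
def get_primes(n):
    # Sieve of Eratosthenes up to n
    prime = [True] * (n + 1)
    prime[0] = prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if prime[i]:
            for j in range(i * i, n + 1, i):
                prime[j] = False
    # The smallest a with both a and n - a prime satisfies the tie-break rule
    for a in range(2, n // 2 + 1):
        if prime[a] and prime[n - a]:
            return [a, n - a]
    return [-1, -1]

print(get_primes(8))  # [3, 5]
print(get_primes(3))  # [-1, -1]
```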
Algorithm Library | C++ Magicians STL Algorithm - GeeksforGeeks
21 Apr, 2022

For all those who aspire to excel in competitive programming, knowledge of STL's containers alone is of little use until one is aware of all that STL has to offer. STL has an ocean of algorithms; for all <algorithm> library functions, refer here. Some of the most used algorithms on vectors, and the most useful ones in competitive programming, are as follows:

Non-Manipulating Algorithms

sort(first_iterator, last_iterator) – To sort the given vector.
reverse(first_iterator, last_iterator) – To reverse a vector.
*max_element(first_iterator, last_iterator) – To find the maximum element of a vector.
*min_element(first_iterator, last_iterator) – To find the minimum element of a vector.
accumulate(first_iterator, last_iterator, initial value of sum) – Does the summation of vector elements.

CPP

// A C++ program to demonstrate working of sort(),
// reverse()
#include <algorithm>
#include <iostream>
#include <vector>
#include <numeric> // For accumulate operation
using namespace std;

int main()
{
    // Initializing vector with array values
    int arr[] = {10, 20, 5, 23, 42, 15};
    int n = sizeof(arr) / sizeof(arr[0]);
    vector<int> vect(arr, arr + n);

    cout << "Vector is: ";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    // Sorting the Vector in Ascending order
    sort(vect.begin(), vect.end());

    cout << "\nVector after sorting is: ";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    // Reversing the Vector
    reverse(vect.begin(), vect.end());

    cout << "\nVector after reversing is: ";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    cout << "\nMaximum element of vector is: ";
    cout << *max_element(vect.begin(), vect.end());

    cout << "\nMinimum element of vector is: ";
    cout << *min_element(vect.begin(), vect.end());

    // Starting the summation from 0
    cout << "\nThe summation of vector elements is: ";
    cout << accumulate(vect.begin(), vect.end(), 0);

    return 0;
}

Output:
Vector is: 10 20 5 23 42 15
Vector after sorting is: 5 10 15 20 23 42
Vector after reversing is: 42 23 20 15 10 5
Maximum element of vector is: 42
Minimum element of vector is: 5
The summation of vector elements is: 115

6. count(first_iterator, last_iterator, x) – To count the occurrences of x in the vector.
7. find(first_iterator, last_iterator, x) – Returns an iterator to the first occurrence of x in the vector, and points to the past-the-end position ((name_of_vector).end()) if the element is not present in the vector.
CPP

// C++ program to demonstrate working of count()
// and find()
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Initializing vector with array values
    int arr[] = {10, 20, 5, 23, 42, 20, 15};
    int n = sizeof(arr) / sizeof(arr[0]);
    vector<int> vect(arr, arr + n);

    cout << "Occurrences of 20 in vector : ";

    // Counts the occurrences of 20 from 1st to
    // last element
    cout << count(vect.begin(), vect.end(), 20);

    // find() returns iterator to last address if
    // element not present
    find(vect.begin(), vect.end(), 5) != vect.end() ?
        cout << "\nElement found" :
        cout << "\nElement not found";

    return 0;
}

Output:
Occurrences of 20 in vector : 2
Element found

8. binary_search(first_iterator, last_iterator, x) – Tests whether x exists in the sorted vector or not.
9. lower_bound(first_iterator, last_iterator, x) – Returns an iterator pointing to the first element in the range [first, last) which has a value not less than 'x'.
10. upper_bound(first_iterator, last_iterator, x) – Returns an iterator pointing to the first element in the range [first, last) which has a value greater than 'x'.

C++

// C++ program to demonstrate working of lower_bound()
// and upper_bound().
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Initializing vector with array values
    int arr[] = {5, 10, 15, 20, 20, 23, 42, 45};
    int n = sizeof(arr) / sizeof(arr[0]);
    vector<int> vect(arr, arr + n);

    // Sort the array to make sure that lower_bound()
    // and upper_bound() work.
    sort(vect.begin(), vect.end());

    // Returns iterator to the first occurrence of 20
    auto q = lower_bound(vect.begin(), vect.end(), 20);

    // Returns iterator to the position past the last occurrence of 20
    auto p = upper_bound(vect.begin(), vect.end(), 20);

    cout << "The lower bound is at position: ";
    cout << q - vect.begin() << endl;

    cout << "The upper bound is at position: ";
    cout << p - vect.begin() << endl;

    return 0;
}

Output:
The lower bound is at position: 3
The upper bound is at position: 5

Some Manipulating Algorithms

arr.erase(position to be deleted) – This erases the selected element in the vector and shifts and resizes the vector elements accordingly.
arr.erase(unique(arr.begin(), arr.end()), arr.end()) – This erases the duplicate occurrences in a sorted vector in a single line.

CPP

// C++ program to demonstrate working of erase()
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Initializing vector with array values
    int arr[] = {5, 10, 15, 20, 20, 23, 42, 45};
    int n = sizeof(arr) / sizeof(arr[0]);
    vector<int> vect(arr, arr + n);

    cout << "Vector is: ";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    // Delete second element of vector
    vect.erase(vect.begin() + 1);

    cout << "\nVector after erasing the element: ";
    for (int i = 0; i < vect.size(); i++)
        cout << vect[i] << " ";

    // sorting to enable use of unique()
    sort(vect.begin(), vect.end());

    cout << "\nVector before removing duplicate occurrences: ";
    for (int i = 0; i < vect.size(); i++)
        cout << vect[i] << " ";

    // Deletes the duplicate occurrences
    vect.erase(unique(vect.begin(), vect.end()), vect.end());

    cout << "\nVector after deleting duplicates: ";
    for (int i = 0; i < vect.size(); i++)
        cout << vect[i] << " ";

    return 0;
}

Output (corrected; the original page truncated these lines):
Vector is: 5 10 15 20 20 23 42 45
Vector after erasing the element: 5 15 20 20 23 42 45
Vector before removing duplicate occurrences: 5 15 20 20 23 42 45
Vector after deleting duplicates: 5 15 20 23 42 45

3. next_permutation(first_iterator, last_iterator) – This modifies the vector to its next permutation.
4. prev_permutation(first_iterator, last_iterator) – This modifies the vector to its previous permutation.

CPP

// C++ program to demonstrate working
// of next_permutation()
// and prev_permutation()
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Initializing vector with array values
    int arr[] = {5, 10, 15, 20, 20, 23, 42, 45};
    int n = sizeof(arr) / sizeof(arr[0]);
    vector<int> vect(arr, arr + n);

    cout << "Given Vector is:\n";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    // modifies vector to its next permutation order
    next_permutation(vect.begin(), vect.end());
    cout << "\nVector after performing next permutation:\n";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    prev_permutation(vect.begin(), vect.end());
    cout << "\nVector after performing prev permutation:\n";
    for (int i = 0; i < n; i++)
        cout << vect[i] << " ";

    return 0;
}

Output:
Given Vector is:
5 10 15 20 20 23 42 45
Vector after performing next permutation:
5 10 15 20 20 23 45 42
Vector after performing prev permutation:
5 10 15 20 20 23 42 45

5. distance(first_iterator, desired_position) – It returns the distance of the desired position from the first iterator. This function is very useful while finding an index.
CPP

// C++ program to demonstrate working of distance()
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Initializing vector with array values
    int arr[] = {5, 10, 15, 20, 20, 23, 42, 45};
    int n = sizeof(arr) / sizeof(arr[0]);
    vector<int> vect(arr, arr + n);

    // Return distance of first to maximum element
    cout << "Distance between first to max element: ";
    cout << distance(vect.begin(),
                     max_element(vect.begin(), vect.end()));
    return 0;
}

Distance between first to max element: 7

More – STL Articles

This article is contributed by Manjeet Singh.
Cadence Interview Experience | Software Developer C++ - GeeksforGeeks
04 Apr, 2019

Hi, I was recently interviewed for a Software Developer position at Cadence Design Systems (Location: Bangalore) and got selected. I have 2.5 years of experience in C++. Following were the interview questions: one telephonic round followed by 3 F2F interviews.

Round 1 (Telephonic Round):

1. Place even numbers at even indexes and odd numbers at odd indexes, given that the number of odd numbers may or may not be equal to the number of even numbers. Any extra odd/even numbers should be placed at the end of the array.

Example

Input : arr[] = {3, 6, 12, 1, 5, 8}
Output : 6 3 12 1 8 5

Input : arr[] = {10, 9, 7, 18, 12, 19, 4, 20, 6, 14}
Output : 10 9 18 7 20 19 4 12 14 6

For an equal number of odd numbers and even numbers – even-numbers-even-index-odd-numbers-odd-index

2. Find the height of a tree
3. sum-minimum-maximum-elements-subarrays-size-k
4. What sorting algorithms do you know? Implement any sorting algorithm.

Round 2: This round was completely based on C++ concepts.

1. How does the map [STL library] work? What is the time complexity of its implementation? – Map in C++
2. The above question leads to the red-black tree – its working and properties.
3. What is a const member function? – const-member-functions-c
4. What is polymorphism? How can it be achieved in C++?
5. Deep discussion on virtual functions, the vtable and virtual destructors, with code. How memory allocation happens for a parent object and a child object. – virtual-function-c++
6. Why is a virtual destructor required? – virtual-destructor
7. Functors-in-c++

Round 3:

1. The kth largest element in an unsorted array
2. Discussion on the first question, leading to an implementation of the quicksort algorithm.
3. How does heap sort work? Implement heap sort for the given array.

Round 4:

1. merge-k-sorted-arrays-set-2-different-sized-arrays
2. Long discussion on current work and project.

At last, the HR round was done the next week. Received an offer.
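For reference, the Round 1 array-arrangement question can be sketched quickly. This is my own illustrative solution (written in Python for brevity, not necessarily the C++ the interviewer expected):

```python
def arrange_even_odd(arr):
    """Place even numbers at even indexes and odd numbers at odd
    indexes; leftover numbers of either kind go at the end."""
    evens = [x for x in arr if x % 2 == 0]
    odds = [x for x in arr if x % 2 != 0]
    result = []
    i = j = 0
    # Fill alternating positions while both kinds remain.
    while i < len(evens) and j < len(odds):
        result.append(evens[i])  # even index (0, 2, 4, ...)
        i += 1
        result.append(odds[j])   # odd index (1, 3, 5, ...)
        j += 1
    # Append whichever kind is left over at the end.
    result.extend(evens[i:])
    result.extend(odds[j:])
    return result

print(arrange_even_odd([3, 6, 12, 1, 5, 8]))  # [6, 3, 12, 1, 8, 5]
```

When the counts are unequal, several valid answers exist; this sketch simply dumps the surplus at the end, which the problem statement allows.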
CBSE Class 11 C++ | Sample Paper-1 - GeeksforGeeks
29 Oct, 2018

Instructions:
{1} All questions are compulsory.
{2} Programming language: C++

Q.1 [A] Explain the functional components of a computer. (2)

There are a few basic components that aid the working cycle of a computer, and these are called the functional components of a computer. They are:
1) The Input System
2) Memory Organisation
3) The Output System
Refer: Functional components of a Computer

[B] Write the differences between application software and system software. (2)

System software:
1) These are the software that directly allow the user to interact with the hardware components of a computer system.
2) System software can be called the main software of a computer system, as it handles the major portion of running the hardware.
3) System software can be further divided into:
   - The Operating System
   - The Language Processor

Application software:
1) These are the basic software used to accomplish a particular action or task.
2) These are dedicated software, dedicated to performing simple and single tasks.
3) These are divided into two types:
   - General Purpose Application Software
   - Specific Purpose Application Software

Refer: Software Concepts

[C] Define hybrid computer. (1)

Hybrid computers use both analog and digital technology to provide the speed of an analog computer and the accuracy of a digital computer. These computers accept digital or analog signals, but an extensive conversion of data from digital to analog and analog to digital has to be done. Hybrid computers are used as a cost-effective means for complex simulations.

[D] What role does the operating system play in managing memory? (1)

In a computer, both the CPU and the I/O devices interact with the memory. When a program needs to be executed, it is loaded onto the main memory till the execution is complete. The common memory management techniques used by the operating system are:

Partitioning: The total memory is divided into various partitions of the same size or different sizes.
This helps to accommodate a number of programs in the memory.

Virtual Memory: This is a technique used by the operating system by virtue of which the user can load programs that are larger than the main memory of the computer.

Q.2 [A] Write the differences between logical errors and syntax errors. (2)

Logical errors: Errors which produce incorrect output but appear to be error-free are called logical errors. These errors solely depend on the logical thinking of the programmer.

Syntax errors: Errors that occur when you violate the rules of writing C/C++ syntax are known as syntax errors. Such compiler errors indicate something that must be fixed before the code can be compiled.

[B] What do you mean by the robustness of a program? (2)

Robustness is the ability of a computer program to cope with errors during execution and to cope with erroneous input. So, in order to be robust, the program should be able to handle wrongly input data and perform correctly over all types of inputs.

[C] What is guard code? (1)

Guard code in a computer programming language is a check of integrity preconditions, which is used to avoid errors during execution.

[D] What is the process of translating an algorithm into a program called? (1)

The process of coding in a specific programming language is termed translation of the algorithm into a program.

[E] What are the characteristics of a good program? (2)

A program should be developed to ensure the proper functionality of the computer and should also be easy to understand. A computer program should have some important characteristics, which are as follows:

Flexibility: A program should be flexible enough to handle most of the changes without having to rewrite the entire program.

User Friendly: A program that can be easily understood by all types of users.
In addition, there should be proper messages for the user to input data and to display the result, besides making the program easily understandable and modifiable.

Portability: Portability refers to the ability of an application to run on different platforms (operating systems) with or without minimal changes.

Reliability: It is the ability of a program to do its intended function accurately even if there are small changes in the computer system.

Self-Documenting Code: Source code which uses suitable names for the identifiers (variables and methods) is called self-documenting code.

[F] Name two types of compilation errors. (2)

1) Syntax errors: These compiler errors indicate something that must be fixed before the code can be compiled. All these errors are detected by the compiler and thus are known as compile-time errors. The most frequent syntax errors are:
   - Missing parenthesis (})
   - Printing the value of a variable without declaring it
   - Missing semicolon, etc.
2) Logical errors: On compilation and execution of a program, the desired output is not obtained when certain input values are given.
3) Run-time errors: Errors which occur during program execution (run-time), after successful compilation, are called run-time errors. One of the most common run-time errors is division by zero, also known as a division error.

Q.3 [A] Name the header files to which the following belong: (2)

1. getch(): conio.h
2. isdigit(): ctype.h
3. sqrt(): math.h
4. atoi(): stdlib.h

[B] Write the output for the following code: (2)

int val, n = 1000;
cin >> val;
res = n + val > 1500 ? 100 : 200;
cout << res;

i) If the input is 1000: 100
ii) If the input is 200: 200

[C] Write the equivalent C++ expressions. (2)

(1) p = 2(l + b)
Expression: p = 2 * (l + b);

(2) z = 2(p/q)^2
Expression: z = 2 * pow(p / q, 2); or z = 2 * (p / q) * (p / q);

(3) s = (1/2)mv^2
Expression: s = 0.5 * m * v * v; or s = m * pow(v, 2) / 2.0;
(Note: writing 1/2 * m * v * v would evaluate to 0 under integer division, so 0.5 must be used.)

(4) x = (-b + sqrt(b^2 - 4ac)) / 2a
Expression: x = (-b + sqrt(b * b - 4 * a * c)) / (2 * a); or x = (-b + sqrt(pow(b, 2) - 4 * a * c)) / (2 * a);

[D] Write the difference between a keyword and an identifier.
(2)

Keywords:
1) Keywords are pre-defined or reserved words in a programming language.
2) Each keyword is meant to perform a specific function in a program. Since keywords are reserved names for the compiler, they can't be used as variable names.
3) The C language supports 32 keywords, while C++ has 31 additional keywords other than the C keywords.

Identifiers:
1) Identifiers are the general terminology for the naming of variables, functions and arrays.
2) These are user-defined names consisting of an arbitrarily long sequence of letters and digits, with either a letter or the underscore (_) as the first character. Identifier names must differ in spelling and case from any keywords.
3) There are certain rules that should be followed while naming identifiers:
   - They must begin with a letter or underscore (_).
   - They must consist of only letters, digits, or underscores. No other special character is allowed.
   - They should not be keywords.
   - They must not contain white space.
   - They should be up to 31 characters long, as only the first 31 characters are significant.

Q.4 [A] Draw a flowchart that prints the smallest of three given numbers. (2)

[B] Rewrite the following program after removing the syntactical errors. (2)

#include <iostream.h>
Void main()
{
    const MAX = 0;          // Error
    int a, b;
    cin << a >> b;          // Error
    if (a > b)
        MAX = a;
    for (x = 0; x < MAX; x++)   // x undeclared: error
        cout << x;
}

Corrected program:

#include <iostream.h>
void main()
{
    int MAX = 0;            // const removed, since MAX is assigned below
    int a, b;
    cin >> a >> b;
    if (a > b)
        MAX = a;
    for (int x = 0; x < MAX; x++)   // x declared here
        cout << x;
}

[C] Write a program in C++ to print the Fibonacci series: 0, 1, 1, 2, 3, 5, 8... (3)

#include <iostream>
using namespace std;

void fib(int n)
{
    int a = 0, b = 1, c;
    if (n >= 0)
        cout << a << " ";
    if (n >= 1)
        cout << b << " ";
    for (int i = 2; i <= n; i++) {
        c = a + b;
        cout << c << " ";
        a = b;
        b = c;
    }
}

// Driver code
int main()
{
    int n;
    cout << "Enter the value of n";
    cin >> n;
    fib(n);
    return 0;
}

[D] Write a program in C++ to find the factorial of a given number.
(3)

#include <iostream>
using namespace std;

int fact(int n)
{
    if (n == 1 || n == 0)
        return 1;
    return n * fact(n - 1);
}

int main()
{
    int n;
    cout << "Enter the number";
    cin >> n;
    cout << "factorial of n is" << fact(n);
    return 0;
}

Q.5 [A] Write a program in C++ to replace every space in a string with a hyphen. (2)

#include <iostream>
#include <string.h>
using namespace std;

void replace(char* str, int len)
{
    for (int i = 0; i < len; i++) {
        if (str[i] == ' ')
            str[i] = '-';   // replace each space with a hyphen
    }
    cout << str;
}

int main()
{
    char str[] = "geeks for geeks";
    int len = strlen(str);
    replace(str, len);
    return 0;
}

[B] Find the total number of elements and the total size of the following arrays: (2)
(i) int student[20]  (ii) float A[4][5]

i) Total number of elements = 20; total size = 20 * 2 = 40 bytes (assuming a 2-byte int, as in Turbo C++).
ii) Total number of elements = 4 * 5 = 20; total size = 4 * 4 * 5 = 80 bytes (a float occupies 4 bytes).

[C] Rewrite the following program after removing the syntactical errors. (2)

#include <iostream.h>
main()
{
    int sum[2, 4];
    for (i = 0; i < 2; i++)
        for (j = 0; j <= 3; i++)
        {
            cout << sum;
        }
}

Corrected program:

#include <iostream.h>
int main()
{
    int sum[2][4];                     // 2D array uses [2][4], not [2, 4]
    for (int i = 0; i < 2; i++)        // i must be declared
        for (int j = 0; j <= 3; j++)   // j must be declared and incremented
            cout << sum[i][j];
    return 0;
}

[D] Find the output of the following program: (4)

#include <iostream.h>
main()
{
    int a[5] = { 5, 10, 15, 20, 25 };
    int i, j, k = 1, m;
    i = ++a[1];
    j = a[2]++;
    m = a[i++];
    cout << i << j << k << m;
}

[E] Write a program in C++ to find the row and column sums of a matrix. (3)

#include <iostream>
using namespace std;
#define MAX 10

void sums(int a[][MAX], int n)
{
    // Sum of each row
    for (int i = 0; i < n; i++) {
        int rowSum = 0;
        for (int j = 0; j < n; j++)
            rowSum += a[i][j];
        cout << "Sum of row " << i << " = " << rowSum << "\n";
    }
    // Sum of each column
    for (int j = 0; j < n; j++) {
        int colSum = 0;
        for (int i = 0; i < n; i++)
            colSum += a[i][j];
        cout << "Sum of column " << j << " = " << colSum << "\n";
    }
}

int main()
{
    int a[MAX][MAX];
    int n;
    cout << "enter the dimension of matrix";
    cin >> n;
    cout << "enter the elements";
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> a[i][j];
    sums(a, n);
    return 0;
}

Q.6 [A] What are the three steps of using a function?
(3)

The three steps of using a function correctly are:
i) Function declaration: The function prototype is declared, so that the compiler knows about the parameters and return type of the function.
ii) Function definition: The entire body and functionality of the function is designed and written inside the function block.
iii) Function calling: Finally, after the declaration and definition of the function, it is called inside the driver/main function to perform the desired functionality.

[B] Find the output of the following program: (2)

#include <iostream.h>
void Execute(int& x, int y = 200)
{
    int temp = x + y;
    x += temp;
    if (y != 200)
        cout << temp << x << y;
}

main()
{
    int a = 50, b = 20;
    Execute(a, b);
    cout << a << b;
}

Output: the program prints 7012020 from Execute() (temp = 70, x = 120, y = 20, printed because y != 200), followed by 12020 from main() (a = 120, b = 20). Note that a is passed by reference, so it becomes 120.

[C] Write a function in C++ having two parameters x and n of integer type, with result type float, to find the sum of the following series:
1 + x/2! + x^2/3! + ... + x^n/(n+1)! (3)

#include <iostream>
#include <iomanip>
#include <math.h>
using namespace std;

int fact(int z)
{
    if (z == 1)
        return 1;
    return z * fact(z - 1);
}

double sum(int x, int n)
{
    double total = 1.0;
    for (int i = 1; i <= n; i++)
        total = total + (pow(x, i) / fact(i + 1));
    return total;
}

// Driver code
int main()
{
    int x, n;
    cout << "enter x and n";
    cin >> x >> n;
    cout << fixed << setprecision(2) << sum(x, n);
    return 0;
}

[D] Write a program to calculate the sum of the first n natural numbers by using a function. (3)

#include <iostream>
using namespace std;

void sum(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++) {
        total = total + i;
    }
    cout << "sum of natural numbers is " << total;
}

int main()
{
    int n;
    cout << "enter the range of sum";
    cin >> n;
    sum(n);
    return 0;
}

Q.7 [A] Convert the following into their binary equivalent codes. (4)

(i) (84)10 = (1010100)2
(ii) (2C9)16 = (001011001001)2 = (713)10
(iii) (101010)2 = (42)10
(iv) (3674)8 = (11110111100)2

Refer: Number System and base conversions

[B] Express -4 in 1's complement form. (1)

Taking 8 bits: +4 = 00000100; inverting every bit gives -4 = 11111011 in 1's complement form.

[C] What is the function of a bus?
(1)

A bus is a group of conducting wires which carries information; all the peripherals are connected to the microprocessor through the bus.

[D] Write two types of cache memory. (2)

The two types of cache memory are L1 and L2 cache.

[E] Write the difference between SRAM and DRAM. (2)

SRAM has a lower access time, so it is faster than DRAM. SRAM is costlier.
DRAM has a higher access time, so it is slower than SRAM. DRAM is cheaper.

Refer: SRAM and DRAM
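The number-system conversions in Q.7[A] can be cross-checked programmatically. A quick sketch (Python is used here purely for brevity, although the paper's language is C++):

```python
# Verify the Q.7[A] conversions using built-in base handling.
print(bin(84))               # (84)10  -> 0b1010100
print(int("2C9", 16))        # (2C9)16 -> 713
print(int("101010", 2))      # (101010)2 -> 42
print(bin(int("3674", 8)))   # (3674)8 -> 0b11110111100
```

Each line converts between bases with int(text, base) and bin(), matching the hand-worked answers above.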
How to check if a string is a valid keyword in Python?
Like other languages, Python also has some reserved words. These words hold special meaning: sometimes a command, a parameter, etc. We cannot use keywords as variable names. In this section we will see how to check whether a string is a valid keyword or not.

To check this, we have to import the keyword module in Python.

import keyword

In the keyword module, there is a function iskeyword(). It can be used to check whether a string is a valid keyword or not.

In the following example, we provide a list of words and check whether each word is a keyword or not, separating the keywords from the non-keywords.

import keyword

str_list = ['for', 'TP', 'python', 'del', 'Mango', 'assert', 'yield', 'if', 'Lion',
            'as', 'Snake', 'box', 'return', 'try', 'loop', 'eye', 'global',
            'while', 'update', 'is']
keyword_list = []
non_keyword_list = []

for item in str_list:
    if keyword.iskeyword(item):
        keyword_list.append(item)
    else:
        non_keyword_list.append(item)

print("Keywords: " + str(keyword_list))
print("\nNon Keywords: " + str(non_keyword_list))

Keywords: ['for', 'del', 'assert', 'yield', 'if', 'as', 'return', 'try', 'global', 'while', 'is']

Non Keywords: ['TP', 'python', 'Mango', 'Lion', 'Snake', 'box', 'loop', 'eye', 'update']

The keyword module has another option to get all of the keywords as a list.

import keyword

print("All Keywords:")
print(keyword.kwlist)

All Keywords:
['False', 'None', 'True', 'and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'nonlocal', 'not', 'or', 'pass', 'raise', 'return', 'try', 'while', 'with', 'yield']
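A related check worth knowing: to decide whether a string is usable as a variable name, combine str.isidentifier() with keyword.iskeyword(), since an identifier-shaped string like 'for' is still invalid because it is reserved. A small sketch (the function name here is my own):

```python
import keyword

def is_valid_variable_name(name):
    # Must be identifier-shaped AND not a reserved keyword.
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_variable_name("total_1"))  # True
print(is_valid_variable_name("for"))      # False (reserved keyword)
print(is_valid_variable_name("2cool"))    # False (not an identifier)
```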
How to handle Multiclass Imbalanced Data?- Say No To SMOTE | by Tamil Selvan S | Towards Data Science
One of the common problems in Machine Learning is handling imbalanced data, in which there is a high disproportion between the target classes.

Hello world, this is my second blog for the Data Science community. In this blog, we are going to see how to deal with the multiclass imbalanced data problem.

When the target classes (two or more) of a classification problem are not equally distributed, we call it imbalanced data. If we fail to handle this problem, the model will become a disaster, because modeling using class-imbalanced data is biased in favor of the majority class.

There are different methods of handling imbalanced data; the most common are oversampling and creating synthetic samples.

SMOTE is an oversampling technique that generates synthetic samples from the dataset, which increases the predictive power for minority classes. Even though there is no loss of information, it has a few limitations.

Limitations:

1. SMOTE is not very good for high dimensionality data.
2. Overlapping of classes may happen and can introduce more noise to the data.

So, to avoid this problem, we can assign weights to the classes manually with the 'class_weight' parameter. Class weights modify the loss function directly by giving a penalty to the classes with different weights. It means purposely increasing the power of the minority class and reducing the power of the majority class. Therefore, it gives better results than SMOTE.

I aim to keep this blog very simple. Here are a few of the most preferred techniques for getting the weights, which worked for my imbalanced learning problems:

1. Sklearn utils.
2. Counts to Length.
3. Smoothen Weights.
4. Sample Weight Strategy.

We can get class weights using sklearn to compute the class weight.
By adding those weights to the minority classes while training the model, we can help the performance while classifying the classes.

from sklearn.utils import class_weight

class_weight = class_weight.compute_class_weight('balanced',
                                                 np.unique(target_Y),
                                                 target_Y)
model = LogisticRegression(class_weight = class_weight)
model.fit(X, target_Y)
# class_weight accepts 'balanced' or a dict of per-class weights

We have a class_weight parameter for almost all the classification algorithms, from Logistic Regression to Catboost. But XGBoost has scale_pos_weight for binary classification and sample_weights (refer 4) for both binary and multiclass problems.

Very simple and straightforward! Divide the count of each class by the number of rows. Then:

weights = df[target_Y].value_counts() / len(df)
model = LGBMClassifier(class_weight = weights)
model.fit(X, target_Y)

This is one of the preferable methods of choosing weights. labels_dict is the dictionary object containing the counts of each class. The log function smooths the weights for the imbalanced classes.

def class_weight(labels_dict, mu=0.15):
    total = np.sum(list(labels_dict.values()))
    keys = labels_dict.keys()
    weight = dict()
    for i in keys:
        score = np.log(mu * total / float(labels_dict[i]))
        weight[i] = score if score > 1 else 1
    return weight

# random labels_dict
labels_dict = df[target_Y].value_counts().to_dict()
weights = class_weight(labels_dict)
model = RandomForestClassifier(class_weight = weights)
model.fit(X, target_Y)
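To see what the log smoothing does in practice, here is a self-contained toy run of the same idea (the class counts below are my own illustrative numbers, and plain Python is used so it runs without numpy):

```python
import math

def smoothed_class_weight(labels_dict, mu=0.15):
    # Same idea as the function above: log-dampened inverse frequency,
    # floored at 1 so frequent classes are never weighted below 1.
    total = sum(labels_dict.values())
    weight = {}
    for cls, count in labels_dict.items():
        score = math.log(mu * total / float(count))
        weight[cls] = score if score > 1 else 1.0
    return weight

# Toy imbalanced counts: class 2 is the rare one.
counts = {0: 900, 1: 90, 2: 10}
print(smoothed_class_weight(counts))
# class 2 (rarest) gets the largest weight; classes 0 and 1 are floored at 1
```

With mu = 0.15 and 1000 samples, class 2 gets log(0.15 * 1000 / 10) = log(15) ≈ 2.71, while the two frequent classes stay at 1, which is the intended penalty shape.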
This function is different from the class_weight parameter; it is used to get sample weights for the XGBoost algorithm. It returns a different weight for each training sample. sample_weight is an array of the same length as the data, containing the weight to apply to the model's loss for each sample.

def BalancedSampleWeights(y_train, class_weight_coef):
    classes = np.unique(y_train, axis=0)
    classes.sort()
    class_samples = np.bincount(y_train)
    total_samples = class_samples.sum()
    n_classes = len(class_samples)
    weights = total_samples / (n_classes * class_samples * 1.0)
    class_weight_dict = {key: value for (key, value) in zip(classes, weights)}
    class_weight_dict[classes[1]] = class_weight_dict[classes[1]] * class_weight_coef
    sample_weights = [class_weight_dict[i] for i in y_train]
    return sample_weights

# Usage (note: sample_weight is passed to fit(), not the constructor)
weight = BalancedSampleWeights(target_Y, class_weight_coef)
model = XGBClassifier()
model.fit(X, target_Y, sample_weight = weight)

class_weights vs sample_weight:

sample_weights is used to give a weight for each training sample. That means you should pass a one-dimensional array with exactly the same number of elements as your training samples.

class_weights is used to give a weight for each target class. This means you should pass a weight for each class that you are trying to classify.

The above are a few methods of finding class weights and sample weights for your classifier. I mention almost all the techniques which worked well for my project.

I'm requesting the readers to give these techniques a try, as they could help you; if not, take it as learning 😄 it may help you another time 😜

Reach me at LinkedIn 😍
Deep Reinforcement Learning With Python | Part 2 | Creating & Training The RL Agent Using Deep Q Network (DQN) | by Mohammed AL-Ma'amari | Towards Data Science
In the first part, we went through making the game environment and explained it line by line. In this part, we are going to learn how to create and train a Deep Q Network (DQN) and enable agents to use it in order to become experts at our game.

In this part, we are going to be discussing:

1- Why Deep Q Network (DQN)?
2- What is DQN?
3- How DQN Works?
4- Explaining our DQN Architecture.
5- Explaining the Agent Class.
6- Training the Agent.

Someone might ask, “Why didn't you use Q-Learning instead of DQN?” The answer to this question depends on many things. In our case, we can answer it in two ways:

If we want the input to the RL agent to be as close as possible to the input a human gets, we will choose the input to be the array representation of the field. In this case, the environment would be too complex for Q-Learning, and since the Q-Table would be tremendously big, it would be impossible to store it. To prove this, consider the following calculations:

Number of states the input array can have = (number of different values every item in the array can take) ^ (width * height)
Number of states the input array can have = 4 ^ (10 * 20) = 4 ^ 200 ≈ 2.58225e120

Q-Table size = ACTION_SPACE size * number of states the input array can have
Q-Table size = 5 * 2.58225e120 = 1.291125e121

To store an array with this number of items (each item is 8 bits, i.e. one byte), we would need about 1.29e109 terabytes. That is why we simply use DQN instead of Q-Learning.

On the other hand, if you want to use Q-Learning, it would be more efficient to use another kind of input. For example, you can use the X coordinate of the player and the hole, the player's width and the hole's width. This way the input is much simpler than using the array representation.
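These magnitudes are easy to verify programmatically. A quick check, assuming the values stated above (a 10 x 20 field, 4 possible values per cell, 5 actions):

```python
import math

CELL_VALUES = 4          # possible values per cell (from the text)
WIDTH, HEIGHT = 10, 20   # field dimensions (from the text)
ACTIONS = 5              # size of the action space (from the text)

# Python integers are arbitrary precision, so 4**200 is exact.
states = CELL_VALUES ** (WIDTH * HEIGHT)
q_table_entries = ACTIONS * states

print(f"states  ~ 1e{math.log10(states):.0f}")           # ~1e120
print(f"entries ~ 1e{math.log10(q_table_entries):.0f}")  # ~1e121
```

The exact state count has 121 decimal digits, which confirms that a tabular Q-Table for this input encoding is hopeless.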
We do this by using Experience Replay and Replay Memory; those concepts will be explained in the next section. To fully understand how DQN works, you need to know some concepts related to DQN. Similar to the way humans learn by using their memory of previous experience, DQNs use this technique too.

Experience Replay: some data collected after every step the agent performs; an experience replay contains [current_state, current_action, step_reward, next_state].

Replay Memory: a stack of n experience replays. Replay memory is mainly used to train the DQN by taking a random sample of replays and using those replays as the input to the DQN.

Why use a random sample of replays instead of sequential replays? When using sequential replays, the DQN tends to overfit instead of generalizing. A key reason for using replay memory is to break the correlation between consecutive samples.

To get consistent results we will train two models. The first model, "model", is fit after every step made by the agent; the second model, "target_model", loads the weights of "model" every n steps (n = UPDATE_TARGET_EVERY). We do this because at the beginning everything is random, from the initial weights of "model" to the actions performed by the agent. This randomness makes it harder for the model to perform good actions, but when we have another model that uses the knowledge gained by the first model every n steps, we have some degree of consistency.

After we have explained some key concepts, now we can summarize the process of learning; I will use the words of DeepLizard from this wonderful blog. For our DQN, many architectures were tried; many of them did not work, but eventually one architecture proved to work well.
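Returning to the Replay Memory concept described above: a fixed-size deque plus random.sample gives exactly the described behaviour. Old replays fall off the back, and training batches are drawn at random to break correlation between consecutive samples. The capacity and batch size below are illustrative, not the article's values:

```python
import random
from collections import deque

REPLAY_MEMORY_SIZE = 50_000   # illustrative capacity
MINIBATCH_SIZE = 64           # illustrative batch size

class ReplayMemory:
    """Fixed-size stack of experience replays [state, action, reward, next_state]."""

    def __init__(self, capacity=REPLAY_MEMORY_SIZE):
        # deque(maxlen=...) drops the oldest replay automatically when full
        self.buffer = deque(maxlen=capacity)

    def push(self, current_state, action, reward, next_state):
        self.buffer.append((current_state, action, reward, next_state))

    def sample(self, batch_size=MINIBATCH_SIZE):
        # Random (not sequential) sampling breaks the correlation between
        # consecutive samples, which helps the DQN generalize.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

For example, after pushing 150 replays into a memory of capacity 100, only the most recent 100 remain, and `sample(8)` returns 8 of them chosen uniformly at random.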
One of the first failures was an architecture with two output layers: the first output layer is responsible for predicting the best move (left, right, or no move), while the other is responsible for predicting the best width-changing action (increasing the width, decreasing the width, or not changing the width). Other failures included networks that were too deep; in addition to their slow training process, their performance was too poor.

After some failures, a grid search was performed to find architectures that can outperform humans playing the game. The following tables show the results of some grid searches (note: the tables are ordered to show the best result last).

From the result of the first grid search we can clearly see that complicated and deep networks failed to learn how to play the game; on the other hand, the simplest network worked the best. Using the result from the first grid search, another grid search was performed and got some good results. From this result we see that "Best Only" does not enhance the model's performance; on the other hand, using both ECC (Epsilon Conditional Constentation) and EF (Epsilon Fluctuation) together can improve the model's performance. We will discuss ECC and EF in another blog.

Some other grid search results: testing "Best Only", and testing even simpler networks.

After all these grid searches, we finally settled on an architecture with one convolutional layer with 32 filters, a batch size of 128, and two dense (fully connected) layers with 32 nodes each, and we will use both ECC and EF together.
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 20, 10, 1)         0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 18, 8, 32)         320
_________________________________________________________________
dropout_1 (Dropout)          (None, 18, 8, 32)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0
_________________________________________________________________
dense_1 (Dense)              (None, 32)                147488
_________________________________________________________________
dense_2 (Dense)              (None, 32)                1056
_________________________________________________________________
output (Dense)               (None, 5)                 165
=================================================================
Total params: 149,029
Trainable params: 149,029
Non-trainable params: 0
_________________________________________________________________

Input layer: the input shape is the same as the shape of the array that represents the playing field (20 by 10).
Convolutional layers: one Conv2D layer with 32 filters of size 2*2, followed by a dropout of 20%.
Flatten: converts the output of the convolutional layer from a 2D into a 1D array.
Dense (fully connected) layers: two dense layers, each with 32 nodes.
Output layer: contains 5 output nodes; each node represents an action [no_action, move_left, move_right, decrease_width, increase_width].

The Agent class contains everything related to the agent, such as the DQN, the training function, the replay memory, and other things. Following is a line-by-line explanation of this class. These two functions are used to create a model given two lists:

conv_list: each item of this list defines the number of filters for a convolutional layer.
dense_list: each item of this list defines the number of nodes for a dense layer.
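As a sanity check, the parameter counts in the summary can be reproduced with the standard Keras formulas. Note that the printed output shape (18, 8, 32) and the 320 parameters actually correspond to a 3*3 kernel applied to the single-channel 20*10 input:

```python
def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    # one weight per kernel cell per input channel, plus one bias per filter
    return (kernel_h * kernel_w * in_channels + 1) * filters

def dense_params(in_units, out_units):
    # weight matrix plus one bias per output unit
    return (in_units + 1) * out_units

conv = conv2d_params(3, 3, 1, 32)   # (20,10,1) -> (18,8,32): 320
flat = 18 * 8 * 32                  # 4608 units after Flatten
d1 = dense_params(flat, 32)         # 147488
d2 = dense_params(32, 32)           # 1056
out = dense_params(32, 5)           # 165

total = conv + d1 + d2 + out        # 149029
print(conv, d1, d2, out, total)
```

Every figure matches the summary line for line, including the total of 149,029 trainable parameters.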
In order to keep track of the best model and save it to be used after training, the following function is used. Next are some constants. Then come the architectures that will be trained. A grid search will be performed using the previous three architectures, and the result of the grid search is stored in a dataframe.

Check out my Github repository of this code: github.com

To recap, we discussed:

The reasons behind choosing DQN instead of Q-Learning.
DQNs, a brief explanation.
How DQNs work.
What architectures we used and why.
The Agent class, explained code.
The process of training the models and the grid search for the best one.

In the next part we will:

Analyse the training results using Tensorboard.
Try the best model.

Resources:

Deep Q Learning w/ DQN — Reinforcement Learning p.5
Training & Testing Deep reinforcement learning (DQN) Agent — Reinforcement Learning p.6
Deep Q-Learning — Combining Neural Networks and Reinforcement Learning
Replay Memory Explained — Experience for Deep Q-Network Training
MySQL select to convert numbers to millions and billions format?
You can use FORMAT() from MySQL to convert numbers to millions and billions format. Let us first create a table −

mysql> create table DemoTable
   (
   Value BIGINT
   );
Query OK, 0 rows affected (0.74 sec)

Insert records in the table using insert command −

mysql> insert into DemoTable values(78000000000);
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable values(10000000000);
Query OK, 1 row affected (0.18 sec)
mysql> insert into DemoTable values(90000000000);
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable values(450600000000);
Query OK, 1 row affected (0.41 sec)

Display all records from the table using select statement −

mysql> select * from DemoTable;

This will produce the following output −

+--------------+
| Value        |
+--------------+
|  78000000000 |
|  10000000000 |
|  90000000000 |
| 450600000000 |
+--------------+
4 rows in set (0.00 sec)

Following is the query to convert numbers to millions and billions format −

mysql> select format(Value,0) as `FormatValue` from DemoTable;

This will produce the following output −

+-----------------+
| FormatValue     |
+-----------------+
|  78,000,000,000 |
|  10,000,000,000 |
|  90,000,000,000 |
| 450,600,000,000 |
+-----------------+
4 rows in set (0.00 sec)
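For comparison, the same thousands-separator grouping that FORMAT(n, 0) produces can be reproduced in Python with the "," format specifier:

```python
values = [78000000000, 10000000000, 90000000000, 450600000000]

# The "," option in a format spec inserts thousands separators,
# matching MySQL's FORMAT(n, 0) for non-negative integers.
formatted = [f"{v:,}" for v in values]
for v in formatted:
    print(v)   # e.g. 78,000,000,000
```

This is handy when the formatting should happen in application code rather than in the query.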
Independent Vertex Set
Independent sets are represented as sets in which:

there should not be any edges adjacent to each other, i.e. there should not be any common vertex between any two edges;
there should not be any vertices adjacent to each other, i.e. there should not be any common edge between any two vertices.

Let 'G' = (V, E) be a graph. A subset 'S' of 'V' is called an independent set of 'G' if no two vertices in 'S' are adjacent. Consider the following subsets from the above graph −

S1 = {e}
S2 = {e, f}
S3 = {a, g, c}
S4 = {e, d}

Clearly, S1 is not an independent vertex set, because an independent vertex set is taken to have at least two vertices of the graph, which is not the case here. The subsets S2, S3, and S4 are independent vertex sets because there is no vertex in any of them that is adjacent to another vertex of the same subset.

Let 'G' be a graph. An independent vertex set 'S' of 'G' is said to be maximal if no other vertex of 'G' can be added to 'S'. Consider the same subsets from the above graph:

S1 = {e}
S2 = {e, f}
S3 = {a, g, c}
S4 = {e, d}

S2 and S3 are maximal independent vertex sets of 'G'. In S1 and S4 we can add other vertices, but in S2 and S3 we cannot add any other vertex.

A maximal independent vertex set of 'G' with the maximum number of vertices is called a maximum independent vertex set. Considering the same subsets again, only S3 is the maximum independent vertex set, as it covers the highest number of vertices. The number of vertices in a maximum independent vertex set of 'G' is called the independent vertex number of G (β2).
For the complete graph Kn:

Vertex covering number = α2 = n-1
Vertex independent number = β2 = 1

so α2 + β2 = n.

In a complete graph, each vertex is adjacent to its remaining (n − 1) vertices. Therefore, a maximum independent set of Kn contains only one vertex. Therefore, β2 = 1 and α2 = |V| − β2 = n-1.

Note − For any graph 'G' = (V, E):

α2 + β2 = |V|
If 'S' is an independent vertex set of 'G', then (V − S) is a vertex cover of G.
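The Kn claim (β2 = 1, α2 = n-1, α2 + β2 = n) can be checked by brute force on a small complete graph:

```python
from itertools import combinations

def edges_of_complete_graph(n):
    # All unordered vertex pairs (u, v) with u < v.
    return set(combinations(range(n), 2))

def is_independent(vertices, edges):
    # Independent means no two chosen vertices share an edge.
    return all((u, v) not in edges
               for u, v in combinations(sorted(vertices), 2))

def max_independent_set_size(n, edges):
    # Brute force: try set sizes from largest to smallest.
    for size in range(n, 0, -1):
        if any(is_independent(s, edges)
               for s in combinations(range(n), size)):
            return size
    return 0

n = 5
edges = edges_of_complete_graph(n)
beta2 = max_independent_set_size(n, edges)   # 1 for K5
alpha2 = n - beta2                            # vertex covering number: 4
print(beta2, alpha2, beta2 + alpha2)          # 1 4 5
```

Exponential brute force is fine here because the point is only to confirm the identity on a tiny graph.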
Python - Tagging Words
Tagging is an essential feature of text processing where we tag the words into grammatical categories. We take the help of tokenization and the pos_tag function to create the tags for each word.

import nltk
text = nltk.word_tokenize("A Python is a serpent which eats eggs from the nest")
tagged_text = nltk.pos_tag(text)
print(tagged_text)

When we run the above program, we get the following output −

[('A', 'DT'), ('Python', 'NNP'), ('is', 'VBZ'), ('a', 'DT'), ('serpent', 'NN'), ('which', 'WDT'), ('eats', 'VBZ'), ('eggs', 'NNS'), ('from', 'IN'), ('the', 'DT'), ('nest', 'JJS')]

We can describe the meaning of each tag by using the following program, which shows the built-in values.

import nltk
nltk.help.upenn_tagset('NN')
nltk.help.upenn_tagset('IN')
nltk.help.upenn_tagset('DT')

When we run the above program, we get the following output −

NN: noun, common, singular or mass
    common-carrier cabbage knuckle-duster Casino afghan shed thermostat investment slide humour falloff slick wind hyena override subhumanity machinist ...
IN: preposition or conjunction, subordinating
    astride among uppon whether out inside pro despite on by throughout below within for towards near behind atop around if like until below next into if beside ...
DT: determiner
    all an another any both del each either every half la many much nary neither no some such that the them these this those

We can also tag corpus data and see the tagged result for each word in that corpus.
import nltk
from nltk.tokenize import sent_tokenize
from nltk.corpus import gutenberg

sample = gutenberg.raw("blake-poems.txt")
tokenized = sent_tokenize(sample)
for i in tokenized[:2]:
   words = nltk.word_tokenize(i)
   tagged = nltk.pos_tag(words)
   print(tagged)

When we run the above program, we get the following output −

[('[', 'JJ'), ('Poems', 'NNP'), ('by', 'IN'), ('William', 'NNP'), ('Blake', 'NNP'), ('1789', 'CD'), (']', 'NNP'), ('SONGS', 'NNP'), ('OF', 'NNP'), ('INNOCENCE', 'NNP'), ('AND', 'NNP'), ('OF', 'NNP'), ('EXPERIENCE', 'NNP'), ('and', 'CC'), ('THE', 'NNP'), ('BOOK', 'NNP'), ('of', 'IN'), ('THEL', 'NNP'), ('SONGS', 'NNP'), ('OF', 'NNP'), ('INNOCENCE', 'NNP'), ('INTRODUCTION', 'NNP'), ('Piping', 'VBG'), ('down', 'RP'), ('the', 'DT'), ('valleys', 'NN'), ('wild', 'JJ'), (',', ','), ('Piping', 'NNP'), ('songs', 'NNS'), ('of', 'IN'), ('pleasant', 'JJ'), ('glee', 'NN'), (',', ','), ('On', 'IN'), ('a', 'DT'), ('cloud', 'NN'), ('I', 'PRP'), ('saw', 'VBD'), ('a', 'DT'), ('child', 'NN'), (',', ','), ('And', 'CC'), ('he', 'PRP'), ('laughing', 'VBG'), ('said', 'VBD'), ('to', 'TO'), ('me', 'PRP'), (':', ':'), ('``', '``'), ('Pipe', 'VB'), ('a', 'DT'), ('song', 'NN'), ('about', 'IN'), ('a', 'DT'), ('Lamb', 'NN'), ('!', '.'), (u"''", "''")]
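Under the hood, pos_tag applies a trained statistical tagger. The simplest conceptual form of tagging, though, is a plain dictionary lookup. The sketch below uses a tiny hypothetical lexicon (not part of NLTK) and falls back to 'NN' for unknown words, a standard baseline:

```python
# Toy lexicon (hypothetical) mapping words to Penn Treebank tags.
LEXICON = {
    "a": "DT", "the": "DT", "is": "VBZ", "which": "WDT",
    "eats": "VBZ", "from": "IN", "eggs": "NNS",
}

def lookup_tag(tokens, lexicon=LEXICON, default="NN"):
    # Unknown words get the default tag 'NN' (common noun).
    return [(tok, lexicon.get(tok.lower(), default)) for tok in tokens]

tokens = "A Python is a serpent which eats eggs from the nest".split()
print(lookup_tag(tokens))
```

Comparing its output with pos_tag's shows what the trained model adds: it distinguishes, for example, proper nouns (NNP) and superlatives (JJS) that a bare lexicon lookup cannot.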
GATE | GATE-CS-2005 | Question 90 - GeeksforGeeks
28 Jun, 2021

Let E1 and E2 be two entities in an E/R diagram with simple single-valued attributes. R1 and R2 are two relationships between E1 and E2, where R1 is one-to-many and R2 is many-to-many. R1 and R2 do not have any attributes of their own. What is the minimum number of tables required to represent this situation in the relational model?

(A) 2
(B) 3
(C) 4
(D) 5

Answer: (B)

Explanation: The answer is B, i.e. a minimum of 3 tables. The strong entities E1 and E2 are represented as separate tables. In addition to that, the many-to-many relationship R2 must be converted into a separate table that has the primary keys of E1 and E2 as foreign keys. The one-to-many relationship R1 must be transferred to the 'many'-side table (i.e. E2) by having the primary key of the 'one' side (E1) as a foreign key (this way we need not make a separate table for R1). Let the relation schemas be E1(a1, a2) and E2(b1, b2).

Relation E1 (a1 is the key)
a1 a2
-------
1  3
2  4
3  4

Relation E2 (b1 is the key, a1 is the foreign key; hence the one-to-many relationship set R1 is satisfied here)
b1 b2 a1
-----------
7  4  2
8  7  2
9  7  3

Relation R2 ({a1, b1} combined is the key here, representing the many-to-many relationship R2)
a1 b1
--------
1  7
1  8
2  9
3  9

Hence we will have a minimum of 3 tables.
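The three-table mapping from the explanation can be sketched with SQLite (used here as an illustrative stand-in for any relational DBMS; the column names follow the schemas E1(a1, a2) and E2(b1, b2) above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Strong entity E1 gets its own table.
cur.execute("CREATE TABLE E1 (a1 INTEGER PRIMARY KEY, a2 INTEGER)")

# One-to-many R1 folds into the 'many' side E2 as a foreign key: no extra table.
cur.execute("""CREATE TABLE E2 (
    b1 INTEGER PRIMARY KEY,
    b2 INTEGER,
    a1 INTEGER REFERENCES E1(a1))""")

# Many-to-many R2 needs its own table, keyed by both foreign keys.
cur.execute("""CREATE TABLE R2 (
    a1 INTEGER REFERENCES E1(a1),
    b1 INTEGER REFERENCES E2(b1),
    PRIMARY KEY (a1, b1))""")

tables = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)   # ['E1', 'E2', 'R2'] -- three tables suffice
```

Trying to drop any of the three (e.g. representing R2 inside E1 or E2) would either duplicate entity rows or lose relationship tuples, which is why 3 is the minimum.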
Program to sort array by increasing frequency of elements in Python
Suppose we have an array where elements may appear multiple times. We have to sort the array so that elements are ordered by increasing frequency: the element that appears the fewest times comes first, and so on.

So, if the input is like nums = [1,5,3,1,3,1,2,5], then the output will be [2, 5, 5, 3, 3, 1, 1, 1].

To solve this, we will follow these steps −

mp := a new map
for each distinct element i from nums, do
   x := number of occurrences of i in nums
   if x is present in mp, then insert i at the end of mp[x]
   otherwise mp[x] := a list with only one element i
ans := a new list
for each key i of mp in sorted order, do
   for each j in the list mp[i] sorted in reverse order, do
      insert j into ans, i number of times
return ans

Let us see the following implementation to get a better understanding −

def solve(nums):
   mp = {}
   for i in set(nums):
      x = nums.count(i)
      try:
         mp[x].append(i)
      except KeyError:
         mp[x] = [i]
   ans = []
   for i in sorted(mp):
      for j in sorted(mp[i], reverse=True):
         ans.extend([j] * i)
   return ans

nums = [1,5,3,1,3,1,2,5]
print(solve(nums))

Input
[1,5,3,1,3,1,2,5]
Output
[2, 5, 5, 3, 3, 1, 1, 1]
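A more idiomatic variant of the same algorithm uses collections.Counter and a single sort key (frequency ascending, then value descending, matching the reverse-sorted tie-breaking above):

```python
from collections import Counter

def solve_counter(nums):
    counts = Counter(nums)
    # Primary key: frequency (ascending); tie-break: value (descending).
    return sorted(nums, key=lambda x: (counts[x], -x))

print(solve_counter([1, 5, 3, 1, 3, 1, 2, 5]))  # [2, 5, 5, 3, 3, 1, 1, 1]
```

This avoids the O(n^2) cost of calling nums.count() inside a loop, since Counter tallies all frequencies in a single pass.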
Single-Row and Multi-Row Partitions in Cassandra - GeeksforGeeks
15 Feb, 2021

In Cassandra, CQL (Cassandra Query Language) tables have two kinds of partitions:

Single-Row Partitions
Multi-Row Partitions

Single-Row Partitions: In Cassandra, the partition key identifies a unique data partition, and the clustering columns, if any, handle the arrangement of rows within a partition. In single-row partitions, there is only a partition key on a single column and no clustering columns, so each partition holds one row.

Example − Let us take the Employee table having fields like Emp_id, Emp_name, Emp_email, where Emp_id is the primary key.

CREATE TABLE Employee(
   Emp_id UUID,
   Emp_name TEXT,
   Emp_email TEXT,
   PRIMARY KEY(Emp_id)
);

You can check the partitioning logical reference model for the above example as follows −

K - Partition key
C - Clustering column
S - Static column

In Cassandra, the primary key is the combination of the partition key and the clustering columns, if any.

Primary Key = Partition Key + [Clustering Columns]

Multi-Row Partitions: In multi-row partitions, the partition key may span more than one column, and clustering columns arrange multiple rows within each partition.

Example − Let us take the Events table having fields like Event_venue, Event_year, Event_artifact, Events_title, Events_country, where (Event_venue, Event_year) is the partition key and Event_artifact is the clustering column.

CREATE TABLE Events(
   Event_venue TEXT,
   Event_year INT,
   Event_artifact TEXT,
   Events_title TEXT,
   Events_country TEXT STATIC,
   PRIMARY KEY((Event_venue, Event_year), Event_artifact)
);

You can check the partitioning logical reference model for the above example as follows −

K - Partition key
C - Clustering column
S - Static column
Create a GeeksforGeeks Wrapper Application using Electron - GeeksforGeeks
29 Jan, 2020

Electron is an open-source and platform-independent framework that is used to create native desktop applications using the power of the Chromium engine and Node.js. We are going to create a simple application using Electron that acts as a wrapper around the GeeksforGeeks website. It contains the ability to quickly navigate to important parts of the website, open the 'Online IDE' separately whenever required, and save articles that can later be read offline. It makes use of various features of the Electron framework, including browser windows, native menus, file handling, and packaging the application for distribution.

Prerequisites: The Node.js runtime is required to run an Electron app. This includes tools like npm that help build and install the needed packages.

Installation of Node.js on Windows.
Knowledge of HTML, CSS and JavaScript.
Introductory knowledge of Electron and Node.js.

Initialize a new Node project: Navigate to the place where you want your project to be created. Open a command prompt and use the following command to initialize a new project:

npm init

Fill in the details of the project as asked in the command prompt window. This will create a package.json file that indicates all the libraries that would be used to run the application.
Install the Electron package using the following npm command:

npm install electron

Open the package.json file and change the "scripts" portion to the following:

"scripts": {
    "start": "electron ."
}

This makes it easy to run our app through the npm utility.

Creating the Electron basic structure: We start by creating the basic structure of our application. The index.js file (or the respective file configured in package.json) is the entry point where the Electron executable will attempt to start the application. We will define the application structure in the index.js file with the following code:

Program:

const { app, BrowserWindow } = require('electron')

// Global variable that holds the app window
let win

function createWindow() {
    // Creating the browser window
    win = new BrowserWindow({
        width: 960,
        height: 540,
    })

    // Load a redirecting url from login to the feed
    win.loadURL('https://auth.geeksforgeeks.org/?to=https://auth.geeksforgeeks.org/profile.php')

    win.on('closed', () => {
        win = null
    })

    // Prevent from spawning new windows
    win.webContents.on('new-window', (event, url) => {
        event.preventDefault()
        win.loadURL(url)
    })
}

// Executing the createWindow function when the app is ready
app.on('ready', createWindow)

The application can be run using the following command:

npm start

Output:

Explanation: We have defined a BrowserWindow with the dimensions of the window and loaded the login page of the website using the loadURL() method. The BrowserWindow is like a browser embedded in our application that can be used to navigate through webpages. Whenever this application is run, it creates an instance of the BrowserWindow and loads the specified URL into the window.

Creating the Menu: Electron applications can create menu items that are displayed natively in the application's menubar. These can be linked to actions that take place when they are clicked. The menu is initially created from a template that defines how each menu and submenu should appear and what their roles would be. Our menu has 6 parts:

File: It has the option to save the current page and also exit the application.
Site: It has the options to log in and log out of the website.
Learn: It has options for the various parts of the website that host written articles.
Practice Questions: It has the option to open the Online IDE in a separate window and also to go directly to the questions based on their difficulty.
Contribute: It has various options that correspond to the contribution of articles to the website.
Saved Articles: It allows access to all the articles that have been saved previously.

All the URLs have been directly sourced from the GeeksforGeeks website. The "Saved Articles" portion is left empty so that it can be updated later. A separate window is created for the online IDE by creating a new instance of BrowserWindow and loading the URL in it.
The final template menu code is as follows:

Program:

let menu_template = [
    {
        label: 'File',
        submenu: [
            {
                label: 'Save Page Offline',
                click() {
                    savePageOffline()
                }
            },
            { type: 'separator' },
            {
                label: 'Exit',
                click() {
                    app.quit()
                }
            }
        ]
    },
    {
        label: 'Site',
        submenu: [
            {
                label: 'Login',
                click() {
                    win.loadURL("https://auth.geeksforgeeks.org")
                }
            },
            {
                label: 'Logout',
                click() {
                    win.loadURL("https://auth.geeksforgeeks.org/logout.php")
                }
            },
        ]
    },
    {
        label: 'Learn',
        submenu: [
            {
                label: 'Quiz Corner',
                click() {
                    win.loadURL("https://www.geeksforgeeks.org/quiz-corner-gq/")
                }
            },
            {
                label: 'Last Minute Notes',
                click() {
                    win.loadURL("https://www.geeksforgeeks.org/lmns-gq/")
                }
            },
            {
                label: 'Interview Experiences',
                click() {
                    win.loadURL("https://www.geeksforgeeks.org/company-interview-corner/")
                }
            },
            {
                label: 'Must-Do Questions',
                click() {
                    win.loadURL("https://www.geeksforgeeks.org/must-do-coding-questions-for-companies-like-amazon-microsoft-adobe/")
                }
            }
        ]
    },
    {
        label: 'Practice Questions',
        submenu: [
            {
                label: 'Online IDE',
                click() {
                    // Creating new browser window for IDE
                    ide_win = new BrowserWindow({
                        width: 800,
                        height: 450,
                    })
                    ide_win.loadURL("https://ide.geeksforgeeks.org")

                    // Delete this window when closed
                    ide_win.on('closed', () => {
                        ide_win = null
                    })
                }
            },
            { type: 'separator' },
            {
                label: 'Easy Questions',
                click() {
                    win.loadURL("https://practice.geeksforgeeks.org/explore/?difficulty[]=0&page=1")
                }
            },
            {
                label: 'Medium Questions',
                click() {
                    win.loadURL("https://practice.geeksforgeeks.org/explore/?difficulty[]=1&page=1")
                }
            },
            {
                label: 'Hard Questions',
                click() {
                    win.loadURL("https://practice.geeksforgeeks.org/explore/?difficulty[]=2&page=1")
                }
            },
            { type: 'separator' },
            {
                label: 'Latest Questions',
                click() {
                    win.loadURL("https://practice.geeksforgeeks.org/recent.php")
                }
            }
        ]
    },
    {
        label: 'Contribute',
        submenu: [
            {
                label: 'Write New Article',
                click() {
                    win.loadURL("https://contribute.geeksforgeeks.org/wp-admin/post-new.php")
                }
            },
            {
                label: 'Pick Suggested Article',
                click() {
                    win.loadURL("https://contribute.geeksforgeeks.org/request-article/request-article.php#pickArticleDiv")
                }
            },
            {
                label: 'Write Interview Experience',
                click() {
                    win.loadURL("https://contribute.geeksforgeeks.org/wp-admin/post-new.php?interview_experience")
                }
            }
        ]
    },
    {
        id: 'saved',
        label: 'Saved Articles',
        submenu: []
    }
]

Explanation: We will first import the Menu and MenuItem namespaces; these contain the definitions of the methods that we are going to use. The label property defines the text of each item, and the submenu property specifies the array of submenu items that open up when a MenuItem is clicked. After each label, one can define the action that takes place when the submenu is clicked. For example, we load other parts of the website using the loadURL() method: whenever the user clicks a submenu, it executes this method and a new part of the website is loaded. A variable is defined which holds the menu built from the template. The Menu namespace has the methods buildFromTemplate() and setApplicationMenu() to use the created menu in our application.

// Build the template and use the menu
const menu = Menu.buildFromTemplate(menu_template)
Menu.setApplicationMenu(menu)

The menubar and the submenus
The Online IDE in a separate window

Adding functionality of Saving Pages: We will now add the functionality of saving a page to disk so that it can be accessed later even without an internet connection. We will first define the location where our articles would be stored.
We can get the current working directory and create a folder for the saved pages.

const savedFolder = __dirname + '\\saved\\'

There are three functions that work together to save and retrieve the articles:

The appendItemToMenu(filename) function: This function adds the given page title to the 'Saved Articles' submenu and links it so that the page is loaded whenever the user clicks on it. The currently active menu is retrieved using the getApplicationMenu() method. It then uses the append() method to add a new MenuItem. The MenuItem constructor is given the filename that is displayed as the label, and also the action that takes place when it is clicked. The current menu is updated automatically, so the latest page can be used immediately.

Code:

function appendItemToMenu(filename) {
    curr_menu = Menu.getApplicationMenu()
        .getMenuItemById("saved").submenu
    curr_menu.append(
        new MenuItem({
            label: path.basename(filename, '.html'),
            click() {
                console.log('Saved page opened')
                win.loadFile(savedFolder + path.basename(filename))
            }
        }))
}

The savePageOffline() function: This function saves the whole page, along with all images and stylesheets, to disk. The file name is determined by using the getTitle() method, which returns the title of the current page. The contents.savePage() method takes the current webpage and saves it to the given location with the above title. It also calls the appendItemToMenu() function above, which updates the menu.

Code:

function savePageOffline() {
    pageTitle = win.getTitle()
    console.log("Saving:", pageTitle)
    win.webContents.savePage(savedFolder + pageTitle + '.html',
        'HTMLComplete').then(() => {
        appendItemToMenu(pageTitle + '.html');
        console.log('Page was saved successfully.')
    }).catch(err => {
        console.log(err)
    })
}

The getSavedArticles() function: This function retrieves all the files in the given folder and then adds them to the menu. It uses the readdirSync() method to return all the filenames present in the 'saved' directory. The filenames are checked so that only the ones with an extension of ".html" are considered. These are then passed to the appendItemToMenu() function so that the menu is updated for each saved item.

Code:

function getSavedArticles() {
    fs.readdirSync(savedFolder).forEach(file => {
        if (path.extname(file) == '.html') {
            appendItemToMenu(file)
        }
    });
}

The savePageOffline() function is invoked from "Save Page Offline" in the "File" menu. The getSavedArticles() function is invoked during the creation of the BrowserWindow so that the previous pages are immediately available. The appendItemToMenu() function is invoked whenever a new page is saved. This allows us to seamlessly save and retrieve articles that can be read offline.

Saving a page and retrieving a saved page

Packaging the Application: As Electron is a platform-independent framework, the applications can be run on all the major platforms using a single codebase. The Electron community has created a package that bundles a finished application for the various supported platforms and makes it ready for distribution.
let menu_template = [
   {
      label: 'File',
      submenu: [
         {
            label: 'Save Page Offline',
            click() {
               savePageOffline()
            }
         },
         { type: 'separator' },
         {
            label: 'Exit',
            click() {
               app.quit()
            }
         }
      ]
   },
   {
      label: 'Site',
      submenu: [
         {
            label: 'Login',
            click() {
               win.loadURL("https://auth.geeksforgeeks.org")
            }
         },
         {
            label: 'Logout',
            click() {
               win.loadURL("https://auth.geeksforgeeks.org/logout.php")
            }
         },
      ]
   },
   {
      label: 'Learn',
      submenu: [
         {
            label: 'Quiz Corner',
            click() {
               win.loadURL("https://www.geeksforgeeks.org/quiz-corner-gq/")
            }
         },
         {
            label: 'Last Minute Notes',
            click() {
               win.loadURL("https://www.geeksforgeeks.org/lmns-gq/")
            }
         },
         {
            label: 'Interview Experiences',
            click() {
               win.loadURL("https://www.geeksforgeeks.org/company-interview-corner/")
            }
         },
         {
            label: 'Must-Do Questions',
            click() {
               win.loadURL("https://www.geeksforgeeks.org/must-do-coding-questions-for-companies-like-amazon-microsoft-adobe/")
            }
         }
      ]
   },
   {
      label: 'Practice Questions',
      submenu: [
         {
            label: 'Online IDE',
            click() {
               // Creating new browser window for IDE
               ide_win = new BrowserWindow({
                  width: 800,
                  height: 450,
               })
               ide_win.loadURL("https://ide.geeksforgeeks.org")
               // Delete this window when closed
               ide_win.on('closed', () => {
                  ide_win = null
               })
            }
         },
         { type: 'separator' },
         {
            label: 'Easy Questions',
            click() {
               win.loadURL("https://practice.geeksforgeeks.org/explore/?difficulty[]=0&page=1")
            }
         },
         {
            label: 'Medium Questions',
            click() {
               win.loadURL("https://practice.geeksforgeeks.org/explore/?difficulty[]=1&page=1")
            }
         },
         {
            label: 'Hard Questions',
            click() {
               win.loadURL("https://practice.geeksforgeeks.org/explore/?difficulty[]=2&page=1")
            }
         },
         { type: 'separator' },
         {
            label: 'Latest Questions',
            click() {
               win.loadURL("https://practice.geeksforgeeks.org/recent.php")
            }
         }
      ]
   },
   {
      label: 'Contribute',
      submenu: [
         {
            label: 'Write New Article',
            click() {
               win.loadURL("https://contribute.geeksforgeeks.org/wp-admin/post-new.php")
            }
         },
         {
            label: 'Pick Suggested Article',
            click() {
               win.loadURL("https://contribute.geeksforgeeks.org/request-article/request-article.php#pickArticleDiv")
            }
         },
         {
            label: 'Write Interview Experience',
            click() {
               win.loadURL("https://contribute.geeksforgeeks.org/wp-admin/post-new.php?interview_experience")
            }
         }
      ]
   },
   {
      id: 'saved',
      label: 'Saved Articles',
      submenu: []
   }
]

Explanation:

We will first import the Menu and MenuItem namespaces. These contain the definition of methods that we are going to use.
The label property defines what the text of each item would be.
The submenu property specifies the array of submenu items that would open up when clicked on a MenuItem.
After each label, one could define the action that would take place when the submenu is clicked. For example, we will load other parts of the website using the loadURL() method. Whenever the user clicks a submenu, it will execute this method and a new part of the website would be loaded.
A variable is defined which holds the menu that would be built from the template. The Menu namespace has the methods buildFromTemplate() and setApplicationMenu() to use the created menu in our application.

// Build the template and use the menu
const menu = Menu.buildFromTemplate(menu_template)
Menu.setApplicationMenu(menu)

The menubar and the submenus

The Online IDE in a separate window

Adding functionality of Saving Pages: We will now add the functionality of saving a page to the disk so that it could be accessed later even without an internet connection. We will first define the location where our articles would be stored. We can get the current working directory and create a folder for the saved pages.
const savedFolder = __dirname + '\\saved\\'

There are three functions that work together to save and retrieve the articles:

The appendItemToMenu(filename) function:

This function adds the given page title to the ‘Saved Pages’ submenu and also links it so that the pages would be loaded whenever the user clicks on them.
The currently active menu is retrieved using the getApplicationMenu() method.
It then uses the append() method to add a new MenuItem. This MenuItem constructor is given the filename that would be displayed as the label and also the functionality that would take place when it is clicked.
It will automatically update the current menu and the latest page can be used immediately.

Code:

function appendItemToMenu(filename) {
    curr_menu = Menu.getApplicationMenu()
        .getMenuItemById("saved").submenu
    curr_menu.append(
        new MenuItem({
            label: path.basename(filename, '.html'),
            click() {
                console.log('Saved page opened')
                win.loadFile(savedFolder + path.basename(filename))
            }
        }))
}

The savePageOffline() function:

This function saves the whole page along with all images and stylesheets to disk.
The file name is determined by using the getTitle() method, which returns the title of the current page.
The contents.savePage() method will take the current webpage and save it to the given location with the above title.
It also calls the appendItemToMenu() above, which updates the menu.
Code:

function savePageOffline() {
    pageTitle = win.getTitle()
    console.log("Saving:", pageTitle)
    win.webContents.savePage(savedFolder + pageTitle + '.html',
        'HTMLComplete').then(() => {
            appendItemToMenu(pageTitle + '.html');
            console.log('Page was saved successfully.')
        }).catch(err => {
            console.log(err)
        })
}

The getSavedArticles() function:

This function retrieves all the files in the given folder and then adds them to the menu.
It uses the readdirSync() method to return all the filenames present in the ‘saved’ directory.
The filenames are checked so that only the ones with an extension of “.html” are considered.
These are then passed to the appendItemToMenu() function so that the menu is updated for each saved item.

Code:

function getSavedArticles() {
    fs.readdirSync(savedFolder).forEach(file => {
        if (path.extname(file) == '.html') {
            appendItemToMenu(file)
        }
    });
}

The savePageOffline() function is invoked from “Save Page Offline” in the “File” menu. The getSavedArticles() function is invoked during the creation of the BrowserWindow so that the previous pages are immediately available. The appendItemToMenu() function is invoked whenever a new page is saved. This allows the user to seamlessly save and retrieve articles that can be read offline.

Saving a page and retrieving a saved page

Packaging the Application: As Electron is a platform-independent framework, the applications could be run on all the major platforms using a single codebase.
The Electron community has created a package that bundles a finished application for the various supported platforms and makes it ready for distribution. The electron-packager tool can be globally installed for use in the CLI using the following command:

npm install electron-packager -g

The electron-packager has the following syntax:

electron-packager <sourcedir> <appname> --platform=<platform> --arch=<architecture> [optional flags...]

The ‘platform’ and ‘architecture’ could be specified if one is developing for a certain platform. Running the packager specifying only the ‘sourcedir’ and ‘appname’ will produce a bundle that could only be run on the host platform/architecture:

electron-packager . geeksforgeeks-desktop

The final packaged application for the windows platform

Further Reading: We have covered a very basic application that shows some of the features of Electron. The framework has many more features that can be integrated together to build more complex applications. It is advised to read further through the following links:

Official Electron Documentation
Collection of apps built with Electron: electron-apps
Reference Code for this application: geeksforgeeks-desktop
Comments in R - GeeksforGeeks
28 Jul, 2020

Comments are generic English sentences, mostly written in a program to explain what it does or what a piece of code is supposed to do. More specifically, they carry information that the programmer should be concerned with but that has nothing to do with the logic of the code. They are completely ignored by the compiler and are thus never reflected in the output. The question arises: how will the compiler know whether a given statement is a comment or not? The answer is pretty simple. All languages use a symbol to denote a comment, and this symbol, when encountered by the compiler, helps it to differentiate between a comment and a statement.

Comments are generally used for the following purposes:

Code readability
Explanation of the code or metadata of the project
Preventing execution of code
Including resources

Types of Comments

There are generally three types of comments supported by languages, namely:

Single-line Comments - comments that only need one line
Multi-line Comments - comments that require more than one line
Documentation Comments - comments usually drafted for a quick documentation look-up

Note: R doesn’t support multi-line and documentation comments. It only supports single-line comments drafted with a ‘#’ symbol.

As stated in the note above, R currently doesn’t have support for multi-line comments and documentation comments. R provides its users with single-line comments to add information about the code. Single-line comments are comments that require only one line. They are usually drafted to explain what a single line of code does or what it is supposed to produce, to help someone referring to the source code. Just like Python single-line comments, any statement starting with “#” is a comment in R.

Syntax:

# comment statement

Example 1:

# geeksforgeeks

The above code, when executed, will not produce any output, because R will consider the statement a comment and hence the compiler will ignore the line.
Example 2:

# R program to add two numbers

# Assigning values to variables
a <- 9
b <- 4

# Printing sum
print(a + b)

Output:

[1] 13

As stated earlier, R doesn’t support multi-line comments, but to make the commenting process easier, R allows commenting multiple single lines at once. There are two ways to add multiple single-line comments in RStudio:

First way: Select the multiple lines which you want to comment using the cursor and then use the key combination “Control + Shift + C” to comment or uncomment the selected lines.

Second way: The other way is to use the GUI. Select the lines which you want to comment using the cursor and click on “Code” in the menu; a pop-up window appears in which we need to select “Comment/Uncomment Lines”, which appropriately comments or uncomments the lines you have selected.

This makes the process of commenting a block of code easier and faster than adding # before each line one at a time.
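Since the article draws the analogy to Python's single-line comments, here is the same addition example written in Python for comparison (a side note, not part of the original R examples — Python also treats everything after # as a comment):

```python
# Python program to add two numbers (mirrors the R example above)

# Assigning values to variables
a = 9
b = 4

# print(a - b)   <- a commented-out statement is ignored entirely
print(a + b)     # prints 13
```

As in R, the commented-out line never executes and never appears in the output.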
BabelJS - Examples
We will use ES6 features and create a simple project. Babeljs will be used to compile the code to ES5. The project will have a set of images, which will autoslide after a fixed number of seconds. We will use an ES6 class to work on it. We have used babel 6 in the project setup. In case you want to switch to babel 7, install the required packages of babel using @babel/babel-package-name.

We will use gulp to build the project. To start with, we will create the project setup as shown below −

npm init

We have created a folder called babelexample. Further, we will install gulp and other required dependencies.

npm install gulp --save-dev
npm install gulp-babel --save-dev
npm install gulp-connect --save-dev
npm install babel-preset-env --save-dev

Here is the Package.json after installation −

We will add the Preset environment details to the .babelrc file as follows −

Since we need the gulp task to build the final file, we will create gulpfile.js with the task that we need.

gulpfile.js

var gulp = require('gulp');
var babel = require('gulp-babel');
var connect = require("gulp-connect");

gulp.task('build', () => {
   gulp.src('src/./*.js')
   .pipe(babel())
   .pipe(gulp.dest('./dev'))
});

gulp.task('watch', () => {
   gulp.watch('./*.js', ['build']);
});

gulp.task("connect", function () {
   connect.server({
      root: ".",
      livereload: true
   });
});

gulp.task('start', ['build', 'watch', 'connect']);

We have created three tasks in gulp, [‘build’, ’watch’, ’connect’]. All the js files available in the src folder will be converted to es5 using babel as follows −

gulp.task('build', () => {
   gulp.src('src/./*.js')
   .pipe(babel())
   .pipe(gulp.dest('./dev'))
});

The final changes are stored in the dev folder. Babel uses preset details from .babelrc. In case you want to change to some other preset, you can change the details in the .babelrc file. Now, we will create a .js file in the src folder using es6 JavaScript and run the gulp start command to execute the changes.
The project structure is as follows −

src/slidingimage.js

class SlidingImage {
   constructor(width, height, imgcounter, timer) {
      this.counter = 0;
      this.imagecontainerwidth = width;
      this.imagecontainerheight = height;
      this.slidercounter = imgcounter;
      this.slidetimer = timer;
      this.startindex = 1;
      this.css = this.applycss();
      this.maincontainer = this.createContainter();
      this.childcontainer = this.imagecontainer();
      this.autoslide();
   }
   createContainter() {
      let maindiv = document.createElement('div');
      maindiv.id = "maincontainer";
      maindiv.class = "maincontainer";
      document.body.appendChild(maindiv);
      return maindiv;
   }
   applycss() {
      let slidercss = ".maincontainer{ position : relative; margin :auto;}.left, .right { cursor: pointer; position: absolute;" +
         "top: 50%; width: auto; padding: 16px; margin-top: -22px; color: white; font-weight: bold; " +
         "font-size: 18px; transition: 0.6s ease; border-radius: 0 3px 3px 0; }.right { right: 0; border-radius: 3px 0 0 3px;}" +
         ".left:hover, .right:hover { background-color: rgba(0,0,0,0.8);}";
      let style = document.createElement('style');
      style.id = "slidercss";
      style.type = "text/css";
      document.getElementsByTagName("head")[0].appendChild(style);
      let styleall = style;
      if (styleall.styleSheet) {
         styleall.styleSheet.cssText = slidercss;
      } else {
         let text = document.createTextNode(slidercss);
         style.appendChild(text);
      }
   }
   imagecontainer() {
      let childdiv = [];
      let imgcont = [];
      // loop over all images, from 1 up to slidercounter
      for (let a = 1; a <= this.slidercounter; a++) {
         childdiv[a] = document.createElement('div');
         childdiv[a].id = "childdiv" + a;
         childdiv[a].style.width = this.imagecontainerwidth + "px";
         childdiv[a].style.height = this.imagecontainerheight + "px";
         if (a > 1) {
            childdiv[a].style.display = "none";
         }
         imgcont[a] = document.createElement('img');
         imgcont[a].src = "src/img/img" + a + ".jpg";
         imgcont[a].style.width = "100%";
         imgcont[a].style.height = "100%";
         childdiv[a].appendChild(imgcont[a]);
         this.maincontainer.appendChild(childdiv[a]);
      }
   }
   autoslide() {
      console.log(this.startindex);
      let previousimg = this.startindex;
      this.startindex++;
      if (this.startindex > 5) {
         this.startindex = 1;
      }
      setTimeout(() => {
         document.getElementById("childdiv" + this.startindex).style.display = "";
         document.getElementById("childdiv" + previousimg).style.display = "none";
         this.autoslide();
      }, this.slidetimer);
   }
}

let a = new SlidingImage(300, 250, 5, 5000);

We will create an img/ folder in src/ as we need images to be displayed; these images rotate every 5 seconds. The dev/ folder will store the compiled code. Run gulp start to build the final file. The final project structure is as shown below −

In slidingimage.js, we have created a class called SlidingImage, which has methods like createContainter, imagecontainer, and autoslide, which create the main container and add images to it. The autoslide method helps in changing the image after the specified time interval.

let a = new SlidingImage(300, 250, 5, 5000);

At this stage, the class is called. We will pass the width, the height, the number of images, and the number of seconds after which to rotate the image.
gulp start dev/slidingimage.js "use strict"; var _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }(); function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } } var SlidingImage = function () { function SlidingImage(width, height, imgcounter, timer) { _classCallCheck(this, SlidingImage); this.counter = 0; this.imagecontainerwidth = width; this.imagecontainerheight = height; this.slidercounter = imgcounter; this.slidetimer = timer; this.startindex = 1; this.css = this.applycss(); this.maincontainer = this.createContainter(); this.childcontainer = this.imagecontainer(); this.autoslide(); } _createClass(SlidingImage, [{ key: "createContainter", value: function createContainter() { var maindiv = document.createElement('div'); maindiv.id = "maincontainer"; maindiv.class = "maincontainer"; document.body.appendChild(maindiv); return maindiv; } }, { key: "applycss", value: function applycss() { var slidercss = ".maincontainer{ position : relative; margin :auto;}.left, .right { cursor: pointer; position: absolute;" + "top: 50%; width: auto; padding: 16px; margin-top: -22px; color: white; font-weight: bold; " + "font-size: 18px; transition: 0.6s ease; border-radius: 0 3px 3px 0; } .right { right: 0; border-radius: 3px 0 0 3px;}" + ".left:hover, .right:hover { background-color: rgba(0,0,0,0.8);}"; var style = document.createElement('style'); style.id = "slidercss"; style.type = 
"text/css"; document.getElementsByTagName("head")[0].appendChild(style); var styleall = style; if (styleall.styleSheet) { styleall.styleSheet.cssText = slidercss; } else { var text = document.createTextNode(slidercss); style.appendChild(text); } } }, { key: "imagecontainer", value: function imagecontainer() { var childdiv = []; var imgcont = []; for (var _a = 1; _a <= this.slidercounter; _a++) { childdiv[_a] = document.createElement('div'); childdiv[_a].id = "childdiv" + _a; childdiv[_a].style.width = this.imagecontainerwidth + "px"; childdiv[_a].style.height = this.imagecontainerheight + "px"; if (_a > 1) { childdiv[_a].style.display = "none"; } imgcont[_a] = document.createElement('img'); imgcont[_a].src = "src/img/img" + _a + ".jpg"; imgcont[_a].style.width = "100%"; imgcont[_a].style.height = "100%"; childdiv[_a].appendChild(imgcont[_a]); this.maincontainer.appendChild(childdiv[_a]); } } }, { key: "autoslide", value: function autoslide() { var _this = this; console.log(this.startindex); var previousimg = this.startindex; this.startindex++; if (this.startindex > 5) { this.startindex = 1; } setTimeout(function () { document.getElementById("childdiv" + _this.startindex).style.display = ""; document.getElementById("childdiv" + previousimg).style.display = "none"; _this.autoslide(); }, this.slidetimer); } }]); return SlidingImage; }(); var a = new SlidingImage(300, 250, 5, 5000);

We will test the line of code in the browser as shown below −

index.html

<html>
   <head></head>
   <body>
      <script type="text/javascript" src="dev/slidingimage.js"></script>
      <h1>Sliding Image Demo</h1>
   </body>
</html>

We have used the compiled file from the dev folder in index.html. The command gulp start starts the server where we can test the output. The compiled code works fine in all browsers.
DocumentDB - Query Document
In DocumentDB, we actually use SQL to query for documents, so this chapter is all about querying using the special SQL syntax in DocumentDB. Although if you are doing .NET development, there is also a LINQ provider that can be used and which can generate appropriate SQL from a LINQ query.

The Azure portal has a Query Explorer that lets you run any SQL query against your DocumentDB database. We will use the Query Explorer to demonstrate the many different capabilities and features of the query language, starting with the simplest possible query.

Step 1 − In the database blade, click to open the Query Explorer blade. Remember that queries run within the scope of a collection, and so the Query Explorer lets you choose the collection in this dropdown.

Step 2 − Select the Families collection which was created earlier using the portal. The Query Explorer opens up with the simple query SELECT * FROM c, which simply retrieves all documents from the collection.

Step 3 − Execute this query by clicking the ‘Run query’ button. Then you will see that the complete document is retrieved in the Results blade.

Following are the steps to run some document queries using the .NET SDK. In this example, we want to query for the newly created documents that we just added.

Step 1 − Call CreateDocumentQuery, passing in the collection to run the query against by its SelfLink and the query text.
private async static Task QueryDocumentsWithPaging(DocumentClient client) {
    Console.WriteLine();
    Console.WriteLine("**** Query Documents (paged results) ****");
    Console.WriteLine();
    Console.WriteLine("Quering for all documents");

    var sql = "SELECT * FROM c";
    var query = client.CreateDocumentQuery(collection.SelfLink, sql).AsDocumentQuery();

    while (query.HasMoreResults) {
        var documents = await query.ExecuteNextAsync();
        foreach (var document in documents) {
            Console.WriteLine(" Id: {0}; Name: {1};", document.id, document.name);
        }
    }
    Console.WriteLine();
}

This query is also returning all documents in the entire collection, but we're not calling .ToList on CreateDocumentQuery as before, which would issue as many requests as necessary to pull down all the results in one line of code.

Step 2 − Instead, call AsDocumentQuery and this method returns a query object with a HasMoreResults property.

Step 3 − If HasMoreResults is true, then call ExecuteNextAsync to get the next chunk and then dump all the contents of that chunk.

Step 4 − You can also query using LINQ instead of SQL if you prefer. Here we've defined a LINQ query in q, but it won't execute until we run .ToList on it.
private static void QueryDocumentsWithLinq(DocumentClient client) {
    Console.WriteLine();
    Console.WriteLine("**** Query Documents (LINQ) ****");
    Console.WriteLine();
    Console.WriteLine("Quering for US customers (LINQ)");

    var q =
        from d in client.CreateDocumentQuery<Customer>(collection.DocumentsLink)
        where d.Address.CountryRegionName == " United States"
        select new {
            Id = d.Id,
            Name = d.Name,
            City = d.Address.Location.City
        };

    var documents = q.ToList();
    Console.WriteLine("Found {0} US customers", documents.Count);

    foreach (var document in documents) {
        var d = document as dynamic;
        Console.WriteLine(" Id: {0}; Name: {1}; City: {2}", d.Id, d.Name, d.City);
    }
    Console.WriteLine();
}

The SDK will convert our LINQ query into SQL syntax for DocumentDB, generating a SELECT and WHERE clause based on our LINQ syntax.

Step 5 − Now call the above queries from the CreateDocumentClient task.

private static async Task CreateDocumentClient() {
    // Create a new instance of the DocumentClient
    using (var client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey)) {
        database = client.CreateDatabaseQuery("SELECT * FROM c WHERE c.id = 'myfirstdb'").AsEnumerable().First();
        collection = client.CreateDocumentCollectionQuery(database.CollectionsLink,
            "SELECT * FROM c WHERE c.id = 'MyCollection'").AsEnumerable().First();

        //await CreateDocuments(client);
        await QueryDocumentsWithPaging(client);
        QueryDocumentsWithLinq(client);
    }
}

When the above code is executed, you will receive the following output.

**** Query Documents (paged results) ****
Quering for all documents
 Id: 7e9ad4fa-c432-4d1a-b120-58fd7113609f; Name: New Customer 1;
 Id: 34e9873a-94c8-4720-9146-d63fb7840fad; Name: New Customer 1;

**** Query Documents (LINQ) ****
Quering for US customers (LINQ)
Found 2 US customers
 Id: 7e9ad4fa-c432-4d1a-b120-58fd7113609f; Name: New Customer 1; City: Brooklyn
 Id: 34e9873a-94c8-4720-9146-d63fb7840fad; Name: New Customer 1; City: Brooklyn
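The HasMoreResults / ExecuteNextAsync loop used in QueryDocumentsWithPaging is an instance of a generic "fetch pages until exhausted" pattern. The sketch below illustrates that pattern in Python with a hypothetical PagedQuery stand-in — it is not the DocumentDB SDK, just the shape of the loop:

```python
class PagedQuery:
    """Hypothetical stand-in that serves documents one page at a time,
    mimicking the SDK's HasMoreResults / ExecuteNextAsync pair."""

    def __init__(self, documents, page_size):
        self._docs = documents
        self._page_size = page_size
        self._pos = 0

    @property
    def has_more_results(self):
        # True while there are documents we have not yet handed out
        return self._pos < len(self._docs)

    def execute_next(self):
        # Return the next fixed-size chunk and advance the cursor
        page = self._docs[self._pos:self._pos + self._page_size]
        self._pos += self._page_size
        return page

# Same shape as the C# while-loop: keep pulling chunks until none remain.
query = PagedQuery([{"id": i} for i in range(5)], page_size=2)
fetched = []
while query.has_more_results:
    for document in query.execute_next():
        fetched.append(document)

print(len(fetched))  # all 5 documents retrieved across 3 pages
```

The point of the pattern is that the client controls pagination explicitly, pulling one page per round trip instead of materialising the whole result set at once as .ToList does.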
Stepwise Regression Tutorial in Python | by Ryan Kwok | Towards Data Science
How do you find meaning in data? In our mini project, my friend @ErikaSM and I seek to predict Singapore’s minimum wage if we had one, and documented that process in an article over here. If you have not read it, do take a look. Since then, we have had comments on our process and suggestions to develop deeper insight into our information. As such, this follow-up article outlines two main objectives, finding meaning in data, and learning how to do stepwise regression.

In the previous article, we discussed how the talk about a minimum wage in Singapore has frequently been a hot topic for debates. This is because Singapore uses a progressive wage model and hence does not have a minimum wage. The official stance of the Singapore Government is that a competitive pay structure will motivate the labour force to work hard, aligned with the value of Meritocracy embedded in Singapore culture. Regardless of the arguments for or against minimum wages in Singapore, the poor struggle to afford necessities and take care of themselves and their families.

We took a neutral stance acknowledging the validity of both sides of the argument and instead presented a comparison of a prediction of Singapore’s minimum wage using certain metrics across different countries. The predicted minimum wage was also contrasted with the wage floors in the Progressive Wage Model (PWM) across certain jobs to spark some discussion about whether the poorest are earning enough.

We used data from Wikipedia and World Data to collect data on minimum wage, cost of living, and quality of life. The quality of life dataset includes scores in a few categories: Stability, Rights, Health, Safety, Climate, Costs, and Popularity. The scores across the indicators and categories were fed into a linear regression model, which was then used to predict the minimum wage using Singapore’s statistics as independent variables.
This linear model was coded in Python using sklearn, and more details about the coding can be viewed in our previous article. However, I will also briefly outline the modelling and prediction process in this article as well. The predicted annual minimum wage was US$20,927.50 for Singapore. A brief comparison can be seen in the graph below.

Our professor encouraged us to use stepwise regression to better understand our variables. In this iteration, we incorporated stepwise regression to assist us in dimensionality reduction, not only to produce a simpler and more effective model, but also to derive insights from our data.

So what exactly is stepwise regression? In any phenomenon, there will be certain factors that play a bigger role in determining an outcome. In simple terms, stepwise regression is a process that helps determine which factors are important and which are not. Certain variables have a rather high p-value and were not meaningfully contributing to the accuracy of our prediction. From there, only important factors are kept to ensure that the linear model does its prediction based on factors that can help it produce the most accurate result.

In this article, I will outline the use of a stepwise regression that uses a backwards elimination approach. This is where all variables are initially included, and in each step, the most statistically insignificant variable is dropped. In other words, the most ‘useless’ variable is kicked out. This is repeated until all variables left over are statistically significant.

Before proceeding to analyse the regression models, we first modified the data to reflect a monthly wage instead of an annual wage. This was because we recognised that most people tend to view their wages in months rather than across the entire year. Expressing our data as such would allow our audience to better understand our data. However, it is also worth noting that this change in scale would not affect the modelling process or the outcomes.
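The backwards-elimination procedure described above can be sketched as a simple loop. In real use, each iteration refits the model (for example with statsmodels) to get fresh p-values; here `fit_p_values` is a hypothetical stand-in returning a p-value per remaining variable, so the control flow can run on its own:

```python
ALPHA = 0.05  # significance threshold

def backward_eliminate(variables, fit_p_values, alpha=ALPHA):
    """Repeatedly drop the least significant variable until every
    remaining p-value is below alpha."""
    selected = list(variables)
    while selected:
        p_values = fit_p_values(selected)              # {variable: p-value}
        worst = max(selected, key=lambda v: p_values[v])
        if p_values[worst] < alpha:                    # all significant: done
            return selected
        selected.remove(worst)                         # kick the 'most useless'
    return selected

# Toy fixed p-values for illustration (a refitted model's p-values
# would actually shift after every removal).
toy_p = {"Workweek": 0.010, "GDP per capita": 0.001, "Cost of Living": 0.020,
         "Safety": 0.968, "Health": 0.830, "Climate": 0.410}

kept = backward_eliminate(toy_p, lambda sel: {v: toy_p[v] for v in sel})
print(kept)  # ['Workweek', 'GDP per capita', 'Cost of Living']
```

The loop mirrors the manual process used below: the variable with the largest p-value is removed first ("Safety" in the toy data), and the procedure stops once every remaining p-value clears the threshold.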
Looking at our previous model, we produced the statistics to test the accuracy of the model. But before that, we would first have to specify the relevant X and Y columns, and obtain that information from the datafile.

## getting column names
x_columns = ["Workweek (hours)", "GDP per capita", "Cost of Living Index", "Stability", "Rights", "Health", "Safety", "Climate", "Costs", "Popularity"]
y = data["Monthly Nominal (USD)"]

Next, to gather the model statistics, we would have to use the statsmodels.api library. Here, a function is created which grabs the columns of interest from a list, and then fits an ordinary least squares linear model to it. The statistics summary can then be very easily printed out.

## creating function to get model statistics
import numpy as np
import statsmodels.api as sm

def get_stats():
    x = data[x_columns]
    results = sm.OLS(y, x).fit()
    print(results.summary())

get_stats()

Here we are concerned with the column “P > |t|”. Quoting some technical explanations from the UCLA Institute for Digital Research and Education, this column gives the 2-tailed p-value used in testing the null hypothesis. “Coefficients having p-values less than alpha are statistically significant. For example, if you chose alpha to be 0.05, coefficients having a p-value of 0.05 or less would be statistically significant (i.e., you can reject the null hypothesis and say that the coefficient is significantly different from 0).” In other words, we would generally want to drop variables with a p-value greater than 0.05.

As seen from the initial summary above, the least statistically significant variable is “Safety” with a p-value of 0.968. Hence, we would want to drop “Safety” as a variable, as shown below. The new summary is shown below as well.

x_columns.remove("Safety")
get_stats()

This time, the new least statistically significant variable is “Health”. Similarly, we would want to remove this variable.
x_columns.remove("Health")
get_stats()

We continue this process until all p-values are below 0.05.

x_columns.remove("Costs")
x_columns.remove("Climate")
x_columns.remove("Stability")

Finally, we find that there are 5 variables left, namely Workweek, GDP per Capita, Cost of Living Index, Rights, and Popularity. Since each of the p-values is below 0.05, all of these variables are said to be statistically significant. We can now produce a linear model based on this new set of variables. We can also use this to predict Singapore’s minimum wage. As seen, the predicted monthly minimum wage is about $1774 USD.

## creating a linear model and prediction
import pandas as pd  # imports carried over from the earlier setup
from sklearn.linear_model import LinearRegression

x = data[x_columns]
linear_model = LinearRegression()
linear_model.fit(x, y)
sg_data = pd.read_csv('testing.csv')
x_test = sg_data[x_columns]
y_pred = linear_model.predict(x_test)
print("Prediction for Singapore is ", y_pred)

>> Prediction for Singapore is [1774.45875071]

This is the most important part of the process. Carly Fiorina, former CEO of Hewlett-Packard, once said: “The goal is to turn data into information, and information into insight.” This is exactly what we aim to achieve.

“The goal is to turn data into information, and information into insight.” ~ Carly Fiorina, former CEO of Hewlett-Packard

From just looking at the variables, we could have easily predicted which ones were statistically significant. For example, the GDP per Capita and Cost of Living Index would logically be good indicators of the minimum wage in a country. Even the number of hours in a workweek would make sense as an indicator. However, we noticed that “Rights” was still included in the linear model. This spurred us to first look at the relationship between Rights and Minimum Wage. Upon plotting the graph, we found this aesthetically pleasing relationship. Initially, we wouldn’t have considered Rights to be correlated to Minimum Wage, since the more obvious candidates of GDP and Cost of Living stood out more as contributors to the minimum wage level.
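The manual drop-and-refit cycle above can also be written as a small loop. The sketch below is illustrative only: the helper name `backward_eliminate` is ours, and the p-value table is hard-coded for brevity, whereas in practice `get_pvalues` would refit the statsmodels OLS model and return fresh p-values after every drop.

```python
def backward_eliminate(columns, get_pvalues, alpha=0.05):
    """Drop the least significant column until all p-values are <= alpha.

    `get_pvalues(cols)` must return {column: p_value} for a model fitted
    on exactly those columns (with statsmodels, refit OLS on each call).
    """
    cols = list(columns)
    while cols:
        pvals = get_pvalues(cols)
        worst = max(cols, key=lambda c: pvals[c])
        if pvals[worst] <= alpha:
            break  # everything left is statistically significant
        cols.remove(worst)
    return cols

# Illustrative, hard-coded p-values (loosely based on the article's summaries)
fake_pvalues = {"Safety": 0.968, "Health": 0.71, "Costs": 0.42,
                "Climate": 0.30, "Stability": 0.12,
                "Workweek (hours)": 0.01, "GDP per capita": 0.001,
                "Cost of Living Index": 0.002, "Rights": 0.03,
                "Popularity": 0.04}

kept = backward_eliminate(fake_pvalues,
                          lambda cols: {c: fake_pvalues[c] for c in cols})
```

With the fixed table above, the loop drops Safety, Health, Costs, Climate, and Stability, and keeps the same five variables the manual process arrived at.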
This made us reconsider how we understood minimum wage and compelled us to dig deeper. From World Data, “Rights” involved civil rights, and revolved mainly around people’s participation in politics and corruption. We found that the Civil Rights Index includes democratic participation by the population and measures to combat corruption. This index also involves public perception of the government including data from Transparency.org. “In addition, other factors include democratic participation by the population and (with less emphasis) measures to combat corruption. In order to assess not only the measures against corruption, but also its perception by the population, the corruption index based on Transparency.org was also taken into account.” This forced us to consider the correlation between Civil Rights and minimum wage. Knowing this information, we did further research and found several articles that might explain this correlation. American civil rights interest group, The Leadership Conference on Civil and Human Rights, released a report about why minimum wage is a civil and human rights issue and the need for stronger minimum wage policy to reduce inequality and ensure that individuals and families struggling in low-paying jobs are paid fairly. It hence makes sense as a country with more democratic participation is also likely to voice concerns about minimum wage, forcing a discussion and consequently increasing it over time. The next variable we looked at was Popularity. We first searched how this was measured from World Data. “The general migration rate and the number of foreign tourists were therefore evaluated as indicators of a country’s popularity. A lower rating was also used to compare the refugee situation in the respective country. A higher number of foreign refugees results in higher popularity, while a high number of fleeing refugees reduces popularity.” At first glance, it seems like there is no correlation. 
However, if we consider China, France, USA, and Spain as outliers, the majority of the data points seem to better fit an exponential graph. This raises two questions. Firstly, why is there a relationship between Popularity and Minimum Wage? Secondly, why are these four countries outliers? To be very honest, this stumped us. We simply could not see any way in which popularity could be correlated to a minimum wage. Nevertheless, there was an important takeaway: that popularity is somehow statistically significant in predicting the minimum wage of a country. While we might not be the people to discover that relationship, this gives insight into our otherwise less meaningful data.

It is important to bring back the quote from Carly Fiorina: “The goal is to turn data into information, and information into insight.” We as humans require tools and methods to convert data into information, and experience/knowledge to convert that information into insight. We first used Python as a tool and executed stepwise regression to make sense of the raw data. This let us discover not only information that we had predicted, but also new information that we did not initially consider. It is easy to guess that Workweek, GDP, and Cost of Living would be strong indicators of the minimum wage. However, it is only through regression that we discovered that Civil Rights and Popularity are also statistically significant. In this case, there was research online that could possibly explain this information. This resulted in the new insight that minimum wage is actually seen as a human right, and that an increase in democratic participation can possibly result in more conversations about a minimum wage and hence increase it.

However, it is not always possible to find meaning in data that easily. Unfortunately, we, as university students, may not be the best people to offer probable explanations for our information.
This is seen in our attempts to explain the relationship between Popularity and Minimum Wage. However, it is within our capacity to take this information and spread it to the world, leaving it as an open-ended question for discussions to flourish. That is how we can add value to the world using data.

Written in collaboration with Erika Medina
How to clear Tkinter Canvas?
Tkinter provides a way to add a canvas in a window, and when we create a canvas, it wraps up some storage inside the memory. Creating a canvas in tkinter effectively consumes some memory, which may need to be cleared or deleted. In order to clear a canvas, we can use the delete() method. By specifying “all”, we can delete all the items that are currently drawn on the canvas.

#Import the tkinter library
from tkinter import *

#Create an instance of tkinter frame
win = Tk()

#Set the geometry
win.geometry("650x250")

#Creating a canvas
myCanvas = Canvas(win, bg="white", height=200, width=200)
coordinates = 10, 10, 200, 200
arc = myCanvas.create_arc(coordinates, start=0, extent=320, fill="red")
myCanvas.pack()

#Clearing the canvas
myCanvas.delete('all')

win.mainloop()

The above code will clear the canvas. First, mark the following line as a comment and execute the code.

myCanvas.delete('all')

It will produce the following window. Now, uncomment the line and execute again to clear the canvas.
How to convert Python dictionary keys/values to lowercase?
You can convert Python dictionary keys/values to lowercase by simply iterating over them and creating a new dict from the keys and values. For example,

def lower_dict(d):
    new_dict = dict((k.lower(), v.lower()) for k, v in d.items())
    return new_dict

a = {'Foo': "Hello", 'Bar': "World"}
print(lower_dict(a))

This will give the output

{'foo': 'hello', 'bar': 'world'}

If you want just the keys to be lowercased, you can call lower on just the key. For example,

def lower_dict(d):
    new_dict = dict((k.lower(), v) for k, v in d.items())
    return new_dict

a = {'Foo': "Hello", 'Bar': "World"}
print(lower_dict(a))

This will give the output

{'foo': 'Hello', 'bar': 'World'}
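One caveat with the snippets above: calling .lower() raises an AttributeError if any key or value is not a string. A slightly more defensive variant (the helper name lower_dict_safe is ours, not from any library) only lowercases the entries that are actually strings:

```python
def lower_dict_safe(d):
    # Lowercase only the keys/values that are actually strings;
    # leave everything else (numbers, None, ...) untouched.
    def low(x):
        return x.lower() if isinstance(x, str) else x
    return {low(k): low(v) for k, v in d.items()}

mixed = {'Foo': "Hello", 2: 3.14, 'Bar': None}
print(lower_dict_safe(mixed))  # {'foo': 'hello', 2: 3.14, 'bar': None}
```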
Generate temporary files and directories using Python
The tempfile module in the standard library defines functions for creating temporary files and directories. They are created in special temp directories that are defined by operating system file systems. For example, under Windows the temp folder resides in profile/AppData/Local/Temp, while in Linux the temporary files are held in the /tmp directory. The following functions are defined in the tempfile module.

TemporaryFile()

This function creates a temporary file in the temp directory and returns a file object, similar to the built-in open() function. The file is opened in wb+ mode by default, which means it can be simultaneously used to read/write binary data in it. What is important, the file’s entry in the temp folder is removed as soon as the file object is closed. Following code shows usage of the TemporaryFile() function.

>>> import tempfile
>>> f = tempfile.TemporaryFile()
>>> f.write(b'Welcome to TutorialsPoint')
>>> import os
>>> f.seek(os.SEEK_SET)
>>> f.read()
b'Welcome to TutorialsPoint'
>>> f.close()

Following example opens a TemporaryFile object in w+ mode to write and then read text data instead of binary data.

>>> ff = tempfile.TemporaryFile(mode = 'w+')
>>> ff.write('hello world')
>>> ff.seek(0)
>>> ff.read()
'hello world'
>>> ff.close()

NamedTemporaryFile()

This function is similar to the TemporaryFile() function. The only difference is that a file with a random filename is visible in the designated temp folder of the operating system. The name can be retrieved by the name attribute of the file object. This file too is deleted immediately upon closing it.

>>> fo = tempfile.NamedTemporaryFile()
>>> fo.name
'C:\\Users\\acer\\AppData\\Local\\Temp\\tmpipreok8q'
>>> fo.close()

TemporaryDirectory()

This function creates a temporary directory. You can choose the location of this temporary directory by mentioning the dir parameter. Following statement will create a temporary directory in the C:\python36 folder.

>>> f = tempfile.TemporaryDirectory(dir = "C:/python36")
<TemporaryDirectory 'C:/python36\\tmp9wrjtxc_'>

The created directory appears in the dir1 folder.
It is removed by calling the cleanup() function on the directory object.

>>> f.name
'C:/python36\\tmp9wrjtxc_'
>>> f.cleanup()

mkstemp()

This function also creates a temporary file, similar to the TemporaryFile() function. Additionally, suffix and prefix parameters are available to add to the temporary file created. Unlike in the case of TemporaryFile(), the created file is not automatically removed. It should be removed manually.

>>> f = tempfile.mkstemp(suffix = '.tp')
C:\Users\acer\AppData\Local\Temp\tmpbljk6ku8.tp

mkdtemp()

This function also creates a temporary directory in the operating system’s temp folder and returns its absolute path name. To explicitly define the location of its creation, use the dir parameter. This folder too is not automatically removed.

>>> tempfile.mkdtemp(dir = "c:/python36")
'c:/python36\\tmpruxmm66u'

gettempdir()

This function returns the name of the directory used to store temporary files. This name is generally obtained from the tempdir environment variable. On the Windows platform, it is generally either user/AppData/Local/Temp or windowsdir/temp or systemdrive/temp. On Linux it normally is /tmp. This directory is used as the default value of the dir parameter.

>>> tempfile.gettempdir()
'C:\\Users\\acer\\AppData\\Local\\Temp'

In this article functions in the tempfile module have been explained.
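Several of these objects can also be used as context managers, which guarantees cleanup even if an exception occurs inside the block. A minimal sketch with TemporaryDirectory:

```python
import os
import tempfile

# TemporaryDirectory cleans itself up when the with-block exits,
# so no explicit cleanup() call is needed.
with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "demo.txt")
    with open(path, "w") as f:
        f.write("Welcome to TutorialsPoint")
    exists_inside = os.path.exists(path)   # True while inside the block

exists_after = os.path.exists(tmpdir)      # False: directory was removed
print(exists_inside, exists_after)
```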
JavaScript | Arithmetic Operators - GeeksforGeeks
20 Jun, 2021

JavaScript Arithmetic Operators are the operators that operate upon numerical values and return a numerical value. There are many operators in JavaScript. Each operator is described below along with its example.

1. Addition (+)

The addition operator takes two numerical operands and gives their numerical sum. It also concatenates two strings or numbers.

Syntax:

a + b

Example:

// Number + Number => Addition
1 + 2 gives 3

// Number + String => Concatenation
5 + "hello" gives "5hello"

2. Subtraction (-)

The subtraction operator gives the difference of two operands in the form of a numerical value.

Syntax:

a - b

Example:

// Number - Number => Subtraction
10 - 7 gives 3
"Hello" - 1 gives NaN

3. Multiplication (*)

The multiplication operator gives the product of operands, where one operand is the multiplicand and the other is the multiplier.

Syntax:

a * b

Example:

// Number * Number => Multiplication
3 * 3 gives 9
-4 * 4 gives -16
Infinity * 0 gives NaN
Infinity * Infinity gives Infinity
'hi' * 2 gives NaN

4. Division (/)

The division operator provides the quotient of its operands, where the right operand is the divisor and the left operand is the dividend.

Syntax:

a / b

Example:

// Number / Number => Division
5 / 2 gives 2.5
1.0 / 2.0 gives 0.5
3.0 / 0 gives Infinity
4.0 / 0.0 gives Infinity, because 0.0 == 0
2.0 / -0.0 gives -Infinity

5. Modulus (%)

The modulus operator returns the remainder left over when a dividend is divided by a divisor. The modulus operator is also known as the remainder operator. It takes the sign of the dividend.

Syntax:

a % b

Example:

// Number % Number => Modulus of the number
9 % 5 gives 4
-12 % 5 gives -2
1 % -2 gives 1
5.5 % 2 gives 1.5
-4 % 2 gives -0
NaN % 2 gives NaN

6. Exponentiation (**)

The exponentiation operator gives the result of raising the first operand to the power of the second operand. The exponentiation operator is right-associative.
Syntax:

a ** b

In JavaScript, it is not possible to write an ambiguous exponentiation expression, i.e. you cannot put a unary operator (+ / - / ~ / ! / delete / void) immediately before the base number.

Example:

// Number ** Number => Exponential of the number
-4 ** 2 // This is an incorrect expression
-(4 ** 2) gives -16, this is a correct expression
2 ** 5 gives 32
3 ** 3 gives 27
3 ** 2.5 gives 15.588457268119896
10 ** -2 gives 0.01
2 ** 3 ** 2 gives 512
NaN ** 2 gives NaN

7. Increment (++)

The increment operator increments (adds one to) its operand and returns a value. If used postfix, with the operator after the operand (for example, x++), then it increments and returns the value before incrementing. If used prefix, with the operator before the operand (for example, ++x), then it increments and returns the value after incrementing.

Syntax:

a++ or ++a

Example:

// Postfix
var a = 2;
b = a++; // b = 2, a = 3

// Prefix
var x = 5;
y = ++x; // x = 6, y = 6

8. Decrement (--)

The decrement operator decrements (subtracts one from) its operand and returns a value. If used postfix, with the operator after the operand (for example, x--), then it decrements and returns the value before decrementing. If used prefix, with the operator before the operand (for example, --x), then it decrements and returns the value after decrementing.

Syntax:

a-- or --a

Example:

// Prefix
var x = 2;
y = --x; gives x = 1, y = 1

// Postfix
var x = 3;
y = x--; gives y = 3, x = 2

9. Unary Negation (-)

This is a unary operator, i.e. it operates on a single operand. It gives the negation of an operand.

Syntax:

-a

Example:

var a = 3;
b = -a; gives b = -3, a = 3

// Unary negation operator
// can convert non-numbers
// into a number
var a = "3";
b = -a; gives b = -3

10. Unary Plus (+)

This is a way to convert a non-number into a number. Although unary negation (-) also can convert non-numbers, unary plus is the fastest and preferred way of converting something into a number, because it does not perform any other operations on the number.
Syntax:

+a

Example:

+4 gives 4
+'2' gives 2
+true gives 1
+false gives 0
+null gives 0
Relational Database Management (RDBMS) Basic for Data Professionals | by Vincent Tatan | Towards Data Science
Data scientists need to work with Database on daily basis. As data analysts and engineers, we need to be proficient in SQL and Database Management. Knowing RDBMS will help us access, communicate and work on data. It will allow us to store and filter alternative data much more quickly and robust. Setup SQLite Disk ConnectionCreate Table with StructureInsert Data Frame values into the table Setup SQLite Disk Connection Create Table with Structure Insert Data Frame values into the table In this tutorial we will learn two ways to execute in Python. The first one is to use SQLite 3, where we will use Python and SQL to execute each learning point. But, we will also briefly talk about the SQL Alchemy which allows us to execute these learning points just by within 4 lines of code. (No joke!) Before that, let us talk about RDBMS A relational database is a type of database. It uses a structure that allows us to identify and access data in relation to another piece of data in the database. Often, data in a relational database is organized into tables. — Codecademy Relational database uses tables which are called records. These records possess many columns with different names and data types. We can then establish connections among records by using primary key and foreign key to identify table schema relationships. Today, there are many limitations of Excel and csv to store our data needs which could be resolved with RDBMS: The data ecosystem changes every day — What is considered big and fast today, might not be so tomorrow. This means that we need a dedicated storage which could flexibly host large amount of data. We need a more scalable storage than Excel and csv. RDBMS is the solution — it allows scalability based on the server distribution rather than Excel who has limited amount of rows and columns (1,048,576 rows by 16,384 columns). RDBMS allows users to establish defined relationships between tables. This will give users a complete pictures of data definitions. 
For example in your shopping receipt, you might several entities such as Product description, Price of item, Store Branch Location, etc. All of those could be separated and joined based on needs. We can now store data separately from our analysis. In Excel, we need to manage different versions to collaborate with your teammates. Each of the file needs to combine different versions of data and analysis. But, in RDBMS, we can now use SQL instructions to reproduce and analyze data separately. This way, we can make sure your teammates generate the updated data and analysis from a centralized data server. Refer to this Codecademy article if you want to know more. news.codecademy.com For data professionals, this skill is valuable. It creates a one stop data storage where everyone comes and leave with the same updated data from their SQL instructions. SQLite provides a lightweight C Library for disk-based database which allows SQL to process CRUD process. This means we could rely on SQLite for many small applications/use cases: SQLite for quick and easy internal data storageSQLite to develop small prototype quicklySQLite to host Proof of Concept (POC) before migrating to larger databases by PostgreSQL or Oracle. SQLite for quick and easy internal data storage SQLite to develop small prototype quickly SQLite to host Proof of Concept (POC) before migrating to larger databases by PostgreSQL or Oracle. PostgreSQL is a very advanced open source database to provide a dedicated data server to run its database. But, SQLite provides a lightweight setup which does not require a dedicated data server. If our data needs include proper administration and security, then PostgreSQL will be the proper choice. Otherwise, SQLite will do. tableplus.io To build the POC for this article, we will use SQLite. But feel free to try using PostgreSQL. We will reuse the problem where we extract critical products information from Lazada. 
Instead of exporting it into csv, we will export it into a SQLite Database. If you are not familiar with this, feel free to skim through my article below.

towardsdatascience.com

For this tutorial, we will use SQLite3 first to generate a connection to the SQLite engine. This will allow us to execute SQL commands to insert the values. After that, we will take a look into SQLAlchemy to shorten and simplify this process without creating any SQL commands.

We first establish the connection to a disk file lazada.db, which is a disk used by the SQLite engine to store data. If lazada.db does not exist, it will create a new one which we could connect to next time.

import sqlite3
conn = sqlite3.connect("lazada.db")
c = conn.cursor()

Notice that when we open up the connection, we also establish a cursor. A database cursor is a tool to traverse over database records. Using this cursor, we can create tables and execute SQL commands into the database disk.

After connecting our SQLite3 to lazada.db, we will use the cursor to execute a SQL query and create the lazada_product table. We will identify our metadata as follows.

c.execute('''
    CREATE TABLE lazada_product (
        time date_time,
        id INTEGER,
        link TEXT NOT NULL,
        product_title TEXT NOT NULL,
        product_price DOUBLE NOT NULL,
        category TEXT NOT NULL,
        PRIMARY KEY (time, id)
    );
''')

Notice how we appoint time and id as the primary key. This means every row has a unique id and date time. If we insert rows with the same id and date time, SQLite will complain and return a duplicate error. This validation is useful to prevent unclean redundant data from entering the database.

Let us insert the extracted product_df into the lazada_product table.

def write_from_df_with_sqlite3(df):
    for index, row in df.iterrows():
        c.execute(
            '''
            INSERT INTO lazada_product
            VALUES (CURRENT_TIMESTAMP,?,?,?,?,?)
            ''',
            (row['id'], row['link'], row['product_title'],
             row['product_price'], row['category'])
        )

Once you run this method, you will successfully dump every value into your lazada_product table. Congratulations, you have created RDBMS tables and inserted data into them.

Notice that there is a limitation to SQLite3 in Python. The code can be hard to read as you combine SQL commands with Python code in one file. It also looks verbose. Therefore, we will take a look at using SQLAlchemy to execute table creation and data insertion in a shorter and SQL-free method.

SQLAlchemy is a Python ORM to activate DB engines. It creates a Pythonic wrapper on top of SQL executions for SQLite. This allows you to run logic on the table without touching any SQL command code. An ORM provides a high-level abstraction to allow developers to write Python code to invoke SQL for CRUD operations and schemas in their database. Each developer can use the programming language they are comfortable with instead of dealing with SQL statements or stored procedures.

from sqlalchemy import create_engine
disk_engine = create_engine('sqlite:///lazada_alchemy.db')

def write_from_df_with_alchemy(df):
    df.to_sql('lazada_product', disk_engine, if_exists='append')

Look at how clean and short the code is. After we run this method, we instantly create the table with default settings based on our df datatypes, while at the same time appending the values into the lazada_product table without touching any SQL. Therefore, the biggest benefit of SQLAlchemy is to facilitate high-level abstractions of SQL commands to help Python developers extract data using the same language. Executing SQL without running SQL queries. — SQLAlchemy

Of course, this should not replace the importance of knowing the SQL language. We can handle more complexities better with SQL. However, to code data table setups and insertions, SQLAlchemy will save you much time and hassle.

To check out the contents of our disks, we could interact with the following Web Dashboard Tool.
sqliteonline.com

From here you can insert your disk as a file and write a simple SQL select statement.

SELECT * from lazada_product

Click run and this will display all of your data in lazada_product. Congratulations, you have learnt RDBMS and how to insert values using Python SQLite3 and SQLAlchemy.

RDBMS provides many benefits over csv or Excel sheets due to its larger capacity, dependency checks, and separation of analysis and data.

Creating a simple RDBMS does not take much time; we can use SQLAlchemy to create the schema within just 4 lines of code.

We can resume reading and CRUD operations by using SQLite Browser online, or download the Linux or Microsoft SQLite Browser.

github.com

Feel free to clone the repository and contribute. I really hope this has been a great read and a source of inspiration for you to develop and innovate. Please comment below to suggest and give feedback. Happy coding :)

Vincent Tatan is a Data and Technology enthusiast with relevant working experiences from Visa Inc. and Lazada to implement microservice architectures, business intelligence, and analytics pipeline projects. Vincent is a native Indonesian with a record of accomplishments in problem solving, with strengths in Full Stack Development, Data Analytics, and Strategic Planning. He has been actively consulting SMU BI & Analytics Club, guiding aspiring data scientists and engineers from various backgrounds, and opening up his expertise for businesses to develop their products. Please reach out to Vincent via LinkedIn, Medium or Youtube Channel
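Putting the pieces together, the whole round trip — create the table, insert rows, and read them back — can be sketched with nothing but the standard library. Here ":memory:" stands in for the lazada.db disk file, and the sample rows are made up purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory stand-in for lazada.db
c = conn.cursor()
c.execute("""
    CREATE TABLE lazada_product (
        time TEXT, id INTEGER, link TEXT NOT NULL,
        product_title TEXT NOT NULL, product_price DOUBLE NOT NULL,
        category TEXT NOT NULL,
        PRIMARY KEY (time, id)
    );
""")

# Hypothetical sample rows standing in for the scraped product_df
rows = [
    ("2020-01-01 00:00:00", 1, "https://example.com/a", "Blue Shirt", 9.90, "Fashion"),
    ("2020-01-01 00:00:00", 2, "https://example.com/b", "Headphones", 29.50, "Electronics"),
]
c.executemany("INSERT INTO lazada_product VALUES (?,?,?,?,?,?)", rows)
conn.commit()

# Same SELECT as in the web dashboard, straight from Python
fetched = c.execute("SELECT * FROM lazada_product").fetchall()
print(len(fetched))
```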
Pretty print JSON using javax.json API in Java?
The javax.json package provides an Object Model API to process JSON. The Object Model API is a high-level API that provides immutable object models for JSON object and array structures. These JSON structures can be represented as object models using the JsonObject and JsonArray interfaces. We can use the JsonGenerator interface to write the JSON data to an output in a streaming way. The JsonGenerator.PRETTY_PRINTING is a configuration property to generate JSON prettily.

We can implement pretty printing of JSON in the below example.

import java.io.*;
import java.util.*;
import javax.json.*;
import javax.json.stream.*;

public class JSONPrettyPrintTest {
   public static void main(String args[]) {
      String jsonString = "{\"name\":\"Raja Ramesh\",\"age\":\"35\",\"salary\":\"40000\"}";
      StringWriter sw = new StringWriter();
      try {
         JsonReader jsonReader = Json.createReader(new StringReader(jsonString));
         JsonObject jsonObj = jsonReader.readObject();
         Map<String, Object> map = new HashMap<>();
         map.put(JsonGenerator.PRETTY_PRINTING, true);
         JsonWriterFactory writerFactory = Json.createWriterFactory(map);
         JsonWriter jsonWriter = writerFactory.createWriter(sw);
         jsonWriter.writeObject(jsonObj);
         jsonWriter.close();
      } catch(Exception e) {
         e.printStackTrace();
      }
      String prettyPrint = sw.toString();
      System.out.println(prettyPrint); // pretty print JSON
   }
}

Output:

{
   "name": "Raja Ramesh",
   "age": "35",
   "salary": "40000"
}
Given a large number, check if a subsequence of digits is divisible by 8 - GeeksforGeeks
01 Oct, 2021 Given a number of at most 100 digits. We have to check if it is possible, after removing certain digits, to obtain a number of at least one digit which is divisible by 8. We are forbidden to rearrange the digits. Examples : Input : 1787075866 Output : Yes There exist more one or more subsequences divisible by 8. Example subsequences are 176, 16 and 8. Input : 6673177113 Output : No No subsequence is divisible by 8. Input : 3144 Output : Yes The subsequence 344 is divisible by 8. Property of the divisibility by eight: number can be divided by eight if and only if its last three digits form a number that can be divided by eight. Thus, it is enough to test only numbers that can be obtained from the original one by crossing out and that contain at most three digits i.e we check all one-digit, two digits, and three-digit number combinations. Method 1 (Brute Force):We apply the brute force approach. We permute all possible single-digit, double-digit, and triple-digit combinations using an iterative ladder. If we encounter a single-digit number divisible by 8 or a double-digit number combination divisible by 8 or a triple-digit number combination divisible by 8, then that will be the solution to our problem. C++ Java Python3 C# Javascript // C++ program to check if a subsequence of digits// is divisible by 8.#include <bits/stdc++.h>using namespace std; // Function to calculate any permutation divisible// by 8. 
If such permutation exists, the function// will return that permutation else it will return -1bool isSubSeqDivisible(string str){ // Converting string to integer array for ease // of computations (Indexing in arr[] is // considered to be starting from 1) int l = str.length(); int arr[l]; for (int i = 0; i < l; i++) arr[i] = str[i] - '0'; // Generating all possible permutations and checking // if any such permutation is divisible by 8 for (int i = 0; i < l; i++) { for (int j = i; j < l; j++) { for (int k = j; k < l; k++) { if (arr[i] % 8 == 0) return true; else if ((arr[i] * 10 + arr[j]) % 8 == 0 && i != j) return true; else if ((arr[i] * 100 + arr[j] * 10 + arr[k]) % 8 == 0 && i != j && j != k && i != k) return true; } } } return false;} // Driver functionint main(){ string str = "3144"; if (isSubSeqDivisible(str)) cout << "Yes"; else cout << "No"; return 0;} // Java program to check if a subsequence// of digits is divisible by 8.import java.io.*; class GFG { // Function to calculate any permutation // divisible by 8. 
If such permutation // exists, the function will return // that permutation else it will return -1 static boolean isSubSeqDivisible(String str) { int i, j, k, l = str.length(); int arr[] = new int[l]; // Converting string to integer array for ease // of computations (Indexing in arr[] is // considered to be starting from 1) for (i = 0; i < l; i++) arr[i] = str.charAt(i) - '0'; // Generating all possible permutations // and checking if any such // permutation is divisible by 8 for (i = 0; i < l; i++) { for (j = i; j < l; j++) { for (k = j; k < l; k++) { if (arr[i] % 8 == 0) return true; else if ((arr[i] * 10 + arr[j]) % 8 == 0 && i != j) return true; else if ((arr[i] * 100 + arr[j] * 10 + arr[k]) % 8 == 0 && i != j && j != k && i != k) return true; } } } return false; } // Driver function public static void main(String args[]) { String str = "3144"; if (isSubSeqDivisible(str)) System.out.println("Yes"); else System.out.println("No"); }} // This code is contributed by Nikita Tiwari. # Python3 program to# check if a subsequence of digits# is divisible by 8. # Function to calculate any# permutation divisible# by 8. If such permutation# exists, the function# will return that permutation# else it will return -1def isSubSeqDivisible(st) : l = len(st) arr = [int(ch) for ch in st] # Generating all possible # permutations and checking # if any such permutation # is divisible by 8 for i in range(0, l) : for j in range(i, l) : for k in range(j, l) : if (arr[i] % 8 == 0) : return True elif ((arr[i]*10 + arr[j])% 8 == 0 and i != j) : return True elif ((arr[i] * 100 + arr[j] * 10 + arr[k]) % 8 == 0 and i != j and j != k and i != k) : return True return False # Driver function st = "3144"if (isSubSeqDivisible(st)) : print("Yes")else : print("No") # This code is contributed# by Nikita Tiwari. // C# program to check if a subsequence// of digits is divisible by 8.using System; class GFG { // Function to calculate any permutation // divisible by 8. 
If such permutation // exists, the function will return // that permutation else it will return -1 static bool isSubSeqDivisible(string str) { int i, j, k, l = str.Length; int[] arr = new int[l]; // Converting string to integer array for ease // of computations (Indexing in arr[] is // considered to be starting from 1) for (i = 0; i < l; i++) arr[i] = str[i] - '0'; // Generating all possible permutations // and checking if any such // permutation is divisible by 8 for (i = 0; i < l; i++) { for (j = i; j < l; j++) { for (k = j; k < l; k++) { if (arr[i] % 8 == 0) return true; else if ((arr[i] * 10 + arr[j]) % 8 == 0 && i != j) return true; else if ((arr[i] * 100 + arr[j] * 10 + arr[k]) % 8 == 0 && i != j && j != k && i != k) return true; } } } return false; } // Driver function public static void Main() { string str = "3144"; if (isSubSeqDivisible(str)) Console.WriteLine("Yes"); else Console.WriteLine("No"); }} // This code is contributed by vt_m. <script> // JavaScript program to check if a subsequence// of digits is divisible by 8. // Function to calculate any permutation// divisible by 8.
If such permutation// exists, the function will return// that permutation else it will return -1function isSubSeqDivisible(str){ let i, j, k, l = str.length; let arr = []; // Converting string to integer array for ease // of computations (Indexing in arr[] is // considered to be starting from 1) for(i = 0; i < l; i++) arr[i] = str[i] - '0'; // Generating all possible permutations // and checking if any such // permutation is divisible by 8 for(i = 0; i < l; i++) { for(j = i; j < l; j++) { for(k = j; k < l; k++) { if (arr[i] % 8 == 0) return true; else if ((arr[i] * 10 + arr[j]) % 8 == 0 && i != j) return true; else if ((arr[i] * 100 + arr[j] * 10 + arr[k]) % 8 == 0 && i != j && j != k && i != k) return true; } } } return false;} // Driver Codelet str = "3144";if (isSubSeqDivisible(str)) document.write("Yes");else document.write("No"); // This code is contributed by susmitakundugoaldanga </script> Yes Method 2 (Dynamic Programming):Though we have only 100-digit numbers, for longer examples larger than that, our program might exceed the given time limit. Thus, we optimize our code by using a dynamic programming approach.Let Be the ith digit of the sample. We generate a matrix dp[i][j], 1<=i<=n and 0<=j<8. The value of dp is true if we can cross out some digits from the prefix of length i such that the remaining number gives j modulo eight, and false otherwise. For a broad understanding of the concept, if at an index, we find element modulo 8 for that index we put the value of For all other numbers, we build on a simple concept that either addition of that digit will contribute information of a number divisible by 8, or it shall be left out. Note: We also have to keep it in mind that we cannot change the orderNow, if we add the current digit to the previous result. 
if we exclude the current digit in our formation.Now, if such a number shall exist, we will get a “true” for any i in dp[i][0] C++ Java Python3 C# PHP Javascript // C++ program to find if there is a subsequence// of digits divisible by 8.#include <bits/stdc++.h>using namespace std; // Function takes in an array of numbers,// dynamically goes on the location and// makes combination of numbers.bool isSubSeqDivisible(string str){ int n = str.length(); int dp[n + 1][10]; memset(dp, 0, sizeof(dp)); // Converting string to integer array for ease // of computations (Indexing in arr[] is // considered to be starting from 1) int arr[n + 1]; for (int i = 1; i <= n; i++) arr[i] = str[i - 1] - '0'; for (int i = 1; i <= n; i++) { dp[i][arr[i] % 8] = 1; for (int j = 0; j < 8; j++) { // If we consider the number in our combination, // we add it to the previous combination if (dp[i - 1][j] > dp[i][(j * 10 + arr[i]) % 8]) dp[i][(j * 10 + arr[i]) % 8] = dp[i - 1][j]; // If we exclude the number from our combination if (dp[i - 1][j] > dp[i][j]) dp[i][j] = dp[i - 1][j]; } } for (int i = 1; i <= n; i++) { // If at dp[i][0], we find value 1/true, it shows // that the number exists at the value of 'i' if (dp[i][0] == 1) return true; } return false;} // Driver functionint main(){ string str = "3144"; if (isSubSeqDivisible(str)) cout << "Yes"; else cout << "No"; return 0;} // Java program to find if there is a// subsequence of digits divisible by 8.import java.io.*;import java.util.*; class GFG { // Function takes in an array of numbers, // dynamically goes on the location and // makes combination of numbers. 
static boolean isSubSeqDivisible(String str) { int n = str.length(); int dp[][] = new int[n + 1][10]; // Converting string to integer array // for ease of computations (Indexing in // arr[] is considered to be starting // from 1) int arr[] = new int[n + 1]; for (int i = 1; i <= n; i++) arr[i] = (int)(str.charAt(i - 1) - '0'); for (int i = 1; i <= n; i++) { dp[i][arr[i] % 8] = 1; for (int j = 0; j < 8; j++) { // If we consider the number in // our combination, we add it to // the previous combination if (dp[i - 1][j] > dp[i][(j * 10 + arr[i]) % 8]) dp[i][(j * 10 + arr[i]) % 8] = dp[i - 1][j]; // If we exclude the number from // our combination if (dp[i - 1][j] > dp[i][j]) dp[i][j] = dp[i - 1][j]; } } for (int i = 1; i <= n; i++) { // If at dp[i][0], we find value 1/true, // it shows that the number exists at // the value of 'i' if (dp[i][0] == 1) return true; } return false; } // Driver function public static void main(String args[]) { String str = "3144"; if (isSubSeqDivisible(str)) System.out.println("Yes"); else System.out.println("No"); }} /* This code is contributed by Nikita Tiwari.*/ # Python3 program to find# if there is a subsequence# of digits divisible by 8. 
# Function takes in an array of numbers,# dynamically goes on the location and# makes combination of numbers.def isSubSeqDivisible(str): n = len(str) dp = [[0 for i in range(10)] for i in range(n + 1)] # Converting string to integer # array for ease of computations # (Indexing in arr[] is considered # to be starting from 1) arr = [0 for i in range(n + 1)] for i in range(1, n + 1): arr[i] = int(str[i - 1]); for i in range(1, n + 1): dp[i][arr[i] % 8] = 1; for j in range(8): # If we consider the number # in our combination, we add # it to the previous combination if (dp[i - 1][j] > dp[i][(j * 10 + arr[i]) % 8]): dp[i][(j * 10 + arr[i]) % 8] = dp[i - 1][j] # If we exclude the number # from our combination if (dp[i - 1][j] > dp[i][j]): dp[i][j] = dp[i - 1][j] for i in range(1, n + 1): # If at dp[i][0], we find # value 1/true, it shows # that the number exists # at the value of 'i' if (dp[i][0] == 1): return True return False # Driver Codestr = "3144"if (isSubSeqDivisible(str)): print("Yes")else: print("No") # This code is contributed# by sahilshelangia // C# program to find if there is a// subsequence of digits divisible by 8.using System; class GFG { // Function takes in an array of numbers, // dynamically goes on the location and // makes combination of numbers. 
static bool isSubSeqDivisible(String str) { int n = str.Length; int[, ] dp = new int[n + 1, 10]; // Converting string to integer array // for ease of computations (Indexing in // arr[] is considered to be starting // from 1) int[] arr = new int[n + 1]; for (int i = 1; i <= n; i++) arr[i] = (int)(str[i - 1] - '0'); for (int i = 1; i <= n; i++) { dp[i, arr[i] % 8] = 1; for (int j = 0; j < 8; j++) { // If we consider the number in // our combination, we add it to // the previous combination if (dp[i - 1, j] > dp[i, (j * 10 + arr[i]) % 8]) dp[i, (j * 10 + arr[i]) % 8] = dp[i - 1, j]; // If we exclude the number from // our combination if (dp[i - 1, j] > dp[i, j]) dp[i, j] = dp[i - 1, j]; } } for (int i = 1; i <= n; i++) { // If at dp[i][0], we find value // 1/true, it shows that the number // exists at the value of 'i' if (dp[i, 0] == 1) return true; } return false; } // Driver function public static void Main() { string str = "3144"; if (isSubSeqDivisible(str)) Console.WriteLine("Yes"); else Console.WriteLine("No"); }} // This code is contributed by vt_m. <?php// PHP program to find if there// is a subsequence of digits// divisible by 8. 
// Function takes in an array of numbers,// dynamically goes on the location and// makes combination of numbers.function isSubSeqDivisible($str){ $n = strlen($str); $dp = array_fill(0, $n + 1, array_fill(0, 10, NULL)); // Converting string to integer // array for ease of computations // (Indexing in arr[] is considered // to be starting from 1) $arr = array_fill(0, ($n + 1), NULL); for ($i = 1; $i <= $n; $i++) $arr[$i] = $str[$i - 1] - '0'; for ($i = 1; $i <= $n; $i++) { $dp[$i][$arr[$i] % 8] = 1; for ($j = 0; $j < 8; $j++) { // If we consider the number in // our combination, we add it to // the previous combination if ($dp[$i - 1][$j] > $dp[$i][($j * 10 + $arr[$i]) % 8]) $dp[$i][($j * 10 + $arr[$i]) % 8] = $dp[$i - 1][$j]; // If we exclude the number // from our combination if ($dp[$i - 1][$j] > $dp[$i][$j]) $dp[$i][$j] = $dp[$i - 1][$j]; } } for ($i = 1; $i <= $n; $i++) { // If at dp[i][0], we find value 1/true, // it shows that the number exists at // the value of 'i' if ($dp[$i][0] == 1) return true; } return false;} // Driver Code$str = "3144";if (isSubSeqDivisible($str)) echo "Yes";else echo "No"; // This code is contributed// by ChitraNayal?> <script> // Javascript program to find if there is a // subsequence of digits divisible by 8. // Function takes in an array of numbers, // dynamically goes on the location and // makes combination of numbers. 
function isSubSeqDivisible(str) { let n = str.length; let dp = new Array(n + 1); for(let i = 0; i < 10; i++) { dp[i] = new Array(10); for(let j = 0; j < 10; j++) { dp[i][j] = 0; } } // Converting string to integer array // for ease of computations (Indexing in // arr[] is considered to be starting // from 1) let arr = new Array(n + 1); for (let i = 1; i <= n; i++) arr[i] = (str[i - 1].charCodeAt() - '0'.charCodeAt()); for (let i = 1; i <= n; i++) { dp[i][arr[i] % 8] = 1; for (let j = 0; j < 8; j++) { // If we consider the number in // our combination, we add it to // the previous combination if (dp[i - 1][j] > dp[i][(j * 10 + arr[i]) % 8]) dp[i][(j * 10 + arr[i]) % 8] = dp[i - 1][j]; // If we exclude the number from // our combination if (dp[i - 1][j] > dp[i][j]) dp[i][j] = dp[i - 1][j]; } } for (let i = 1; i <= n; i++) { // If at dp[i][0], we find value 1/true, // it shows that the number exists at // the value of 'i' if (dp[i][0] == 1) return true; } return false; } let str = "3144"; if (isSubSeqDivisible(str)) document.write("Yes"); else document.write("No"); </script> Yes Using the dynamic approach, our time complexity cuts down to O(8*n), where 8 is from which the number should be divisible and n is the length of our input. Therefore, the overall complexity is O(n). Method 3For this problem, we simply need to check if there exists a two-digit subsequence divisible by 8 (divisibility test for 8) We first find all the 2 digit numbers divisible by 8 and map the tens place digit with unit place digit i.e :- 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96 Ignore 48 as 8 is always divisible by 8 similarly 80 and 88 have 8 in them which make such subsequence always divisible by 8 So we map 1 to 6, 2 to 4, 3 to 2, and so on using map i.e stl map in C++. 
After building the map we traverse the string from the last index and check if the mapped value of the present index number is visited or not hence we need a visited array for this which will store true if the number is visited, else false eg:- 3769 first char from the last index is 9 so we check if 6 is visited (i.e 96 is subsequence or not), we mark 9 in visited array next char is 6 so we check is 4 visited (i.e 64), we mark 6 in the visited array next char is 7 so we check is 2 visited (i.e 72), we mark 7 in the visited array next char is 3 so we check is 6 visited (i.e 36), yes 6 is marked hence we print Yes. C++ Java Python3 C# Javascript // C++ program to check if given string// has a subsequence divisible by 8#include<bits/stdc++.h>using namespace std;// Driver functionint main(){ string str = "129365"; // map key will be tens place digit of number // that is divisible by 8 and value will // be units place digit map<int, int> mp; // For filling the map let start // with initial value 8 int no = 8; while(no < 100){ no = no + 8; // key is digit at tens place and value is // digit at units place mp.insert({key, value}) mp.insert({(no / 10) % 10, no % 10}); } // Create a hash to check if we visited a number vector<bool> visited(10, false); int i; // Iterate from last index to 0th index for(i = str.length() - 1; i >= 0; i--){ // If 8 is present in string then // 8 divided 8 hence print yes if(str[i] == '8') { cout << "Yes"; break; } // considering present character as the second // digit of two digits no we check if the value // of this key is marked in hash or not // If marked then we a have a number divisible by 8 if(visited[mp[str[i] - '0']]){ cout << "Yes"; break; } visited[str[i] - '0'] = true; } // If no subsequence divisible by 8 if(i == -1) cout << "No"; return 0;} // Java program to check if// given String has a subsequence// divisible by 8import java.util.*;class GFG{ // Driver codepublic static void main(String[] args){ String str = "129365"; // map 
key will be tens place // digit of number that is // divisible by 8 and value will // be units place digit HashMap<Integer, Integer> mp = new HashMap<Integer, Integer>(); // For filling the map let start // with initial value 8 int no = 8; while(no < 100) { no = no + 8; // key is digit at tens place // and value is digit at units // place mp.add({key, value}) //if(mp.containsKey((no / 10) % 10)) mp.put((no / 10) % 10, no % 10); } // Create a hash to check if // we visited a number boolean[] visited = new boolean[10]; int i; // Iterate from last index // to 0th index for(i = str.length() - 1; i >= 0; i--) { // If 8 is present in String then // 8 divided 8 hence print yes if(str.charAt(i) == '8') { System.out.print("Yes"); break; } // considering present character // as the second digit of two // digits no we check if the value // of this key is marked in hash or not // If marked then we a have a number // divisible by 8 if(visited[mp.get(str.charAt(i)- '0')]) { System.out.print("Yes"); break; } visited[str.charAt(i) - '0'] = true; } // If no subsequence divisible // by 8 if(i == -1) System.out.print("No");}} // This code is contributed by shikhasingrajput # Python3 program to check if given string# has a subsequence divisible by 8Str = "129365" # map key will be tens place digit of number# that is divisible by 8 and value will# be units place digitmp = {} # For filling the map let start# with initial value 8no = 8 while(no < 100) : no = no + 8 # key is digit at tens place and value is # digit at units place mp.insert({key, value}) mp[(no // 10) % 10] = no % 10 # Create a hash to check if we visited a numbervisited = [False] * 10 # Iterate from last index to 0th indexfor i in range(len(Str) - 1, -1, -1) : # If 8 is present in string then # 8 divided 8 hence print yes if(Str[i] == '8') : print("Yes", end = "") break # considering present character as the second # digit of two digits no we check if the value # of this key is marked in hash or not # If marked then we a 
have a number divisible by 8 if visited[mp[ord(Str[i]) - ord('0')]] : print("Yes", end = "") break visited[ord(Str[i]) - ord('0')] = True # If no subsequence divisible by 8if(i == -1) : print("No") # This code is contributed by divyeshrabadiya07 // C# program to check if given// String has a subsequence// divisible by 8using System;using System.Collections.Generic; class GFG{ // Driver codepublic static void Main(String[] args){ String str = "129365"; // Map key will be tens place // digit of number that is // divisible by 8 and value will // be units place digit Dictionary<int, int> mp = new Dictionary<int, int>(); // For filling the map let start // with initial value 8 int no = 8; while (no < 100) { no = no + 8; // Key is digit at tens place // and value is digit at units // place mp.Add({key, value}) if (mp.ContainsKey((no / 10) % 10)) mp[(no / 10) % 10] = no % 10; else mp.Add((no / 10) % 10, no % 10); } // Create a hash to check if // we visited a number bool[] visited = new bool[10]; int i; // Iterate from last index // to 0th index for(i = str.Length - 1; i >= 0; i--) { // If 8 is present in String then // 8 divided 8 hence print yes if (str[i] == '8') { Console.Write("Yes"); break; } // Considering present character // as the second digit of two // digits no we check if the value // of this key is marked in hash or not // If marked then we a have a number // divisible by 8 if (visited[mp[str[i] - '0']]) { Console.Write("Yes"); break; } visited[str[i] - '0'] = true; } // If no subsequence divisible // by 8 if (i == -1) Console.Write("No");}} // This code is contributed by Princi Singh <script> // Javascript program to check if// given String has a subsequence// divisible by 8 // Driver code let str = "129365"; // map key will be tens place // digit of number that is // divisible by 8 and value will // be units place digit let mp = new Map(); // For filling the map let start // with initial value 8 let no = 8; while(no < 100) { no = no + 8; // key is digit at 
tens place // and value is digit at units // place mp.add({key, value}) //if(mp.containsKey((no / 10) % 10)) mp.set((Math.floor(no / 10)) % 10, no % 10); } // Create a hash to check if // we visited a number let visited = new Array(10); for(let i=0;i<visited.length;i++) { visited[i]=false; } let i; // Iterate from last index // to 0th index for(i = str.length - 1; i >= 0; i--) { // If 8 is present in String then // 8 divided 8 hence print yes if(str[i] == '8') { document.write("Yes"); break; } // considering present character // as the second digit of two // digits no we check if the value // of this key is marked in hash or not // If marked then we a have a number // divisible by 8 if(visited[mp.get(str[i].charCodeAt(0)- '0'.charCodeAt(0))]) { document.write("Yes"); break; } visited[str[i].charCodeAt(0) - '0'.charCodeAt(0)] = true; } // If no subsequence divisible // by 8 if(i == -1) document.write("No"); // This code is contributed by rag2127 </script> Yes If you take a close look, the visited array will always have 10 fields and the map will always have the same size, so the space complexity is O(1), and the time complexity is O(n) for traversing the string.
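As a compact cross-check of the brute-force idea (this condensed version is my own, not from the article), the divisibility rule for 8 means it suffices to test subsequences of at most three digits, which itertools.combinations expresses directly:

```python
from itertools import combinations

def has_subseq_divisible_by_8(s: str) -> bool:
    # By the divisibility rule for 8, only subsequences of
    # at most three digits need to be tested.
    for r in (1, 2, 3):
        for combo in combinations(s, r):
            if int("".join(combo)) % 8 == 0:
                return True
    return False

print(has_subseq_divisible_by_8("3144"))        # True (344 is divisible by 8)
print(has_subseq_divisible_by_8("6673177113"))  # False
```

Like Method 1, this is O(n^3) in the worst case, so it is only a readability sketch; the dynamic-programming and map-based methods above remain the efficient choices for long inputs.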
3 easy ways to reshape pandas DataFrame | by Zolzaya Luvsandorj | Towards Data Science
Data comes in different shapes and sizes. As professionals working with data, we often need to reshape the data to a form that is more suitable for the task at hand. In this post, we will look at 3 simple ways to reshape a DataFrame. Let’s start by importing libraries and loading a sample wide dataset: import numpy as npimport pandas as pdfrom seaborn import load_dataset# Load sample datawide = load_dataset('penguins')\ .drop(columns=['sex', 'island', 'culmen_length_mm'])\ .sample(n=3, random_state=1).sort_index()\ .reset_index().rename(columns={'index': 'id'})wide We can reshape the data to a long format with stack() like this: long = wide.set_index('id').stack().to_frame().reset_index()\ .rename(columns={'level_1': 'variable', 0: 'value'})long It gets the job done but this is quite verbose and not very elegant. Luckily, transforming the data to a long format becomes easy with melt(): long = wide.melt(id_vars='id')long Voila! It’s quite simple, isn’t it? Of note, wide.melt(id_vars=’id’) can also be written as pd.melt(wide, id_vars='id'). It’s always important to apply what we learn to consolidate our knowledge. One of my favourite practical application of melt() that you may also find useful is to use it to format correlation matrix. Although we only have three records in wide, to illustrate the idea, let’s do a correlation table: corr = wide.drop(columns='id').corr()corr This format is useful as we can turn the matrix into heatmaps to visualise the correlations. But often the matrix or the heatmap is not enough if you want to drill into specifics and find variables whose correlations are above a certain threshold. Turning the matrix into a long format makes that task a whole lot easier: corr.reset_index().melt(id_vars='index') Now with this long data, we can easily filter by ‘value’ to find correlations between desired values. 
We will format the data a bit more and filter correlations between 0.9 and 1: corr.reset_index().melt(id_vars='index')\ .rename(columns={'index': 'variable1', 'variable': 'variable2', 'value': 'correlation'})\ .sort_values('correlation', ascending=False)\ .query('correlation.between(.9,1, inclusive=False)', engine='python') # workaround of this bug You can even go ahead and remove the duplicate correlations too. This correlation formatting is especially useful when you have bigger datasets with many numerical features. On the other hand, sometimes the data comes in a long format and we need to reshape it to a wide data. Let’s now do the opposite of what we did previously. Similar to the previous section, we will start the transformation with unstack(): long.set_index(['id', 'variable']).unstack() The same transformation can be done using pivot() as below: long.pivot(index='id', columns='variable', values='value') This is not necessarily more concise but it probably is little easier to work with compared to unstack(). By now, you probably have noticed that melt() is to pivot() as stack() is to unstack(). A possible practical application of reshaping data to wide format is if your data is in an Entity-Attribute-Value (a.k.a. EAV) format similar to this: eav = pd.DataFrame({'entity': np.repeat([10,25,37, 49], 2), 'attribute': ['name', 'age']*4, 'value': ['Anna', 30, 'Jane', 40, 'John', 20, 'Jim', 50]})eav Reshaping the data into a format where each row represents entity (e.g. customer) can be done using pivot(): eav.pivot(index='entity', columns='attribute', values='value') Next time you know how to reshape a long data! We learned how to reshape from long to wide with melt(). But with wide_to_long() function, reshaping becomes easier compared to melt() in some instances. 
Here’s one example: pop = pd.DataFrame({'country':['Monaco', 'Liechtenstein', 'San Marino'], 'population_2016' : [38070, 37658, 33504], 'population_2017' : [38392, 37800, 33671], 'population_2018' : [38682, 37910, 33785]})pop Using melt(), we can reshape the data and format it as follows: new = pop.melt(id_vars='country')\ .rename(columns={'variable': 'year', 'value': 'population'})new['year'] = new['year'].str.replace('population_', '')new With wide_to_long(), it’s much simpler to get the same output: pd.wide_to_long(pop, stubnames='population', i='country', j='year', sep='_').reset_index() When using the function, it’s good to understand these three main terms: a stub name (stubnames), a suffix and a separator (sep). While these terms may be self-explanatory, an example may clarify them: population is a stub name, 2017 is a suffix and _ is a separator. A new column name for the suffix is passed to parameter j and a unique identifier column name is passed to parameter i. Without reset_index(), the output would look like the following where the unique identifier and the suffix column are in the index: pd.wide_to_long(pop, stubnames='population', i='country', j='year', sep='_') By default, suffix is set up to be numerical values. So this worked fine in our previous example. But it may not work for a data like this: iris = load_dataset('iris').head()iris This time, there are two stub names: sepal and petal. We will pass both in a list to stubnames when reshaping. The suffixes (i.e. length and width) are no longer numeric so we will need to specify that pattern using regular expression in suffix argument. pd.wide_to_long(iris.reset_index(), stubnames=['sepal', 'petal'], i='index', j='Measurement', sep='_', suffix='\D+') Hopefully these two examples have illustrated how useful wide_to_long() can be when used in the right settings. Voila! These were the 3 easy ways to reshape pandas data! 
Here is their official documentation if you want to learn more: pd.melt(), pd.pivot() and pd.wide_to_long(). Would you like to access more content like this? Medium members get unlimited access to any articles on Medium. If you become a member using my referral link, a portion of your membership fee will directly go to support me. Thank you for reading this article. If you are interested, here are links to some of my other posts on pandas:
◼️️ Writing 5 common SQL queries in pandas
◼️️ Writing advanced SQL queries in pandas
◼️️ 5 tips for pandas users
◼️️ 5 tips for data aggregation in pandas
◼️️ How to transform variables in a pandas DataFrame
Bye for now 🏃 💨
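Pulling the first two techniques together, here is a minimal end-to-end round trip (using a small made-up dataset rather than the article's penguins data):

```python
import pandas as pd

# Hypothetical wide dataset in the spirit of the article's examples
wide = pd.DataFrame({
    "id": [0, 1],
    "flipper_length_mm": [181.0, 186.0],
    "body_mass_g": [3750.0, 3800.0],
})

# Wide -> long with melt(): one row per (id, variable) pair
long = wide.melt(id_vars="id")

# Long -> wide again with pivot(): variables become columns
wide_again = long.pivot(index="id", columns="variable",
                        values="value").reset_index()
print(long)
print(wide_again)
```

The round trip recovers the original columns (possibly in a different order), which is a handy sanity check when you are unsure whether a reshape preserved your data.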
Python Pandas vs. R Dplyr. The Full Cheatsheet | by Martin Šiklar | Towards Data Science
Pandas for Python and Dplyr for R are the two most popular libraries for working with tabular/structured data for many Data Scientists. There is always this big and partly heated discussion on which framework is better. Honestly, does it really matter? In the end, it’s about getting the job done and both pandas and dplyr offer great tools for data wrangling. No worries, this article is not yet another comparison that tries to prove a point for either library! The purpose of this article therefore is: To help others with the transition from one language/framework to the other To explore new tools that you can add to your repertoire as a Data Scientist To create a reference cheat sheet in case you need to look up the most frequently used data wrangling functions in either language In this tutorial we will be working with the iris dataset which is part of both Pythons sklearn and base R. After some homogenisation our data in R / Python looks like this: Sepal_length Sepal_width Petal_length Petal_width Species 5.1 3.5 1.4 0.2 setosa 4.9 3.0 1.4 0.2 setosa 4.7 3.2 1.3 0.2 setosa 4.6 3.1 1.5 0.2 setosa 5.0 3.6 1.4 0.2 setosa Disclaimer: long read And here it already gets confusing. First of all, there are multiple ways on how to select columns from a dataframe in each framework. In Pandas you can either simply pass a list with the column names or use the filter() method. This is confusing because the filter() function in dplyr is used to subset rows based on conditions and not columns! In dplyr we use the select() function instead: Pandas #Pass columns as listdataframe[[“Sepal_width”, “Petal_width”]]#Use Filter Functiondataframe.filter(items=['Sepal_width', 'Petal_width']) Dplyr dataframe %>% select(Sepal_width, Petal_width) Again, there are multiple ways on how to filter records in a dataframe based on conditions across one or multiple columns. Pandas In Pandas you can either use the indexing approach or try out the handy query API, which I personally prefer. 
#indexingdataframe[(dataframe["Sepal_width"] > 3.5) & (dataframe["Petal_width"] < 0.3)]#query APIdataframe.query("Sepal_width > 3.5 & Petal_width < 0.3") Dplyr The standard way of filtering records in dplyr is via the filter function(). dataframe %>% filter(Sepal_width > 3.5 & Petal_width < 0.3) Renaming sounds like an easy task, but be cautious and note the subtle difference here. If we want to rename our column from Species to Class in Pandas we supply a dictionary that says {‘Species’: ‘Class’} and in Dplyr it is the exact opposite way Class=Species: Pandas dataframe.rename(columns = {'Species': 'Class'}, inplace = True) Dplyr dataframe <- dataframe %>% rename(Class=Species) Let us say we want to rename multiple columns at once based on a condition. For example, convert all our feature columns (Sepal_length, Sepal_width, Petal_length, Petal_width) to upper case. In Python that's actually quite tricky and you need to first import another library and iterate manually over each column. In Dplyr there is a much cleaner interface if you want to access/change multiple columns based on conditions. 
Pandas

import re

#prepare pattern that columns have to match to be converted to upper case
pattern = re.compile(r".*(length|width)")

#iterate over columns and convert to upper case if the pattern matches
for col in dataframe.columns:
    if bool(pattern.match(col)):
        dataframe.rename(columns = {col: col.upper()}, inplace = True)

Dplyr

dataframe <- dataframe %>% rename_with(toupper, matches("length|width"))

Result

Note the upper case feature column names:

SEPAL_LENGTH  SEPAL_WIDTH  PETAL_LENGTH  PETAL_WIDTH  Species
5.1           3.5          1.4           0.2          setosa
4.9           3.0          1.4           0.2          setosa

Let us say we want to recode/alter cell values based on conditions. In our example, we will try to recode the Species strings "setosa, versicolor and virginica" to integers from 0 to 2:

Pandas

dataframe.loc[dataframe['Species'] == 'setosa', "Species"] = 0
dataframe.loc[dataframe['Species'] == 'versicolor', "Species"] = 1
dataframe.loc[dataframe['Species'] == 'virginica', "Species"] = 2

Dplyr

dataframe <- dataframe %>% mutate(Species = case_when(Species == 'setosa' ~ 0, Species == 'versicolor' ~ 1, Species == 'virginica' ~ 2))

Sometimes we want to see which distinct/unique values we have in a column.
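Returning to the upper-casing example above for a moment: the manual loop can be collapsed into a single call, because pandas' rename() also accepts a function that is applied to every column label. A minimal, self-contained sketch (the frame here is just a stand-in carrying the article's column names):

```python
import re
import pandas as pd

# stand-in frame with the article's column names
dataframe = pd.DataFrame(columns=["Sepal_length", "Sepal_width",
                                  "Petal_length", "Petal_width", "Species"])

pattern = re.compile(r".*(length|width)")
# rename() also accepts a callable that is applied to each column label
dataframe = dataframe.rename(columns=lambda c: c.upper() if pattern.match(c) else c)

print(list(dataframe.columns))
# ['SEPAL_LENGTH', 'SEPAL_WIDTH', 'PETAL_LENGTH', 'PETAL_WIDTH', 'Species']
```

This mirrors the spirit of dplyr's rename_with(): a transformation function plus a selection rule, in one expression.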
Note how different the function call is in both frameworks: Pandas uses the unique() method and dplyr uses the distinct() function to get to the same result:

Pandas

dataframe.Species.unique()
#array(['setosa', 'versicolor', 'virginica'], dtype=object)

Dplyr

dataframe %>% select(Species) %>% distinct()
# Species
# setosa
# versicolor
# virginica

If you want to count how many entries the dataframe has in total or get a count for a certain group, you can do the following:

Pandas

# Total number of records in dataframe
len(dataframe)
#150

# Number of records per group
dataframe.value_counts('Species')
#Species
#virginica     50
#versicolor    50
#setosa        50

# Note that you can also use the .groupby() method followed by size()
dataframe.groupby(['Species']).size()

Dplyr

# Total number of records in dataframe
dataframe %>% nrow()
#[1] 150

# Number of records per group (count and tally are interchangeable)
dataframe %>% group_by(Species) %>% count()
dataframe %>% group_by(Species) %>% tally()
# Species        n
# <fct>      <int>
#1 setosa        50
#2 versicolor    50
#3 virginica     50

If you want to create descriptive statistics for one or multiple columns in your data frame, you can do the following:

Pandas

#get mean and min for each column
dataframe.agg(['mean', 'min'])
#     Sepal_length  Sepal_width  Petal_length  Petal_width  Species
#mean     5.843333     3.057333         3.758     1.199333      NaN
#min      4.300000     2.000000         1.000     0.100000   setosa

Dplyr

Unfortunately, I didn't find a way to use multiple aggregation functions over multiple columns at once.
That is why you need to call the summarise function multiple times to achieve the same result:

#first aggregation over all columns using mean
dataframe %>% summarise(across(everything(), mean))
# Sepal_length  Sepal_width  Petal_length  Petal_width  Species
#         5.84         3.06          3.76         1.20       NA

#second aggregation over all columns using min
dataframe %>% summarise(across(everything(), min))
# Sepal_length  Sepal_width  Petal_length  Petal_width  Species
#          4.3          2.0           1.0          0.1       NA

If you want to have aggregate statistics by group in your dataset, you have to use the groupby() method in Pandas and the group_by() function in Dplyr. You can either do this for all columns or for a specific column:

Pandas

Note how Pandas uses multilevel indexing for a clean display of the results:

# aggregation by group for all columns
dataframe.groupby(['Species']).agg(['mean', 'min'])
#            Sepal_length       Sepal_width ...
#                    mean  min         mean ...
#Species
#setosa              5.01  4.3         3.43 ...
#versicolor          5.94  4.9         2.77 ...
#virginica           6.59  4.9         2.97 ...

# aggregation by group for a specific column
dataframe.groupby(['Species']).agg({'Sepal_length':['mean']})
#            Sepal_length
#                    mean
#Species
#setosa              5.01
#versicolor          5.94
#virginica           6.59

Dplyr

Since Dplyr doesn't support multilevel indexing, the output of the first call looks a little bit messy compared to Pandas. In this output, the statistics for the first function are displayed (mean, fn1) followed by the statistics of the second function (min, fn2).

# aggregation by group for all columns
dataframe %>% group_by(Species) %>% summarise_all(list(mean, min))
#Species     Sepal_length_fn1  Sepal_width_fn1 ...
#setosa                  5.01             3.43 ...
#versicolor              5.94             2.77 ...
#virginica               6.59             2.97 ...

# aggregation by group for a specific column
dataframe %>% group_by(Species) %>% summarise(mean=mean(Sepal_length))
#Species     mean
# setosa     5.01
# versicolor 5.94
# virginica  6.59

Sometimes you want to create a new column and combine the values of two or more existing columns with some mathematical operation.
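As a side note on the grouped aggregations above: pandas also supports "named aggregation" (available since pandas 0.25), which produces flat, readable column names instead of a multilevel index, closer in spirit to what dplyr's summarise() returns. A small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"Species": ["setosa", "setosa", "virginica"],
                   "Sepal_length": [5.0, 6.0, 7.0]})

# named aggregation: output_column=(input_column, function)
out = df.groupby("Species").agg(
    mean_length=("Sepal_length", "mean"),
    min_length=("Sepal_length", "min"),
)

print(out.loc["setosa", "mean_length"])  # 5.5
```

The result has plain columns mean_length and min_length, so no multilevel index needs to be flattened afterwards.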
Here is how to do it in both Pandas and Dplyr:

Pandas

dataframe["New_feature"] = dataframe["Petal_width"] * dataframe["Petal_length"] / 2

Dplyr

dataframe <- dataframe %>% mutate(New_feature = Petal_width*Petal_length/2)

To clean up a dataframe, deleting columns can sometimes be quite handy:

Pandas

In Pandas you can delete a column with drop(). You can also use inplace=True to overwrite the current dataframe.

dataframe.drop("New_feature", axis=1, inplace=True)

Dplyr

In Dplyr you specify the column name you want to remove inside the select() function with a leading minus.

dataframe <- dataframe %>% select(-New_feature)

To sort values you can use sort_values() in Pandas and arrange() in Dplyr. The default sorting for both is ascending. Note the difference in each function call when sorting in descending order:

Pandas

dataframe.sort_values('Petal_width', ascending=0)

Dplyr

dataframe %>% arrange(desc(Petal_width))

I don't use this functionality often, but sometimes it comes in handy if I want to create a table for a presentation and the ordering of the columns doesn't make logical sense. Here is how to move around columns:

Pandas

In Python Pandas you need to reindex your columns by making use of a list. Let's say we want to move the column Species to the front.

#change order of columns
dataframe.reindex(['Species','Petal_length','Sepal_length','Sepal_width','Petal_width'], axis=1)

Dplyr

In Dplyr you can use the handy relocate() function. Again, let's say we want to move the column Species to the front.
dataframe %>% relocate(Species)

#Note that you can use .before or .after to place a column before or after another specified column - very handy!
dataframe %>% relocate(Species, .before=SEPAL_WIDTH)

Slicing is a whole topic on its own and there are a lot of ways to do it. Let us step through the most frequently used slicing operations below:

Sometimes you know the exact row number you want to extract. Although the procedure in Dplyr and Pandas is quite similar, please note that indexing in Python starts at 0 and in R at 1.

Pandas

dataframe.iloc[[49,50]]
# Sepal_length  Sepal_width  Petal_length  Petal_width     Species
#          5.0          3.3           1.4          0.2      setosa
#          7.0          3.2           4.7          1.4  versicolor

Dplyr

dataframe %>% slice(50,51)
#  Sepal_length  Sepal_width  Petal_length  Petal_width     Species
#1            5          3.3           1.4          0.2      setosa
#2            7          3.2           4.7          1.4  versicolor

Sometimes we want to see either the first or the last records in a dataframe. This can be done by providing either a fixed number n or a proportion prop of values.

Pandas

In Pandas you can use the head() or tail() method to get a fixed amount of records. If you want to extract a proportion, you have to do some math on your own:

#returns the first 5 records
dataframe.head(n=5)

#returns the last 10% of total records
dataframe.tail(n=int(len(dataframe)*0.1))

Dplyr

In Dplyr there are two designated functions for this use case: slice_head() and slice_tail(). Please note how you can specify either a fixed number or a proportion:

#returns the first 5 records
dataframe %>% slice_head(n=5)

#returns the last 10% of total records
dataframe %>% slice_tail(prop=0.1)

Sometimes it is useful to select records with the highest or lowest values in a column. Again this can be done by providing a fixed number or a proportion.

Pandas

In Pandas this is a little more tricky than in Dplyr. Imagine for example you want 20 records with the longest "Petal_length" or 10% of the total records with the shortest "Petal_length".
To do the second operation in Python, we have to do some math and first sort our values:

#returns 20 records with the longest Petal_length (for returning the shortest you can use the function nsmallest)
dataframe.nlargest(20, 'Petal_length')

#returns 10% of total records with the shortest Petal_length
prop = 0.1
dataframe.sort_values('Petal_length', ascending=1).head(int(len(dataframe)*prop))

Dplyr

In Dplyr this is much simpler since there are designated functions for this use case:

#returns 20 records with the longest Petal_length
dataframe %>% slice_max(Petal_length, n = 20)

#returns 10% of total records with the shortest Petal_length
dataframe %>% slice_min(Petal_length, prop = 0.1)

Sometimes it is useful to select the records with the highest or lowest values per column but separated by group. Again this can be done by providing a fixed number or a proportion. Imagine for example we want the 3 records with the shortest Petal_length per Species.

Pandas

In Pandas, this is again a little bit more tricky than in Dplyr.
We first group our dataframe by Species and then apply a lambda function that makes use of the above-described nsmallest() or nlargest() function:

#returns 3 records with the shortest Petal_length per Species
(dataframe.groupby('Species', group_keys=False)
    .apply(lambda x: x.nsmallest(3, 'Petal_length')))
#Sepal_length  Sepal_width  Petal_length  Petal_width     Species
#         4.6          3.6           1.0          0.2      setosa
#         4.3          3.0           1.1          0.1      setosa
#         5.8          4.0           1.2          0.2      setosa
#         5.1          2.5           3.0          1.1  versicolor
#         4.9          2.4           3.3          1.0  versicolor
#         5.0          2.3           3.3          1.0  versicolor
#         4.9          2.5           4.5          1.7   virginica
#         6.2          2.8           4.8          1.8   virginica
#         6.0          3.0           4.8          1.8   virginica

#returns 5% of total records with the longest Petal_length per Species
prop = 0.05
(dataframe.groupby('Species', group_keys=False)
    .apply(lambda x: x.nlargest(int(len(x) * prop), 'Petal_length')))
#Sepal_length  Sepal_width  Petal_length  Petal_width     Species
#         4.8          3.4           1.9          0.2      setosa
#         5.1          3.8           1.9          0.4      setosa
#         6.0          2.7           5.1          1.6  versicolor
#         6.7          3.0           5.0          1.7  versicolor
#         7.7          2.6           6.9          2.3   virginica
#         7.7          3.8           6.7          2.2   virginica

Dplyr

In Dplyr this is much simpler since there are designated functions for this use case. Note how with_ties=FALSE can be provided so that ties (records with equal values) are not returned.

#returns 3 records with the shortest Petal_length per Species
dataframe %>% group_by(Species) %>% slice_min(Petal_length, n = 3, with_ties = FALSE)

#returns 5% of total records with the longest Petal_length per Species
dataframe %>% group_by(Species) %>% slice_max(Petal_length, prop = 0.05, with_ties = FALSE)

Slicing random records can also be called sampling. Again this can be done by providing a fixed number or a proportion. Furthermore, this can be done on the entire dataset or equally distributed based on a group. Since this is quite a frequent use case, there are functions for this in both frameworks:

Pandas

In Pandas you can use the sample() function and either specify n for a fixed amount of records or frac for a proportion of records.
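When sampling, it is also worth knowing the random_state parameter of sample(), which seeds the random generator and makes a draw reproducible across runs. A small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})

# the same seed returns the same rows on every call
s1 = df.sample(n=5, random_state=42)
s2 = df.sample(n=5, random_state=42)

print(s1.index.tolist() == s2.index.tolist())  # True
```

dplyr has no per-call equivalent; there you would call set.seed() before slice_sample() for the same effect.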
Furthermore, you can specify replace to allow or disallow sampling of the same row more than once.

#returns 20 random samples
dataframe.sample(n=20)

#returns 20% of total records
dataframe.sample(frac=0.2, replace=True)

#returns 10% of total records split by group
dataframe.groupby('Species').sample(frac=0.1)

Dplyr

The interface in Dplyr is very similar. You can use the slice_sample() function and either specify n for a fixed amount of records or prop for a proportion of records. Furthermore, you can specify replace to allow or disallow sampling of the same row more than once.

#returns 20 random samples
dataframe %>% slice_sample(n=20)

#returns 20% of total records
dataframe %>% slice_sample(prop=0.2, replace=TRUE)

#returns 10% of total records split by group
dataframe %>% group_by(Species) %>% slice_sample(prop=0.1)

Joining dataframes is also a frequent use case. (There is a wide range of join operations, but I am not going to get into details here.) Below you will learn how to perform a full (outer) join in both Pandas and Dplyr. Imagine you have two dataframes that share a common variable "key":

#Python Pandas
A = dataframe[["Species", "Sepal_width"]]
B = dataframe[["Species", "Sepal_length"]]

#R Dplyr
A <- dataframe %>% select(Species, Sepal_width)
B <- dataframe %>% select(Species, Sepal_length)

Pandas

For all join operations you can use the merge function in Pandas and specify what you want to join, how (outer, inner, left, right, ...) you want to join and on which key:

#Join dataframe A and B (WHAT), with a full join (HOW) by making use of the key "Species" (ON)
pd.merge(A, B, how="outer", on="Species")

Dplyr

In Dplyr the syntax is very similar; however, you have separate functions for each join type.
In this example, we will again perform a full join with the full_join() function:

#Join dataframe A and B (WHAT), with a full join (HOW) by making use of the key "Species" (ON)
A %>% full_join(B, by="Species")

Sometimes we don't want to join our dataframes but just append two existing dataframes, either by rows or by columns. Both Pandas and Dplyr have a nice interface for achieving this:

Pandas

In Pandas you can concatenate two dataframes with the concat() method. By default, dataframes are concatenated by rows. By specifying the axis (e.g. axis=1) you can concatenate two dataframes by columns. Note that if a value doesn't appear in one of the dataframes, it's automatically filled with NA.

#Concatenate by rows
pd.concat([A, B])
#   Species  Sepal_width  Sepal_length
#0   setosa          3.5           NaN
#1   setosa          3.0           NaN
#2   setosa          3.2           NaN
#3   setosa          3.1           NaN
# ...

#Concatenate by columns
pd.concat([A, B], axis=1)
#   Species  Sepal_width  Species  Sepal_length
#0   setosa          3.5   setosa           5.1
#1   setosa          3.0   setosa           4.9
#2   setosa          3.2   setosa           4.7
#3   setosa          3.1   setosa           4.6
# ...

Dplyr

In Dplyr there are two separate functions for binding dataframes: bind_rows() and bind_cols(). Note that if a value doesn't appear in one of the dataframes, it's automatically filled with NA when you apply bind_rows(). Also, note how R automatically changes the column names (to avoid duplicates). This behaviour can be changed with the .name_repair argument.

#Bind by rows
A %>% bind_rows(B)
#   Species  Sepal_width  Sepal_length
# 1  setosa          3.5            NA
# 2  setosa            3            NA
# 3  setosa          3.2            NA
# 4  setosa          3.1            NA
# ...

#Bind by columns
A %>% bind_cols(B)
#   Species...1  Sepal_width  Species...3  Sepal_length
# 1      setosa          3.5       setosa           5.1
# 2      setosa            3       setosa           4.9
# 3      setosa          3.2       setosa           4.7

Pfew! Congratulations! You might be the first person ever to reach the end of this article/cheat sheet. You can claim your reward by sending me a clap or leaving a comment below :)
Koa.js - Static Files
Static files are files that clients download as they are from the server. Koa, by default, doesn't serve static files; we need a middleware for this purpose. Go ahead and install koa-static −

$ npm install --save koa-static

Now we need to use this middleware. Before that, create a directory called public. We will store all our static files here. This allows us to keep our server code secure, as nothing above this public folder would be accessible to the clients. After you've created the public directory, create a file named hello.txt in it with any content you like. Now add the following to your app.js.

var serve = require('koa-static');
var koa = require('koa');
var app = koa();

app.use(serve('./public'));

app.listen(3000);

Note − Koa looks up the files relative to the static directory, so the name of the static directory is not part of the URL. The root route is now set to your public dir, so all static files you load will use public as the root. To test that this is working fine, run your app and visit http://localhost:3000/hello.txt

You should see the contents of hello.txt. Note that this is not an HTML document or Pug view; rather, it is a simple txt file. We can also set multiple static asset directories using −

var serve = require('koa-static');
var koa = require('koa');
var app = koa();

app.use(serve('./public'));
app.use(serve('./images'));

app.listen(3000);

Now when we request a file, Koa will search these directories and send us the matching file.
Complete Guide to Data Visualization with Python | by Albert Sanchez Lafuente | Towards Data Science
Let's see the main libraries for data visualization with Python and all the types of charts that can be done with them. We will also see which library is recommended to use on each occasion and the unique capabilities of each library.

We will start with the most basic visualization, which is looking at the data directly, then we will move on to plotting charts and finally, we will make interactive charts.

We will work with two datasets that will adapt to the visualizations we show in the article; the datasets can be downloaded here. They are data on the popularity of searches on the Internet for three terms related to artificial intelligence (data science, machine learning and deep learning). They have been extracted from a famous search engine.

There are two files, temporal.csv and mapa.csv. The first one, which we will use in the vast majority of the tutorial, includes popularity data of the three terms over time (from 2004 to the present, 2020). In addition, I have added a categorical variable (ones and zeros) to demonstrate the functionality of charts with categorical variables. The file mapa.csv includes popularity data separated by country. We will use it in the last section of the article when working with maps.

Before we move on to more complex methods, let's start with the most basic way of visualizing data. We will simply use pandas to take a look at the data and get an idea of how it is distributed. The first thing we must do is visualize a few examples to see what columns there are, what information they contain, how the values are coded...

import pandas as pd

df = pd.read_csv('temporal.csv')
df.head(10) #View first 10 data rows

With the describe command we will see how the data is distributed, the maximums, the minimums, the mean, ...

df.describe()

With the info command we will see what type of data each column includes.
We could find the case of a column that, when viewed with the head command, seems numeric, but if we look at subsequent data there are values in string format; the variable will then be coded as a string.

df.info()

By default, pandas limits the number of rows and columns it displays. This usually bothers me because I want to be able to visualize all the data. With these commands, we increase the limits and we can visualize the whole data. Be careful with this option for big datasets; we can have problems displaying them.

pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)

Using Pandas styles, we can get much more information when viewing the table. First, we define a format dictionary so that the numbers are shown in a legible way (with a certain number of decimals, date and hour in a relevant format, with a percentage, with a currency, ...). Don't panic, this is only a display and does not change the data, so you will not have any problem processing it later. To give an example of each type, I have added currency and percentage symbols even though they do not make any sense for this data.

format_dict = {'data science':'${0:,.2f}', 'Mes':'{:%m-%Y}', 'machine learning':'{:.2%}'}

#We make sure that the Month column has datetime format
df['Mes'] = pd.to_datetime(df['Mes'])

#We apply the style to the visualization
df.head().style.format(format_dict)

We can highlight maximum and minimum values with colours.

format_dict = {'Mes':'{:%m-%Y}'} #Simplified format dictionary with values that do make sense for our data
df.head().style.format(format_dict).highlight_max(color='darkgreen').highlight_min(color='#ff0000')

We use a color gradient to display the data values.

df.head(10).style.format(format_dict).background_gradient(subset=['data science', 'machine learning'], cmap='BuGn')

We can also display the data values with bars.
df.head().style.format(format_dict).bar(color='red', subset=['data science', 'deep learning'])

Moreover, we can also combine the above functions and generate a more complex visualization.

df.head(10).style.format(format_dict).background_gradient(subset=['data science', 'machine learning'], cmap='BuGn').highlight_max(color='yellow')

Learn more about styling visualizations with Pandas here: https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html

Pandas profiling is a library that generates interactive reports with our data: we can see the distribution of the data, the types of data and possible problems it might have. It is very easy to use; with only 3 lines we can generate a report that we can send to anyone and that can be used even if you do not know programming.

from pandas_profiling import ProfileReport

prof = ProfileReport(df)
prof.to_file(output_file='report.html')

You can see the interactive report generated from the data used in the article here. You can find more information about Pandas Profiling in this article.

Matplotlib is the most basic library for visualizing data graphically. It includes many of the graphs that we can think of. Just because it is basic does not mean that it is not powerful; many of the other data visualization libraries we are going to talk about are based on it.

Matplotlib's charts are made up of two main components: the axes (the lines that delimit the area of the chart) and the figure (where we draw the axes, titles and things that come out of the area of the axes). Now let's create the simplest graph possible:

import matplotlib.pyplot as plt

plt.plot(df['Mes'], df['data science'], label='data science') #The label parameter indicates the legend. This doesn't mean that it will be shown; we'll have to use another command that I'll explain later.

We can make graphs of multiple variables in the same chart and thus compare them.
plt.plot(df['Mes'], df['data science'], label='data science')
plt.plot(df['Mes'], df['machine learning'], label='machine learning')
plt.plot(df['Mes'], df['deep learning'], label='deep learning')

It is not very clear which variable each color represents. We're going to improve the chart by adding a legend and titles.

plt.plot(df['Mes'], df['data science'], label='data science')
plt.plot(df['Mes'], df['machine learning'], label='machine learning')
plt.plot(df['Mes'], df['deep learning'], label='deep learning')
plt.xlabel('Date')
plt.ylabel('Popularity')
plt.title('Popularity of AI terms by date')
plt.grid(True)
plt.legend()

If you are working with Python from the terminal or a script, after defining the graph with the functions we have written above, use plt.show(). If you're working from a Jupyter notebook, add %matplotlib inline to the beginning of the file and run it before making the chart.

We can make multiple graphics in one figure. This goes very well for comparing charts or for sharing data from several types of charts easily with a single image.

fig, axes = plt.subplots(2,2)
axes[0, 0].hist(df['data science'])
axes[0, 1].scatter(df['Mes'], df['data science'])
axes[1, 0].plot(df['Mes'], df['machine learning'])
axes[1, 1].plot(df['Mes'], df['deep learning'])

We can draw the graph with different styles for the points of each variable:

plt.plot(df['Mes'], df['data science'], 'r-')
plt.plot(df['Mes'], df['data science']*2, 'bs')
plt.plot(df['Mes'], df['data science']*3, 'g^')

Now let's see a few examples of the different graphics we can do with Matplotlib. We start with a scatterplot:

plt.scatter(df['data science'], df['machine learning'])

Example of a bar chart:

plt.bar(df['Mes'], df['machine learning'], width=20)

Example of a histogram:

plt.hist(df['deep learning'], bins=15)

We can add a text to the graphic; we indicate the position of the text in the same units that we see in the graphic.
In the text, we can even add special characters following the TeX language. We can also add markers that point to a particular point on the graph.

plt.plot(df['Mes'], df['data science'], label='data science')
plt.plot(df['Mes'], df['machine learning'], label='machine learning')
plt.plot(df['Mes'], df['deep learning'], label='deep learning')
plt.xlabel('Date')
plt.ylabel('Popularity')
plt.title('Popularity of AI terms by date')
plt.grid(True)
plt.text(x='2010-01-01', y=80, s=r'$\lambda=1, r^2=0.8$') #Coordinates use the same units as the graph
plt.annotate('Notice something?', xy=('2014-01-01', 30), xytext=('2006-01-01', 50), arrowprops={'facecolor':'red', 'shrink':0.05})

Gallery of examples: In this link: https://matplotlib.org/gallery/index.html we can see examples of all types of graphics that can be done with Matplotlib.

Seaborn is a library based on Matplotlib. Basically what it gives us are nicer graphics and functions to make complex types of graphics with just one line of code.

We import the library and initialize the style of the graphics with sns.set(); without this command the graphics would still have the same style as Matplotlib. We show one of the simplest graphics, a scatterplot:

import seaborn as sns

sns.set()
sns.scatterplot(df['Mes'], df['data science'])

We can add information of more than two variables in the same graph. For this we use colors and sizes. We also make a different graph according to the value of the categorical column:

sns.relplot(x='Mes', y='deep learning', hue='data science', size='machine learning', col='categorical', data=df)

One of the most popular graphics provided by Seaborn is the heatmap. It is very common to use it to show all the correlations between variables in a dataset:

sns.heatmap(df.corr(), annot=True, fmt='.2f')

Another of the most popular is the pairplot, which shows us the relationships between all the variables.
Be careful with this function if you have a large dataset, as it has to show all the data points as many times as there are columns. This means that by increasing the dimensionality of the data, the processing time increases exponentially.

sns.pairplot(df)

Now let's do the pairplot showing the charts segmented according to the values of the categorical variable:

sns.pairplot(df, hue='categorical')

A very informative graph is the jointplot, which allows us to see a scatterplot together with a histogram of the two variables and see how they are distributed:

sns.jointplot(x='data science', y='machine learning', data=df)

Another interesting graphic is the ViolinPlot:

sns.catplot(x='categorical', y='data science', kind='violin', data=df)

We can create multiple graphics in one image just like we did with Matplotlib:

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 4))
sns.scatterplot(x="Mes", y="deep learning", hue="categorical", data=df, ax=axes[0])
axes[0].set_title('Deep Learning')
sns.scatterplot(x="Mes", y="machine learning", hue="categorical", data=df, ax=axes[1])
axes[1].set_title('Machine Learning')

Gallery of examples: In this link, we can see examples of everything that can be done with Seaborn.

Bokeh is a library that allows you to generate interactive graphics. We can export them to an HTML document that we can share with anyone who has a web browser. It is a very useful library when we are interested in looking for things in the graphics and we want to be able to zoom in and move around the graphic. Or when we want to share them and give the other person the possibility to explore the data.
We start by importing the library and defining the file in which we will save the graph:

from bokeh.plotting import figure, output_file, save
from bokeh.layouts import gridplot #needed later for combining several plots

output_file('data_science_popularity.html')

We draw what we want and save it to the file:

p = figure(title='data science', x_axis_label='Mes', y_axis_label='data science')
p.line(df['Mes'], df['data science'], legend='popularity', line_width=2)
save(p)

You can see how the file data_science_popularity.html looks by clicking here. It's interactive: you can move around the graphic and zoom in as you like.

Adding multiple graphics to a single file:

output_file('multiple_graphs.html')

s1 = figure(width=250, plot_height=250, title='data science')
s1.circle(df['Mes'], df['data science'], size=10, color='navy', alpha=0.5)

s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title='machine learning') #share both axis range
s2.triangle(df['Mes'], df['machine learning'], size=10, color='red', alpha=0.5)

s3 = figure(width=250, height=250, x_range=s1.x_range, title='deep learning') #share only one axis range
s3.square(df['Mes'], df['deep learning'], size=5, color='green', alpha=0.5)

p = gridplot([[s1, s2, s3]])
save(p)

You can see how the file multiple_graphs.html looks by clicking here.

Gallery of examples: In this link https://docs.bokeh.org/en/latest/docs/gallery.html you can see examples of everything that can be done with Bokeh.

Altair, in my opinion, does not bring anything new to what we have already discussed with the other libraries, and therefore I will not talk about it in depth. I want to mention this library because maybe in its gallery of examples we can find some specific graphic that can help us.

Gallery of examples: In this link you can find the gallery of examples with everything you can do with Altair.

Folium is a library that allows us to draw maps and markers, and we can also draw our data on them. Folium lets us choose the map supplier; this determines the style and quality of the map.
In this article, for simplicity, we're only going to look at OpenStreetMap as the map provider. Working with maps is quite complex and deserves its own article. Here we're just going to look at the basics and draw a couple of maps with the data we have.

Let's begin with the basics: we'll draw a simple map with nothing on it.

import folium

m1 = folium.Map(location=[41.38, 2.17], tiles='openstreetmap', zoom_start=18)
m1.save('map1.html')

We generate an interactive file for the map in which you can move and zoom as you wish. You can see it here.

We can add markers to the map:

m2 = folium.Map(location=[41.38, 2.17], tiles='openstreetmap', zoom_start=16)
folium.Marker([41.38, 2.176], popup='<i>You can use whatever HTML code you want</i>', tooltip='click here').add_to(m2)
folium.Marker([41.38, 2.174], popup='<b>You can use whatever HTML code you want</b>', tooltip='dont click here').add_to(m2)
m2.save('map2.html')

You can see the interactive map file where you can click on the markers by clicking here.

In the dataset presented at the beginning, we have country names and the popularity of the terms of artificial intelligence. After a quick visualization you can see that there are countries where one of these values is missing. We are going to eliminate these countries to make it easier. Then we will use Geopandas to transform the country names into coordinates that we can draw on the map.

from geopandas.tools import geocode

df2 = pd.read_csv('mapa.csv')
df2.dropna(axis=0, inplace=True)
df2['geometry'] = geocode(df2['País'], provider='nominatim')['geometry'] #It may take a while because it downloads a lot of data.
df2['Latitude'] = df2['geometry'].apply(lambda l: l.y)
df2['Longitude'] = df2['geometry'].apply(lambda l: l.x)

Now that we have the data coded in latitude and longitude, let's represent it on the map. We'll start with a BubbleMap where we'll draw circles over the countries.
Their size will depend on the popularity of the term, and their colour will be red or green depending on whether their popularity is above a certain value or not. Note that the column names must match those created above ('Latitude' and 'Longitude'):

m3 = folium.Map(location=[39.326234, -4.838065], tiles='openstreetmap', zoom_start=3)

def color_producer(val):
    if val <= 50:
        return 'red'
    else:
        return 'green'

for i in range(0, len(df2)):
    folium.Circle(location=[df2.iloc[i]['Latitude'], df2.iloc[i]['Longitude']],
                  radius=5000*df2.iloc[i]['data science'],
                  color=color_producer(df2.iloc[i]['data science'])).add_to(m3)

m3.save('map3.html')

You can view the interactive map file by clicking here.

With all this variety of libraries, you may be wondering which library is best for your project. The quick answer is: the library that allows you to easily make the graphic you want.

For the initial phases of a project, with pandas and pandas-profiling we will make a quick visualization to understand the data. If we need to visualize more information, we can use simple graphs that we can find in matplotlib, such as scatter plots or histograms.

For advanced phases of the project, we can search the galleries of the main libraries (Matplotlib, Seaborn, Bokeh, Altair) for the graphics that we like and fit the project. These graphics can be used to give information in reports, make interactive reports, search for specific values, ...
Comprehensions in Python - GeeksforGeeks
14 Nov, 2018

Comprehensions in Python provide us with a short and concise way to construct new sequences (such as lists, sets, dictionaries etc.) using sequences which have already been defined. Python supports the following 4 types of comprehensions:

List Comprehensions
Dictionary Comprehensions
Set Comprehensions
Generator Comprehensions

List Comprehensions provide an elegant way to create new lists. The following is the basic structure of a list comprehension:

output_list = [output_exp for var in input_list if (var satisfies this condition)]

Note that a list comprehension may or may not contain an if condition. List comprehensions can contain multiple for clauses (nested list comprehensions).

Example #1: Suppose we want to create an output list which contains only the even numbers which are present in the input list. Let's see how to do this using for loops and list comprehension and decide which method suits better.

# Constructing output list WITHOUT
# using list comprehensions
input_list = [1, 2, 3, 4, 4, 5, 6, 7, 7]

output_list = []

# Using loop for constructing output list
for var in input_list:
    if var % 2 == 0:
        output_list.append(var)

print("Output List using for loop:", output_list)

Output:

Output List using for loop: [2, 4, 4, 6]

# Using list comprehensions
# for constructing output list
input_list = [1, 2, 3, 4, 4, 5, 6, 7, 7]

list_using_comp = [var for var in input_list if var % 2 == 0]

print("Output List using list comprehensions:", list_using_comp)

Output:

Output List using list comprehensions: [2, 4, 4, 6]

Example #2: Suppose we want to create an output list which contains squares of all the numbers from 1 to 9. Let's see how to do this using for loops and list comprehension.
# Constructing output list using for loop
output_list = []

for var in range(1, 10):
    output_list.append(var ** 2)

print("Output List using for loop:", output_list)

Output:

Output List using for loop: [1, 4, 9, 16, 25, 36, 49, 64, 81]

# Constructing output list
# using list comprehension
list_using_comp = [var**2 for var in range(1, 10)]

print("Output List using list comprehension:", list_using_comp)

Output:

Output List using list comprehension: [1, 4, 9, 16, 25, 36, 49, 64, 81]

Extending the idea of list comprehensions, we can also create a dictionary using dictionary comprehensions. The basic structure of a dictionary comprehension looks like below.

output_dict = {key:value for (key, value) in iterable if (key, value satisfy this condition)}

Example #1: Suppose we want to create an output dictionary which contains only the odd numbers that are present in the input list as keys and their cubes as values. Let's see how to do this using for loops and dictionary comprehension.

input_list = [1, 2, 3, 4, 5, 6, 7]

output_dict = {}

# Using loop for constructing output dictionary
for var in input_list:
    if var % 2 != 0:
        output_dict[var] = var**3

print("Output Dictionary using for loop:", output_dict)

Output:

Output Dictionary using for loop: {1: 1, 3: 27, 5: 125, 7: 343}

# Using dictionary comprehensions
# for constructing output dictionary
input_list = [1, 2, 3, 4, 5, 6, 7]

dict_using_comp = {var: var ** 3 for var in input_list if var % 2 != 0}

print("Output Dictionary using dictionary comprehensions:", dict_using_comp)

Output:

Output Dictionary using dictionary comprehensions: {1: 1, 3: 27, 5: 125, 7: 343}

Example #2: Given two lists containing the names of states and their corresponding capitals, construct a dictionary which maps the states with their respective capitals. Let's see how to do this using for loops and dictionary comprehension.
state = ['Gujarat', 'Maharashtra', 'Rajasthan']
capital = ['Gandhinagar', 'Mumbai', 'Jaipur']

output_dict = {}

# Using loop for constructing output dictionary
for (key, value) in zip(state, capital):
    output_dict[key] = value

print("Output Dictionary using for loop:", output_dict)

Output:

Output Dictionary using for loop: {'Gujarat': 'Gandhinagar', 'Maharashtra': 'Mumbai', 'Rajasthan': 'Jaipur'}

# Using dictionary comprehensions
# for constructing output dictionary
state = ['Gujarat', 'Maharashtra', 'Rajasthan']
capital = ['Gandhinagar', 'Mumbai', 'Jaipur']

dict_using_comp = {key: value for (key, value) in zip(state, capital)}

print("Output Dictionary using dictionary comprehensions:", dict_using_comp)

Output:

Output Dictionary using dictionary comprehensions: {'Rajasthan': 'Jaipur', 'Maharashtra': 'Mumbai', 'Gujarat': 'Gandhinagar'}

Set comprehensions are pretty similar to list comprehensions. The only difference between them is that set comprehensions use curly brackets { }. Let's look at the following example to understand set comprehensions.

Example #1: Suppose we want to create an output set which contains only the even numbers that are present in the input list. Note that a set will discard all the duplicate values. Let's see how we can do this using for loops and set comprehension.

input_list = [1, 2, 3, 4, 4, 5, 6, 6, 6, 7, 7]

output_set = set()

# Using loop for constructing output set
for var in input_list:
    if var % 2 == 0:
        output_set.add(var)

print("Output Set using for loop:", output_set)

Output:

Output Set using for loop: {2, 4, 6}

# Using set comprehensions
# for constructing output set
input_list = [1, 2, 3, 4, 4, 5, 6, 6, 6, 7, 7]

set_using_comp = {var for var in input_list if var % 2 == 0}

print("Output Set using set comprehensions:", set_using_comp)

Output:

Output Set using set comprehensions: {2, 4, 6}

Generator Comprehensions are very similar to list comprehensions.
One difference between them is that generator comprehensions use round brackets (parentheses) whereas list comprehensions use square brackets. The major difference between them is that generators don't allocate memory for the whole list. Instead, they generate each value one by one, which is why they are memory efficient. Let's look at the following example to understand generator comprehension:

input_list = [1, 2, 3, 4, 4, 5, 6, 7, 7]

output_gen = (var for var in input_list if var % 2 == 0)

print("Output values using generator comprehensions:", end=' ')

for var in output_gen:
    print(var, end=' ')

Output:

Output values using generator comprehensions: 2 4 4 6
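The nested (multiple for) form mentioned at the start of the list-comprehension section is not shown in the article; here is a short illustrative sketch (our own example, not from the original article):

```python
# Flattening a matrix with a nested list comprehension.
# The for clauses read left to right, exactly like nested loops.
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

flattened = [num for row in matrix for num in row]
print("Flattened list:", flattened)

# Equivalent nested for loops, for comparison
flattened_loop = []
for row in matrix:
    for num in row:
        flattened_loop.append(num)
print("Same result with loops:", flattened_loop)
```

Output:

Flattened list: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Same result with loops: [1, 2, 3, 4, 5, 6, 7, 8, 9]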
Probabilistic Forecasts: Pinball Loss Function | Towards Data Science
Let's start with a few questions. Read them first before going through the article. By the end of your reading, you should be able to answer them. (The answers are provided at the end, as well as a Python implementation.)

1. You want to forecast your product's demand. Specifically, you want to predict a value for which the demand has an 80% probability of being under. What is the worst, over forecasting the actual demand or under forecasting it?
2. You design a forecast model that reduces the absolute error (or MAE). Is your model aiming for the average demand or the median demand?
3. You made a 95% quantile demand forecast, your forecast was 150, and the observed demand is 120. How would you assess the quality of your forecast?
4. You sell high-margin products. What is the worst: overstocking or understocking them? When setting the safety stock target, should you aim for a high or a low demand quantile?

A quantile forecast is a probabilistic forecast aiming at a specific demand quantile (or percentile).

Definition: Quantile
The quantile α (α is a percentage, 0 < α < 1) of a random distribution is the value for which the probability for an occurrence of this distribution to be below this value is α.
P(x <= Quantile_α) = α

In other words, the quantile is the inverse of the distribution's cumulative distribution function evaluated at α:

Quantile_α = F^(-1)(α)

In simple words, if you do an 80% quantile forecast of tomorrow's weather, you would say, "There is an 80% probability that the temperature will be 20°C or lower."

Side note: If you are used to inventory optimization and the usual safety stock formula

Ss = z * σ * √(R+L)

then z is the Gaussian α quantile, z = Φ^(-1)(α). Somehow, you can see the safety stock formula as the α quantile forecast of the demand over the risk horizon. PS: Remember to include the review period in this formula.

How can we assess the accuracy of a quantile forecast? Let's take another look at our above example: you forecast that tomorrow's 80th temperature quantile is 20°C. The next day, the temperature is 16°C. How accurate was your forecast?

Before discussing the math, let's get an intuition on how we should evaluate (or penalize) a quantile forecast. Somehow, when we say, "We forecast the temperature to have 80% of chances to be below 20°C," we expect the temperature to be lower than 20°C and would be surprised if the temperature were 25°C. In other words, this quantile forecast should get a higher penalty if it under forecasted the temperature than if it overpredicted it. Moreover, as shown in the figure above, the penalty for over forecasting should be even higher for higher quantiles. For example, if your 95% temperature quantile forecast is 20°C, you would be very surprised if the actual temperature was 25°C.

Let's formalize this forecasting penalty using the pinball loss function (or quantile loss function). The pinball loss function L_α is computed for a quantile α, the quantile forecast f, and the demand d as

L_α(d, f) = α (d - f)         if d >= f
L_α(d, f) = (1 - α)(f - d)    if f > d

This loss function aims to provide a forecast with an α probability of under forecasting the demand and a (1 - α) probability of over forecasting the demand.
Let's see how the pinball function works in practice. In the figure below, you can see an example with d = 100 and α = 90% (we want to forecast the demand's 90th quantile), where the pinball loss function L_α is computed for different values of f.

As you can see, the pinball loss L_α is highly asymmetrical: it doesn't increase at the same speed if you over forecast (low penalty) or under forecast (high penalty). Let's take an example: forecasting 20 units too much will result in a loss of 10% * |f - d| = 2 (that is to say, over forecasting is not much penalized). But, on the other side, forecasting 20 units too little will result in a loss of 90% * |f - d| = 18. Basically, under forecasting is penalized by α|e| (where e is the forecast error), whereas over forecasting is penalized by (1 - α)|e|.

In my forecasting KPIs article, I highlighted that optimizing the forecast mean absolute error (MAE) will ultimately aim at forecasting the median demand.

As you can see in the figure below, the 50% quantile pinball loss function corresponds to the regular absolute error: they are interchangeable. Keep in mind that, by definition, forecasting the 50% demand quantile is the same as predicting the demand median. This is another illustration that optimizing the MAE will result in a forecast aiming at the demand median (and not the demand average).

1. You want to forecast your product's demand. Specifically, you want to predict a value for which the demand has an 80% probability of being under. What is the worst, over forecasting the actual demand or under forecasting it?

Underforecasting the demand is the worst. If you want to forecast the 80th quantile, you should aim for a value that is likely to be higher than the observed value.

2.
You design a forecast model that reduces the absolute error (or MAE). Is your model aiming for the average demand or the median demand?

Optimizing the mean absolute error will ultimately result in forecasting the expected median demand and not the expected mean demand.

3. You made a 95% quantile demand forecast, your forecast was 150, and the observed demand is 120. How would you assess the quality of your forecast?

You can compute the pinball loss of your quantile forecast. Your forecast has an absolute error of 30 units (= |150 - 120|), and it is an overprediction, so your pinball loss is 1.5 units (= 30 * 0.05).

4. You sell high-margin products. What is the worst: overstocking or understocking them? When setting the safety stock target, should you aim for a high or a low demand quantile?

You want to have a lot of stock of high-margin products, so understocking them is a bad decision. Henceforth, when setting the stock target you should aim for a high demand quantile.

You can easily implement the pinball loss function in Python in a single expression (with a few modifications compared to the original equation).

def pinball_loss(d, f, alpha):
    return max(alpha*(d-f), (1-alpha)*(f-d))
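As a quick sanity check, this one-line implementation reproduces the numbers used earlier in the article (the 90% quantile example with demand 100, and the answer to question 3):

```python
def pinball_loss(d, f, alpha):
    # d = observed demand, f = quantile forecast, alpha = target quantile
    return max(alpha * (d - f), (1 - alpha) * (f - d))

# 90% quantile example with observed demand d = 100:
print(round(pinball_loss(100, 120, 0.9), 2))   # over forecast by 20 -> 2.0
print(round(pinball_loss(100, 80, 0.9), 2))    # under forecast by 20 -> 18.0

# Question 3: 95% quantile forecast of 150, observed demand 120:
print(round(pinball_loss(120, 150, 0.95), 2))  # -> 1.5
```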
CNN | Introduction to Pooling Layer - GeeksforGeeks
29 Jul, 2021

The pooling operation involves sliding a two-dimensional filter over each channel of the feature map and summarising the features lying within the region covered by the filter. For a feature map having dimensions nh x nw x nc, the dimensions of the output obtained after a pooling layer are:

((nh - f) / s + 1) x ((nw - f) / s + 1) x nc

where the divisions are floored, and:

nh - height of feature map
nw - width of feature map
nc - number of channels in the feature map
f  - size of filter
s  - stride length

A common CNN model architecture is to have a number of convolution and pooling layers stacked one after the other. Pooling layers are used to reduce the dimensions of the feature maps. Thus, they reduce the number of parameters to learn and the amount of computation performed in the network. The pooling layer summarises the features present in a region of the feature map generated by a convolution layer. So, further operations are performed on summarised features instead of precisely positioned features generated by the convolution layer. This makes the model more robust to variations in the position of the features in the input image.

Max pooling is a pooling operation that selects the maximum element from the region of the feature map covered by the filter. Thus, the output after a max-pooling layer would be a feature map containing the most prominent features of the previous feature map.
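To make the arithmetic concrete, here is a plain-NumPy sketch of 2D max pooling (our own illustrative example, not from the original article). It uses the standard output-size rule out = floor((n - f) / s) + 1 for each spatial dimension, so a 4x4 map with f = 2 and s = 2 pools down to 2x2:

```python
import numpy as np

def max_pool_2d(feature_map, f=2, s=2):
    """Max pooling over a single-channel 2D feature map.
    f is the filter size, s is the stride."""
    nh, nw = feature_map.shape
    out_h = (nh - f) // s + 1
    out_w = (nw - f) // s + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # take the maximum over the f x f window at this position
            window = feature_map[i*s:i*s+f, j*s:j*s+f]
            out[i, j] = window.max()
    return out

fm = np.array([[2, 2, 7, 3],
               [9, 4, 6, 1],
               [8, 5, 2, 4],
               [3, 1, 2, 6]])
print(max_pool_2d(fm))  # [[9. 7.]
                        #  [8. 6.]]
```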
This can be achieved using the MaxPooling2D layer in Keras as follows.

Code #1: Performing max pooling using Keras

import numpy as np
from keras.models import Sequential
from keras.layers import MaxPooling2D

# define input image
image = np.array([[2, 2, 7, 3],
                  [9, 4, 6, 1],
                  [8, 5, 2, 4],
                  [3, 1, 2, 6]])
image = image.reshape(1, 4, 4, 1)

# define model containing just a single max pooling layer
model = Sequential([MaxPooling2D(pool_size=2, strides=2)])

# generate pooled output
output = model.predict(image)

# print output image
output = np.squeeze(output)
print(output)

Output:

[[9. 7.]
 [8. 6.]]

Average pooling computes the average of the elements present in the region of the feature map covered by the filter. Thus, while max pooling gives the most prominent feature in a particular patch of the feature map, average pooling gives the average of the features present in a patch.

Code #2: Performing average pooling using Keras

import numpy as np
from keras.models import Sequential
from keras.layers import AveragePooling2D

# define input image
image = np.array([[2, 2, 7, 3],
                  [9, 4, 6, 1],
                  [8, 5, 2, 4],
                  [3, 1, 2, 6]])
image = image.reshape(1, 4, 4, 1)

# define model containing just a single average pooling layer
model = Sequential([AveragePooling2D(pool_size=2, strides=2)])

# generate pooled output
output = model.predict(image)

# print output image
output = np.squeeze(output)
print(output)

Output:

[[4.25 4.25]
 [4.25 3.5 ]]

Global pooling reduces each channel in the feature map to a single value.
Thus, an nh x nw x nc feature map is reduced to a 1 x 1 x nc feature map. This is equivalent to using a filter of dimensions nh x nw, i.e. the dimensions of the feature map. Further, it can be either global max pooling or global average pooling.

Code #3: Performing global pooling using Keras

import numpy as np
from keras.models import Sequential
from keras.layers import GlobalMaxPooling2D
from keras.layers import GlobalAveragePooling2D

# define input image
image = np.array([[2, 2, 7, 3],
                  [9, 4, 6, 1],
                  [8, 5, 2, 4],
                  [3, 1, 2, 6]])
image = image.reshape(1, 4, 4, 1)

# define gm_model containing just a single global-max pooling layer
gm_model = Sequential([GlobalMaxPooling2D()])

# define ga_model containing just a single global-average pooling layer
ga_model = Sequential([GlobalAveragePooling2D()])

# generate pooled output
gm_output = gm_model.predict(image)
ga_output = ga_model.predict(image)

# print output image
gm_output = np.squeeze(gm_output)
ga_output = np.squeeze(ga_output)
print("gm_output: ", gm_output)
print("ga_output: ", ga_output)

Output:

gm_output: 9.0
ga_output: 4.0625
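Since global pooling is just a max or a mean over the entire channel, the Keras result above can be verified by hand with NumPy (our own check, mirroring the same 4x4 input):

```python
import numpy as np

# The same 4x4 single-channel feature map as in the Keras example
fm = np.array([[2, 2, 7, 3],
               [9, 4, 6, 1],
               [8, 5, 2, 4],
               [3, 1, 2, 6]])

# Global max pooling: one value for the whole channel
print("gm_output:", fm.max())   # 9

# Global average pooling: one value for the whole channel
print("ga_output:", fm.mean())  # 4.0625
```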
C++ IOS Library - Flags
It is used to get/set format flags. The format flags of a stream affect the way data is interpreted in certain input functions and how it is written by certain output functions. See ios_base::fmtflags for the possible values of this function's argument and the interpretation of its return value.

The second form of this function sets the value for all the format flags of the stream, overwriting the existing values and clearing any flag not explicitly set in the argument. To access individual flags, see members setf and unsetf.

Following is the declaration for the ios_base::flags function.

get (1)  fmtflags flags() const;
set (2)  fmtflags flags (fmtflags fmtfl);

The first form (1) returns the format flags currently selected in the stream. The second form (2) sets new format flags for the stream, returning its former value.

fmtfl − Format flags to be used by the stream. ios_base::fmtflags is a bitmask type.

The format flags selected in the stream before the call.

Basic guarantee − if an exception is thrown, the stream is in a valid state.

Concurrent access to the same stream object may cause data races.

The example below demonstrates the ios_base::flags function.

#include <iostream>

int main () {
   std::cout.flags ( std::ios::right | std::ios::hex | std::ios::showbase );
   std::cout.width (10);
   std::cout << 100 << '\n';

   return 0;
}

Let us compile and run the above program; this will produce the following result (100 printed in hex with its base, right-aligned in a field of width 10) −

      0x64
Python - Plotting Pie charts in excel sheet using XlsxWriter module
A pie chart is a circular statistical graphic, which is divided into slices to illustrate numerical proportion. In a pie chart, the arc length of each slice is proportional to the quantity it represents.

# import xlsxwriter module
import xlsxwriter

# Workbook() takes one, non-optional, argument
# which is the filename that we want to create.
workbook = xlsxwriter.Workbook('chart_pie.xlsx')

# The workbook object is then used to add a new
# worksheet via the add_worksheet() method.
worksheet = workbook.add_worksheet()

# Create a new Format object to format cells in
# worksheets using the add_format() method.
# Here we create a bold format object.
bold = workbook.add_format({'bold': 1})

# create a data list
headings = ['Category', 'Values']

data = [
    ['Apple', 'Cherry', 'Pecan'],
    [60, 30, 10],
]

# Write a row of data starting from 'A1' with bold format.
worksheet.write_row('A1', headings, bold)

# Write columns of data starting from A2 and B2 respectively.
worksheet.write_column('A2', data[0])
worksheet.write_column('B2', data[1])

# Create a chart object that can be added to a
# worksheet using the add_chart() method.
# Here we create a pie chart object.
chart1 = workbook.add_chart({'type': 'pie'})

# Add a data series to the chart using the add_series method.
# Configure the first series:
# [sheetname, first_row, first_col, last_row, last_col].
chart1.add_series({
    'name': 'Pie sales data',
    'categories': ['Sheet1', 1, 0, 3, 0],
    'values': ['Sheet1', 1, 1, 3, 1],
})

# Add a chart title
chart1.set_title({'name': 'Popular Pie Types'})

# Set an Excel chart style. Colors with white outline and shadow.
chart1.set_style(10)

# Insert the chart into the worksheet (with an offset).
# The top-left corner of the chart is anchored to cell C2.
worksheet.insert_chart('C2', chart1, {'x_offset': 25, 'y_offset': 10})

# Finally, close the Excel file via the close() method.
workbook.close()
A Simple Guide to the Versions of the Inception Network | by Bharath Raj | Towards Data Science
The Inception network was an important milestone in the development of CNN classifiers. Prior to its inception (pun intended), most popular CNNs just stacked convolution layers deeper and deeper, hoping to get better performance.

The Inception network, on the other hand, was complex (heavily engineered). It used a lot of tricks to push performance, both in terms of speed and accuracy. Its constant evolution led to the creation of several versions of the network. The popular versions are as follows:

Inception v1.
Inception v2 and Inception v3.
Inception v4 and Inception-ResNet.

Each version is an iterative improvement over the previous one. Understanding the upgrades can help us to build custom classifiers that are optimized both in speed and accuracy. This blog post aims to elucidate the evolution of the Inception network.

This is where it all started. Let us analyze what problem it was purported to solve, and how it solved it. (Paper)

Salient parts in the image can have extremely large variation in size. For instance, an image with a dog can be either of the following, as shown below. The area occupied by the dog is different in each image.

Because of this huge variation in the location of the information, choosing the right kernel size for the convolution operation becomes tough. A larger kernel is preferred for information that is distributed more globally, and a smaller kernel is preferred for information that is distributed more locally.

Very deep networks are prone to overfitting. It is also hard to pass gradient updates through the entire network. Naively stacking large convolution operations is computationally expensive.

Why not have filters with multiple sizes operate on the same level?
The network essentially would get a bit "wider" rather than "deeper". The authors designed the inception module to reflect the same.

The below image is the "naive" inception module. It performs convolution on an input with 3 different sizes of filters (1x1, 3x3, 5x5). Additionally, max pooling is also performed. The outputs are concatenated and sent to the next inception module.

As stated before, deep neural networks are computationally expensive. To make it cheaper, the authors limit the number of input channels by adding an extra 1x1 convolution before the 3x3 and 5x5 convolutions. Though adding an extra operation may seem counterintuitive, 1x1 convolutions are far cheaper than 5x5 convolutions, and the reduced number of input channels also helps. Do note, however, that the 1x1 convolution is introduced after the max pooling layer, rather than before.

Using the dimension-reduced inception module, a neural network architecture was built. This was popularly known as GoogLeNet (Inception v1). The architecture is shown below:

GoogLeNet has 9 such inception modules stacked linearly. It is 22 layers deep (27, including the pooling layers). It uses global average pooling at the end of the last inception module.

Needless to say, it is a pretty deep classifier. As with any very deep network, it is subject to the vanishing gradient problem. To prevent the middle part of the network from "dying out", the authors introduced two auxiliary classifiers (the purple boxes in the image). They essentially applied softmax to the outputs of two of the inception modules, and computed an auxiliary loss over the same labels. The total loss function is a weighted sum of the auxiliary loss and the real loss. The weight value used in the paper was 0.3 for each auxiliary loss.
# The total loss used by the inception net during training.
total_loss = real_loss + 0.3 * aux_loss_1 + 0.3 * aux_loss_2

Needless to say, auxiliary loss is purely used for training purposes, and is ignored during inference.

Inception v2 and Inception v3 were presented in the same paper. The authors proposed a number of upgrades which increased the accuracy and reduced the computational complexity. Inception v2 explores the following:

Reduce the representational bottleneck. The intuition was that neural networks perform better when convolutions don't alter the dimensions of the input drastically. Reducing the dimensions too much may cause loss of information, known as a "representational bottleneck".

Using smart factorization methods, convolutions can be made more efficient in terms of computational complexity.

Factorize each 5x5 convolution into two 3x3 convolution operations to improve computational speed. Although this may seem counterintuitive, a 5x5 convolution is 2.78 times more expensive than a 3x3 convolution. So stacking two 3x3 convolutions in fact leads to a boost in performance. This is illustrated in the below image.

Moreover, they factorize convolutions of filter size nxn into a combination of 1xn and nx1 convolutions. For example, a 3x3 convolution is equivalent to first performing a 1x3 convolution, and then performing a 3x1 convolution on its output. They found this method to be 33% cheaper than the single 3x3 convolution. This is illustrated in the below image.

The filter banks in the module were expanded (made wider instead of deeper) to remove the representational bottleneck. If the module was made deeper instead, there would be excessive reduction in dimensions, and hence loss of information. This is illustrated in the below image.

The above three principles were used to build three different types of inception modules (Let's call them modules A, B and C in the order they were introduced.
These names are introduced for clarity, and not the official names). The architecture is as follows:

The authors noted that the auxiliary classifiers didn't contribute much until near the end of the training process, when accuracies were nearing saturation. They argued that they function as regularizers, especially if they have BatchNorm or Dropout operations. Possibilities to improve on Inception v2 without drastically changing the modules were to be investigated.

Inception v3 incorporated all of the above upgrades stated for Inception v2, and in addition used the following:

RMSProp optimizer.
Factorized 7x7 convolutions.
BatchNorm in the auxiliary classifiers.
Label smoothing (a type of regularizing component added to the loss formula that prevents the network from becoming too confident about a class; prevents overfitting).

Inception v4 and Inception-ResNet were introduced in the same paper. For clarity, let us discuss them in separate sections.

Make the modules more uniform. The authors also noticed that some of the modules were more complicated than necessary. This can enable us to boost performance by adding more of these uniform modules.

The "stem" of Inception v4 was modified. The stem here refers to the initial set of operations performed before introducing the Inception blocks.

They had three main inception modules, named A, B and C (unlike Inception v2, these modules are in fact named A, B and C). They look very similar to their Inception v2 (or v3) counterparts.

Inception v4 introduced specialized "Reduction Blocks" which are used to change the width and height of the grid. The earlier versions didn't explicitly have reduction blocks, but the functionality was implemented.
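The cost figures quoted earlier in the factorization discussion (Inception v2) can be verified with a back-of-the-envelope parameter count; this sketch counts kernel weights only and ignores channels and biases:

```python
# A 5x5 kernel vs. two stacked 3x3 kernels (same receptive field)
cost_5x5 = 5 * 5             # 25 weights
cost_two_3x3 = 2 * (3 * 3)   # 18 weights
print(round(cost_5x5 / (3 * 3), 2))  # a 5x5 conv costs ~2.78x a single 3x3

# A 3x3 kernel vs. a 1x3 followed by a 3x1 (asymmetric factorization)
cost_3x3 = 3 * 3             # 9 weights
cost_asym = 1 * 3 + 3 * 1    # 6 weights
print(round(1 - cost_asym / cost_3x3, 2))  # ~0.33, i.e. about 33% cheaper
```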
Inspired by the performance of the ResNet, a hybrid inception module was proposed. There are two sub-versions of Inception-ResNet, namely v1 and v2. Before we check out the salient features, let us look at the minor differences between these two sub-versions. Inception-ResNet v1 has a computational cost that is similar to that of Inception v3. Inception-ResNet v2 has a computational cost that is similar to that of Inception v4. They have different stems, as illustrated in the Inception v4 section. Both sub-versions have the same structure for the modules A, B, C and the reduction blocks. The only difference is the hyper-parameter settings. In this section, we'll only focus on the structure. Refer to the paper for the exact hyper-parameter settings (the images are of Inception-ResNet v1). Introduce residual connections that add the output of the convolution operation of the inception module to the input. For residual addition to work, the input and output after convolution must have the same dimensions. Hence, we use 1x1 convolutions after the original convolutions to match the depth sizes (depth is increased after convolution). The pooling operation inside the main inception modules was replaced in favor of the residual connections. However, you can still find those operations in the reduction blocks. Reduction block A is the same as that of Inception v4. Networks with residual units deeper in the architecture caused the network to "die" if the number of filters exceeded 1000. Hence, to increase stability, the authors scaled the residual activations by a value around 0.1 to 0.3. The original paper didn't use BatchNorm after summation in order to train the model on a single GPU (to fit the entire model on a single GPU). It was found that Inception-ResNet models were able to achieve higher accuracies at lower epoch counts. The final network layout for both Inception v4 and Inception-ResNet is as follows: Thank you for reading this article!
Hope it gave you some clarity about the Inception Net. Hit the clap button if it did! If you have any questions, you could hit me up on social media or send me an email (bharathrajn98@gmail.com).
How to display column values as CSV in MySQL?
To display column values as CSV, use GROUP_CONCAT(). Let us first create a table −

mysql> create table DemoTable786 (
   StudentId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
   StudentName varchar(100)
) AUTO_INCREMENT=101;
Query OK, 0 rows affected (0.70 sec)

Insert some records in the table using insert command −

mysql> insert into DemoTable786(StudentName) values('Chris');
Query OK, 1 row affected (0.13 sec)
mysql> insert into DemoTable786(StudentName) values('Robert');
Query OK, 1 row affected (0.24 sec)
mysql> insert into DemoTable786(StudentName) values('Mike');
Query OK, 1 row affected (0.15 sec)
mysql> insert into DemoTable786(StudentName) values('Sam');
Query OK, 1 row affected (0.12 sec)

Display all records from the table using select statement −

mysql> select * from DemoTable786;

This will produce the following output −

+-----------+-------------+
| StudentId | StudentName |
+-----------+-------------+
| 101       | Chris       |
| 102       | Robert      |
| 103       | Mike        |
| 104       | Sam         |
+-----------+-------------+
4 rows in set (0.00 sec)

Following is the query to select columns as CSV in MySQL −

mysql> select group_concat(StudentId), group_concat(StudentName) from DemoTable786;

This will produce the following output −

+-------------------------+---------------------------+
| group_concat(StudentId) | group_concat(StudentName) |
+-------------------------+---------------------------+
| 101,102,103,104         | Chris,Robert,Mike,Sam     |
+-------------------------+---------------------------+
1 row in set (0.00 sec)
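If you want to experiment with the idea without a MySQL server, SQLite's built-in group_concat() behaves the same way for this example. The sketch below mirrors the tutorial's table and values, but note it runs against SQLite, not MySQL:

```python
import sqlite3

# In-memory stand-in for the MySQL table from the tutorial.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DemoTable786 (StudentId INTEGER PRIMARY KEY, StudentName TEXT)")
con.executemany(
    "INSERT INTO DemoTable786 (StudentId, StudentName) VALUES (?, ?)",
    [(101, "Chris"), (102, "Robert"), (103, "Mike"), (104, "Sam")],
)

# SQLite's group_concat() mirrors MySQL's GROUP_CONCAT() for this query.
row = con.execute(
    "SELECT group_concat(StudentId), group_concat(StudentName) FROM DemoTable786"
).fetchone()
print(row)  # ('101,102,103,104', 'Chris,Robert,Mike,Sam')
```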
Time Series Modeling with ARIMA to Predict Future House Price | by Bonnie Ma | Towards Data Science
Time series used to be a topic that I tried to avoid when I was in graduate school because my peers shared that the classes were very theoretical and all the cases discussed in class were related to Finance, and unfortunately, I was not into Finance at that time. Thanks to my Data Science bootcamp, I had another chance to encounter time series and I found it very practical and useful in many contexts. This time, I used time series analysis and models to predict the 5 best zip codes to invest in Brooklyn, where my husband and I were looking to buy an apartment. In this blog post, I will share the basic knowledge you need to know about time series and how I predicted the house price using ARIMA models step by step. First of all, I want to lay down the structure of a time series modeling project. I will explain each step in detail in later sections.

Step 1: Data processing
Step 2: Data exploration and visualization
Step 3: Decide on model approach and KPI
Step 4: Develop the model on the training set and validate using the test set
Step 5: Fine-tune the model and make the prediction

Keep in mind that a time series is just a sequence of well-defined data points measured at consistent time intervals over a period of time. Time series analysis helps us understand the hidden patterns, meaningful characteristics and statistics of the data. Dealing with the dataset from Zillow.com, I had to first select the city, reshape the data frame from wide to long format using pd.melt, and then transform it to time series data.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

dfm.set_index('Month', inplace = True)

With visualization, we can identify underlying trends and stories in the data. Let's take a look at the changes in housing prices in Brooklyn over time. An overall upward trend can be observed from the years 1996 to 2007, followed by a fluctuation from 2008 to mid-2010. Starting in 2011, the house price became more stable and continued to rise again.
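As an illustration of the wide-to-long reshape (with made-up numbers, since the real Zillow frame is much larger), pd.melt turns one column per month into one row per (zip code, month) pair, which can then get a proper DatetimeIndex:

```python
import pandas as pd

# Tiny stand-in for the Zillow frame: one row per zip code, one column per month.
wide = pd.DataFrame({
    "RegionName": [11220, 11205],
    "2017-01": [700000, 900000],
    "2017-02": [705000, 910000],
})

# One row per (zip code, month) pair, then a time index.
dfm = pd.melt(wide, id_vars=["RegionName"], var_name="Month", value_name="MeanValue")
dfm["Month"] = pd.to_datetime(dfm["Month"])
dfm.set_index("Month", inplace=True)
print(dfm.shape)  # (4, 2): 2 zip codes x 2 months
```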
We can say pretty confidently that the house market crash in 2008 was the cause of this fluctuation and we do want to skip this period to have a more accurate prediction of the future. There are three important characteristics we care about: Stationarity, Seasonality, and Autocorrelation. Most time series models work on the assumption that the time series is stationary, which means that its statistical properties such as mean, variance, etc. remain constant over time. Ideally, we want to have a stationary time series for modeling. The Dickey-Fuller test can be used to test if a time series is stationary or not. Note that the null hypothesis is: the time series is not stationary.

from statsmodels.tsa.stattools import adfuller

dftest = adfuller(ts)

Seasonality refers to the periodic changes and patterns that repeat within a fixed period. Sometimes, seasonality can be combined with an increasing or decreasing trend. Seasonal decomposition is always helpful to detect seasonality, trend and any noise in the dataset. Take the Brooklyn housing data for example:

from statsmodels.tsa.seasonal import seasonal_decompose

decomposition = seasonal_decompose(month_avg, model='additive')
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid

# Plot gathered statistics
plt.figure(figsize=(12,8))
plt.subplot(411)
plt.plot(month_avg, label='Original', color='blue')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend', color='blue')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal, label='Seasonality', color='blue')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals', color='blue')
plt.legend(loc='best')
plt.tight_layout()

We can clearly see from the chart above that there is an upward trend along with yearly seasonality. The next step is to test our residuals with the Dickey-Fuller test. Autocorrelation helps us study how each time series observation is related to its recent (or not so recent) past.
It is easy to see that tomorrow's house price is very likely to be related to today's house price. ACF (Autocorrelation Function) and PACF (Partial Autocorrelation Function) are two powerful tools. ACF represents the autocorrelation of a time series as a function of the time lag. PACF seeks to remove the indirect correlations that exist in the autocorrelation between an observation and an observation at a prior time step. Check the ACF and PACF of the monthly average price of all zip codes.

from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from matplotlib.pylab import rcParams

rcParams['figure.figsize'] = 7, 5
plot_acf(month_avg); plt.xlim(0,24); plt.show()
plot_pacf(month_avg); plt.xlim(0,24); plt.ylim(-1,1); plt.show()

The ACF shows the time series has autocorrelation with the previous time period; however, the PACF does not show significant partial correlation. If we subtract the value from 3 months ago from the current month's value, in other words take a lag-3 difference, we can see in the PACF plot a negative partial autocorrelation at lag=2, which means the differencing is significant in the time series data.

plot_acf(month_avg.diff(periods=3).bfill()); plt.xlim(0,24); plt.show()
plot_pacf(month_avg.diff(periods=3).bfill()); plt.xlim(0,24); plt.ylim(-1,1); plt.show()

Since our dataset is not stationary, and there is a seasonal component, it would be reasonable to use a SARIMA model — Seasonal ARIMA (Seasonal Autoregressive Integrated Moving Averages with exogenous regressors). Without diving too deep into the methodology, I will focus on the important parameters now.
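Before moving on to the model parameters, it helps to see that the autocorrelation that plot_acf draws at each lag boils down to a normalised lagged dot product. A bare-bones version on a toy series (a biased estimator, shown for intuition only):

```python
import numpy as np

def acf(x, lag):
    # Autocovariance at `lag` divided by the lag-0 variance (biased estimator).
    x = np.asarray(x, dtype=float) - np.mean(x)
    if lag == 0:
        return 1.0
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

series = np.sin(np.linspace(0, 8 * np.pi, 200))  # smooth, strongly autocorrelated
print(acf(series, 1))   # close to 1: neighbouring points move together
print(acf(series, 25))  # strongly negative: half a period apart, points move oppositely
```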
Per the formula SARIMA(p,d,q)x(P,D,Q,s), the parameters for these types of models are as follows:

p and seasonal P: indicate the number of autoregressive terms (lags of the stationarized series)
d and seasonal D: indicate the differencing that must be done to stationarize the series
q and seasonal Q: indicate the number of moving average terms (lags of the forecast errors)
s: indicates the periodicity of the time series (4 for quarterly, 12 for yearly)

KPI: use AIC to select the best set of parameters. Since there are 29 zip codes in the Zillow dataset for Brooklyn, I decided to build SARIMA models on 3 sample zip codes first and then iterate through all other zip codes. I first create a list of data frames where each data frame has the info of one zip code.

zip_dfs = []
zip_list = dfm.RegionName.unique()
for x in zip_list:
    zip_dfs.append(pd.DataFrame(dfm[dfm['RegionName']==x][['MeanValue']].copy()))

Then define p,d,q and P,D,Q,s to take any value between 0 and 2:

p = d = q = range(0,2)
# Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p,d,q))
# Generate all different combinations of seasonal p, d and q triplets
pdqs = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]

The SARIMA model:

ans = []
for df, name in zip(zip_dfs, zip_list):
    for para1 in pdq:
        for para2 in pdqs:
            try:
                mod = sm.tsa.statespace.SARIMAX(df,
                                                order=para1,
                                                seasonal_order=para2,
                                                enforce_stationarity=False,
                                                enforce_invertibility=False)
                output = mod.fit()
                ans.append([name, para1, para2, output.aic])
                print('Result for {}'.format(name) + ' ARIMA {} x {}12 : AIC Calculated = {}'.format(para1, para2, output.aic))
            except:
                continue

Then store all results in a data frame:

result = pd.DataFrame(ans, columns = ['name','pdq','pdqs','AIC'])

Sort by lowest AIC to find the best parameters for each zip code:

best_para = result.loc[result.groupby("name")["AIC"].idxmin()]

# Make predictions and compare with real values
summary_table = pd.DataFrame()
Zipcode = []
MSE_Value = []
models = []
for name, pdq, pdqs, df in zip(best_para['name'], best_para['pdq'], best_para['pdqs'], zip_dfs):
    ARIMA_MODEL = sm.tsa.SARIMAX(df,
                                 order=pdq,
                                 seasonal_order=pdqs,
                                 enforce_stationarity=False,
                                 enforce_invertibility=False)
    output = ARIMA_MODEL.fit()
    models.append(output)
    # Get dynamic predictions starting 2017-06-01
    pred_dynamic = output.get_prediction(start=pd.to_datetime('2017-06-01'),
                                         dynamic=True, full_results=True)
    pred_dynamic_conf = pred_dynamic.conf_int()
    zip_forecasted = pred_dynamic.predicted_mean
    zip_truth = df['2017-06-01':]['MeanValue']
    sqrt_mse = np.sqrt(((zip_forecasted - zip_truth)**2).mean())
    Zipcode.append(name)
    MSE_Value.append(sqrt_mse)

summary_table['Zipcode'] = Zipcode
summary_table['Sqrt_MSE'] = MSE_Value

The next step is to use the full data set to predict future values. I used 3 years as an example.

# Final model
forecast_table = pd.DataFrame()
current = []
forecast_3Yr = []
for zipcode, output, df in zip(Zipcode, models, zip_dfs):
    pred_3 = output.get_forecast(steps = 36)
    pred_conf_3 = pred_3.conf_int()
    forecast_3 = pred_3.predicted_mean.to_numpy()[-1]
    current.append(df['2018-04']['MeanValue'][0])
    forecast_3Yr.append(forecast_3)

forecast_table['Zipcode'] = Zipcode
forecast_table['Current Value'] = current
forecast_table['3 Years Value'] = forecast_3Yr
forecast_table['3Yr-ROI'] = (forecast_table['3 Years Value'] - forecast_table['Current Value'])/forecast_table['Current Value']

This is my final result:

11220: South Sunset Park (3Yr ROI: 17%-87%)
11205: Clinton Hill (3Yr ROI: 16%-78%)
11203: East Flatbush (3Yr ROI: 8%-78%)
11224: Coney Island (3Yr ROI: -0.5%-76%)
11217: Boerum Hill (3Yr ROI: 6%-61%)

This model is purely based on time series, so the predictions may not be very accurate because there are many other factors that impact the house price such as economics, interest rates, house market safety scores, etc. A linear model would be more ideal if we want to take other factors into consideration. Thanks for reading!
Let me know your thoughts.

Useful Resources:
Using Python and Auto ARIMA to Forecast Seasonal Time Series
Everything you need to know about Time Series
Dart Programming - do while Loop
The do...while loop is similar to the while loop except that the do...while loop doesn't evaluate the condition the first time the loop executes. However, the condition is evaluated for the subsequent iterations. In other words, the code block will be executed at least once in a do...while loop.

The following illustration shows the flowchart of the do...while loop −

Following is the syntax for the do-while loop.

do {
   Statement(s) to be executed;
} while (expression);

Note − Don't miss the semicolon used at the end of the do...while loop.

void main() {
   var n = 10;
   do {
      print(n);
      n--;
   } while (n >= 0);
}

The example prints the numbers from 10 down to 0, i.e. in reverse order. The following output is displayed on successful execution of the above code.

10
9
8
7
6
5
4
3
2
1
0
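For comparison, a language without a do-while statement has to emulate the run-at-least-once behaviour explicitly. A small Python equivalent of the Dart example above (illustrative only, not part of the original tutorial):

```python
# Emulating Dart's do...while: the body runs once before the condition
# is first checked, via a while-True loop with a trailing break.
n = 10
printed = []
while True:
    printed.append(n)
    n -= 1
    if not n >= 0:   # same condition as `while (n >= 0);`
        break

print(printed)  # counts down from 10 to 0
```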
Spring - Dependency Injection
Every Java-based application has a few objects that work together to present what the end-user sees as a working application. When writing a complex Java application, application classes should be as independent as possible of other Java classes to increase the possibility to reuse these classes and to test them independently of other classes while unit testing. Dependency Injection (or sometimes called wiring) helps in gluing these classes together and at the same time keeping them independent.

Consider you have an application which has a text editor component and you want to provide a spell check. Your standard code would look something like this −

public class TextEditor {
   private SpellChecker spellChecker;

   public TextEditor() {
      spellChecker = new SpellChecker();
   }
}

What we've done here is create a dependency between the TextEditor and the SpellChecker. In an inversion of control scenario, we would instead do something like this −

public class TextEditor {
   private SpellChecker spellChecker;

   public TextEditor(SpellChecker spellChecker) {
      this.spellChecker = spellChecker;
   }
}

Here, the TextEditor should not worry about the SpellChecker implementation. The SpellChecker will be implemented independently and will be provided to the TextEditor at the time of TextEditor instantiation. This entire procedure is controlled by the Spring Framework. Here, we have removed total control from the TextEditor and kept it somewhere else (i.e. XML configuration file) and the dependency (i.e. class SpellChecker) is being injected into the class TextEditor through a Class Constructor. Thus the flow of control has been "inverted" by Dependency Injection (DI) because you have effectively delegated dependencies to some external system. The second method of injecting dependency is through Setter Methods of the TextEditor class where we will create a SpellChecker instance. This instance will be used to call setter methods to initialize TextEditor's properties.
Thus, DI exists in two major variants and the following two sub-chapters will cover both of them with examples −

Constructor-based DI is accomplished when the container invokes a class constructor with a number of arguments, each representing a dependency on the other class.

Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or no-argument static factory method to instantiate your bean.

You can mix both, Constructor-based and Setter-based DI, but it is a good rule of thumb to use constructor arguments for mandatory dependencies and setters for optional dependencies.

The code is cleaner with the DI principle and decoupling is more effective when objects are provided with their dependencies. The object does not look up its dependencies and does not know the location or class of the dependencies; rather, everything is taken care of by the Spring Framework.
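The two variants are not Spring-specific; stripped of the container, they look like this in any language. An illustrative Python sketch, with the container's wiring done by hand:

```python
class SpellChecker:
    def check(self, text):
        return "checked: " + text

class TextEditor:
    # Constructor-based injection: the mandatory dependency arrives up front.
    def __init__(self, spell_checker):
        self.spell_checker = spell_checker

class ConfigurableTextEditor:
    # Setter-based injection: created empty, then wired via a setter afterwards.
    def __init__(self):
        self.spell_checker = None

    def set_spell_checker(self, spell_checker):
        self.spell_checker = spell_checker

# The "container" role, performed manually here.
editor = TextEditor(SpellChecker())
print(editor.spell_checker.check("hello"))

editor2 = ConfigurableTextEditor()
editor2.set_spell_checker(SpellChecker())
print(editor2.spell_checker.check("world"))
```

In Spring, the manual wiring at the bottom is what the container performs for you based on configuration.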
C++ Memory Allocation/Deallocation for Data Processing | by Debby Nirwan | Towards Data Science
Unless you are working on very resource-constrained embedded systems running RTOS or bare metal, you will almost certainly need to dynamically allocate memory to process your data. There are many methods to dynamically allocate memory in C++ such as using new and delete operators and their counterparts new[] and delete[], std::allocator, or C's malloc(). Regardless of the method, the system has to allocate blocks of memory contiguously. The C++ STL provides many convenient libraries such as containers which also internally allocate memory dynamically. Dynamic memory allocation in C++ happens everywhere. In this post, we will discuss how memory is managed in C++ so that we can use it more wisely. Keep in mind that this memory layout is a virtual memory layout for user-space applications. In systems like Linux, physical memory is broadly divided into kernel space and user space; for applications, we are talking about user space. Furthermore, each process in the system is assigned virtual memory which is typically larger than the available one. For instance, in a system with 4GB of memory, each process assumes that it has all of it available. This is the layout of our application's virtual memory. In this post, we are interested in the Heap segment, which is the segment that we use to dynamically allocate memory. Text/Code is where our code instructions are stored, data/BSS is where our global data (initialized/uninitialized) is stored, and the stack is used for the call stacks to manage function calls and local variables. Different OSs manage heap memory differently; for the purpose of this article, to provide an intuitive understanding of how it is managed, let's assume that we have a unix-like system such as Linux. In Linux, we can allocate/deallocate memory by adjusting the Program Break, which is the current heap limit. User-space applications can adjust it by using the system calls brk() and sbrk() included in unistd.h; see the man page for details.
Manually managing memory in this way isn't recommended as it is error-prone. The first level of abstraction we have is the memory allocation library provided by the C runtime, the malloc() family. The C standard library provides a more convenient way to allocate/deallocate memory compared to directly invoking system calls. It provides:

malloc(): allocates memory given its size
free(): deallocates previously allocated memory
realloc(): resizes previously allocated memory
calloc(): allocates memory for an array of objects

Using this is less error-prone because the application doesn't need to know about the current heap limit. All it has to do is request blocks of memory by passing in the size and, once it's done with the memory, ask to free it by calling free().

int *ptr = (int *) malloc(sizeof(int));
free(ptr);

The malloc() family of APIs uses the brk(), sbrk(), and mmap() system calls to manage memory. This is the first level of abstraction. Implementation details may vary from compiler to compiler and OS to OS, but here we will cover an overview of the memory management done by malloc(). Internally, malloc() manages memory by adding metadata to each memory block requested by the application. For the purposes of this article, let's assume that it has two pieces of information in its metadata:

Size
Allocation status (in use/free)

What happens when you call malloc(), realloc(), or calloc() is that it searches for a free area in memory that fits the size you requested. For example, the free blocks are shown in blue in the image above. If the size fits or is smaller, the blocks will be reused; otherwise, if free areas in the memory are still available in the system, malloc() will allocate new memory by invoking the brk(), sbrk(), or mmap() system calls. If no memory is available in the system, malloc() will fail and return NULL. When you want to allocate blocks of memory, what happens under the hood is a search.
There are various strategies such as:

First-fit: use the first encountered free block of memory that fits
Next-fit: like first-fit, but the search resumes from where the previous search left off instead of from the beginning
Best-fit: use the free block that fits best in terms of size

The size we request will not always match the available free blocks; most of the time only part of a free block will be used and the block will be split. When you free memory, it is not returned to the system. Only its metadata is changed to reflect its status, that it is now not in use. One thing that may happen when memory is deallocated is that free blocks coalesce. Adjacent free blocks can be coalesced to form a larger block which can be used for requests with a larger size. If currently we have a small free block in the middle, shown in the image below: And we free the last block: They will coalesce to form a larger block: As you may have guessed, memory reallocation, i.e. when we invoke realloc(), is just allocating memory + copying existing data to the newly allocated area. Memory Fragmentation is a condition in which small blocks of memory are allocated across the memory in between larger blocks. This condition causes the system to fail to allocate memory even though large areas may be unallocated. The image below illustrates the scenario: We allocate 3 blocks, 1 block, and 8 blocks: We then free 3 blocks and 8 blocks: Now, when we want to allocate 9 blocks, it fails even though we have 11 free blocks in total, because they are fragmented. This situation is likely to happen when your program allocates/deallocates small objects on the heap frequently during runtime. Now that we have seen the first level of abstraction in our system, we can see the next level of abstraction that C++ provides. In C++, when we want to allocate memory from the free-store (or we may call it the heap) we use the new operator.

int *ptr = new int;

and to deallocate we use the delete operator.
delete ptr;

The difference compared to malloc() in the C Programming Language is that the new operator does two things:

Allocate memory (possibly by calling malloc())
Construct the object by calling its constructor

Similarly, the delete operator does two things:

Destroy the object by calling its destructor
Deallocate memory (possibly by calling free())

The following code shows it: To allocate memory and construct an array of objects we use:

MyData *ptr = new MyData[3]{1, 2, 3};

and to destroy and deallocate, we use:

delete[] ptr;

If we have allocated blocks of memory and only want to construct an object, we can use what is called placement new.

typename std::aligned_storage<sizeof(MyData), alignof(MyData)>::type data;
MyData *ptr = new(&data) MyData(2);

The first line will allocate the memory needed to store the MyData object on the stack (assuming these lines of code are in a function), and the second line will construct the object at that location without allocating new memory. Be careful here because we should not call delete on ptr. As you know, STL containers such as std::vector, std::deque, etc. allocate memory dynamically internally. They allow you to use your own memory allocator object, but as is often the case we use the default one, std::allocator. The reason they don't use the new and delete operators is that they want to allocate memory and create objects separately. For instance, std::vector dynamically increases its memory by doubling its current size to optimize speed; see my other post for more detailed information. We can think that under the hood std::allocator would call malloc(), though it might do some other things such as preallocating memory for optimization. Everything we have seen so far doesn't solve the manual memory management problem where the responsibility for allocating and deallocating memory lies with the developers.
As we know, manual memory management can cause problems like:

Memory Leak, when we forget to free the memory
Crash/Undefined Behavior, when we try to free memory that has been freed (double free)
Crash/Undefined Behavior, when we try to access blocks of memory that we have freed

C++ does not have an implicit Garbage Collector to manage memory automatically due to performance vs. convenience tradeoffs. But it does have an explicit form of garbage collection in smart pointers, which automatically allocate memory and deallocate it when the object goes out of scope or no other object references it. They are:

std::unique_ptr: an object that manages a pointer of another type and that cannot be copied (unique)
std::shared_ptr: similar to unique_ptr but can share ownership by using a reference count
std::weak_ptr: a non-owning object; it has a reference to the pointer but doesn't own it

Most of the time, you should use smart pointers and forget about when to free the memory. We can discuss the details in another post in the future. There are other things that we may want to consider in memory management, such as reducing the overhead when allocating/deallocating memory using a Memory Pool, or in some situations paying special attention to small object allocation. We'll discuss them in future posts. There are multiple levels of abstraction for how we can allocate/deallocate memory in C++. Knowing them is important because we then know not only which level we should use, but also what issues may arise in our applications. The image below illustrates the APIs at different levels.
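The first-fit search, block splitting, and fragmentation failure described earlier can be reproduced with a toy free-list. This is a Python simulation for intuition only; real allocators track metadata in the blocks themselves and are far more sophisticated:

```python
# Each block is (size_in_units, in_use). First-fit scans left to right for the
# first free block that is large enough, splitting off the leftover as free.
def first_fit(blocks, size):
    for i, (blk_size, in_use) in enumerate(blocks):
        if not in_use and blk_size >= size:
            if blk_size > size:
                blocks.insert(i + 1, (blk_size - size, False))  # leftover stays free
            blocks[i] = (size, True)
            return i
    return None  # no single free block fits

# The fragmentation scenario from the article: 3 used, 3 free, 1 used, 8 free.
heap = [(3, True), (3, False), (1, True), (8, False)]
print(first_fit(heap, 9))  # None: 11 units are free in total, but fragmented
print(first_fit(heap, 2))  # 1: reuses the 3-unit free block and splits it
print(heap)
```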
Data Augmentation in Medical Images | by Cody Glickman, PhD | Towards Data Science
The popularization of machine learning has changed our world in wonderful ways. Some notable applications of machine learning allow us to do the previously unthinkable, like determining if an image is a hot dog or not a hot dog. Developing image recognition and classification applications has been streamlined in the last few years with the release of open source neural network frameworks like TensorFlow and PyTorch. Usage of these neural network frameworks is predicated on the availability of labeled training data, which has become more accessible within cloud infrastructures. Neural networks require large amounts of data to properly weight the functions between layers. However, in fields like medical imaging, large amounts of labeled training data are not always available. For those interested in medical imaging data, a great resource can be found at Giorgos Sfikas' GitHub. A great resource for a general overview of data augmentation techniques and tools can be found on Neptune.ai. How can you effectively train a neural network to classify medical images with limited training data? One answer is to augment the labeled data you already have and feed the transformed images into your model. Augmentation serves two purposes. First, additional labeled training data from augmentation in theory will improve your image classification model accuracy [WARNING!!! can lead to overfitting]. Second, the transformations will allow the model to train on orientation variations, possibly providing the model flexibility when encountering subtle variation shifts in testing or real world data. Does it actually work? Below is the accuracy of a model trained both with and without data augmentation. I will go into more details about these results later in the article. A decent improvement on a small training set. I used only 2GB of 40GB of total data to train the model.
Data augmentation reminds me of semi-supervised learning in that you are creating new labeled data to train a model. Data augmentation is also similar to oversampling techniques. For those interested in learning more about semi-supervised methods, check out the article below by Andre Ye. Data augmentation is most commonly applied to images. There exist two themes of data augmentation. The first is image transformation and the second is synthetic image creation. For the purpose of this article, I will focus primarily on image transformations with an application in medical imaging using Python. Parts of the code used in this demo are adapted from the AI for Medical Diagnosis course by deeplearning.ai. The code repository can be found on GitHub and the data used for the modeling can be obtained from the NIH Clinical Center Chest X-Ray database. Image manipulation in Python can be performed with multiple libraries. PIL and Augmentor are two examples of libraries that can operate directly on images. Augmentor also includes a pipelining function to operate over several images at once. For the purposes of this article, I utilize ImageDataGenerator, a part of keras_preprocessing. Types of image augmentations include rotation, cropping, zooming, color range changes, grayscaling, and flipping. Augmentor also includes a random noise subsection creator for object detection models. When performing any type of data augmentation, it is important to keep in mind the output of your model and whether augmentation would affect the resulting classification. For example, in X-ray data the heart is typically on the right of the image; however, the image below shows a horizontal flip augmentation inadvertently creates a medical condition called situs inversus. For the purposes of this article, I used three levels of data augmentation. First, I ran a model without any augmented images. Next, I used a basic color normalizing augmentation.
Finally, I created a model using complex augmentations like zooming, rotating, and cropping images, as shown in the example below. The full code can be found on the article GitHub.

The data for this tutorial can be found from the NIH Clinical Center Chest X-Ray database. In this example, I only utilize the data from images_001.tar.gz, which unzips to about 5K images (~2GB). I also downloaded the image labels as Data_Entry_2017_v2020.csv.

The libraries used to perform data augmentation require keras and keras-preprocessing. I installed these packages using conda.

### Augmentation
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img

### Visuals
import matplotlib.pyplot as plt
import pandas as pd

### Modeling
from tensorflow.keras.applications.densenet import DenseNet121
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from keras.models import load_model
from keras import backend as K

When creating the models, I ran into the following error:

AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'

Solution: Add tensorflow before the keras import call, as seen below:

from tensorflow.keras.applications.densenet import DenseNet121
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

To assign labels to the x-ray images, I needed to binarize the condition column in the metadata. There are 15 unique conditions in this study:

['Cardiomegaly', 'Emphysema', 'Effusion', 'Hernia', 'Infiltration', 'Mass', 'Nodule', 'Atelectasis', 'Pneumothorax', 'Pleural_Thickening', 'Pneumonia', 'Fibrosis', 'Edema', 'Consolidation', 'No Finding']

Patients can have more than one condition in an x-ray. I used scikit-learn to munge the data into the appropriate format with binary values for the 14 conditions, excluding the 'No Finding' category.
from sklearn.preprocessing import MultiLabelBinarizer

### Binarise labels
mlb = MultiLabelBinarizer()
expandedLabelData = mlb.fit_transform(df["labels"])
labelClasses = mlb.classes_

### Create a DataFrame from our output
expandedLabels = pd.DataFrame(expandedLabelData, columns=labelClasses)
expandedLabels['Images'] = df['Image Index']
expandedLabels['ID'] = df['Patient ID']

I added the paths to the corresponding x-ray image as a new column in the multicolumn binarized dataframe. Next, to test the modeling performance, I split the data into training (80%) and testing (20%) groups. The figure below shows the frequency of the classes in the training dataset.

ImageDataGenerator is capable of processing images into a generator object to avoid loading all the image transformations into memory. ImageDataGenerator is also able to create a generator directly from a pandas dataframe. I built the generator with the code below:

def get_train_generator(df, image_dir, x_col, y_cols, shuffle=True, batch_size=8,
                        seed=1, target_w=320, target_h=320):
    ### Perform data augmentation here
    image_generator = ImageDataGenerator(
        rotation_range=5,
        shear_range=0.02,
        zoom_range=0.02,
        samplewise_center=True,
        samplewise_std_normalization=True)

    ### Create the image generator
    generator = image_generator.flow_from_dataframe(
        dataframe=df,
        directory=image_dir,
        x_col=x_col,
        y_col=y_cols,
        class_mode="raw",
        batch_size=batch_size,
        shuffle=shuffle,
        seed=seed,
        target_size=(target_w, target_h))
    return generator

To change the amount of augmentation, change the value assigned to the image_generator by adjusting the variables called within ImageDataGenerator. To call this generator, use the following lines:

IMAGE_DIR = "images/"
train_generator = get_train_generator(training, IMAGE_DIR, "Images", labels)

I also built a generator for the testing data. I used a DenseNet121 architecture with weights from imagenet to pre-train the model.
### Pre-trained model
base_model = DenseNet121(weights='imagenet', include_top=False)
x = base_model.output

### Add spatial average pooling and logistic layer
x = GlobalAveragePooling2D()(x)
predictions = Dense(len(labels), activation="sigmoid")(x)
model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')

### Build model and predict
model.fit(train_generator, validation_data=valid_generator,
          steps_per_epoch=100, validation_steps=25, epochs=10)
predicted_vals = model.predict(valid_generator, steps=len(valid_generator))

The model predictions were visualized using AUC curves. The AUC values for each iteration were saved into the table below. I created an AUC curve for each condition and augmentation status.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def get_roc_curve(labels, predicted_vals, generator):
    auc_roc_vals = []
    for i in range(len(labels)):
        try:
            gt = generator.labels[:, i]
            pred = predicted_vals[:, i]
            auc_roc = roc_auc_score(gt, pred)
            auc_roc_vals.append(auc_roc)
            fpr_rf, tpr_rf, _ = roc_curve(gt, pred)
            plt.figure(1, figsize=(10, 10))
            plt.plot([0, 1], [0, 1], 'k--')
            plt.plot(fpr_rf, tpr_rf,
                     label=labels[i] + " (" + str(round(auc_roc, 3)) + ")")
            plt.xlabel('False positive rate')
            plt.ylabel('True positive rate')
            plt.title('ROC curve')
            plt.legend(loc='best')
        except:
            print(
                f"Error in generating ROC curve for {labels[i]}. "
                f"Dataset lacks enough examples."
            )
    plt.show()
    return auc_roc_vals

auc_rocs = get_roc_curve(labels, predicted_vals, valid_generator)

The table summarizing the performance of the models using augmentation is shown below again:

In this article, I introduced the concept of data augmentation as well as demonstrated its relative performance improvement in a small multiclass recognition task. Data augmentation is a useful tool to expand the amount of available labeled data for deep learning models.
I described some types of data augmentation and introduced potential pitfalls to augmenting without considering the classification orientation. In this dataset, the complex augmentation performs poorly at identifying hernias in chest x-rays. Hernias are typically found in the tissue near the bottom of the abdomen. With the complex augmentation, I may be altering the model's ability to distinguish a hernia from the surrounding tissue due to the color adjustment or the rotation.

The modeling only utilizes a small subset of the total available data. The advantages of data augmentation may be more pronounced with more than 4000 training images (24000 in the complex augmentation). The code for this article can be found on GitHub. Again, for those interested in medical imaging datasets, a great resource can be found at Giorgos Sfikas' GitHub.

My name is Cody Glickman and I can be found on LinkedIn. Be sure to check out some of my other articles below:
Check if all the palindromic sub-strings are of odd length - GeeksforGeeks
14 Jul, 2021

Given a string 's', check if all of its palindromic sub-strings are of odd length or not. If yes then print "YES", or "NO" otherwise.

Examples:

Input: str = "geeksforgeeks"
Output: NO
Since "ee" is a palindromic sub-string of even length.

Input: str = "madamimadam"
Output: YES

Brute Force Approach: Simply iterate over each sub-string of 's' and check if it is a palindrome. If it is a palindrome, then it must be of odd length. Below is the implementation of the above approach:

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function to check if the string is palindrome
bool checkPalindrome(string s)
{
    for (int i = 0; i < s.length(); i++) {
        if (s[i] != s[s.length() - i - 1])
            return false;
    }
    return true;
}

// Function that checks whether all the
// palindromic sub-strings are of odd length.
bool CheckOdd(string s)
{
    int n = s.length();
    for (int i = 0; i < n; i++) {
        // Creating each substring
        string x = "";
        for (int j = i; j < n; j++) {
            x += s[j];

            // If the sub-string is of even length and
            // is a palindrome then, we return false
            if (x.length() % 2 == 0 && checkPalindrome(x) == true)
                return false;
        }
    }
    return true;
}

// Driver code
int main()
{
    string s = "geeksforgeeks";
    if (CheckOdd(s))
        cout << "YES";
    else
        cout << "NO";
}
// This code is contributed by Sahil_shelangia

// Java implementation of the approach
import java.util.*;

class GFG {

    // Function to check if the string is palindrome
    static boolean checkPalindrome(String s)
    {
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) != s.charAt(s.length() - i - 1))
                return false;
        }
        return true;
    }

    // Function that checks whether all the
    // palindromic sub-strings are of odd length.
    static boolean CheckOdd(String s)
    {
        int n = s.length();
        for (int i = 0; i < n; i++) {
            // Creating each substring
            String x = "";
            for (int j = i; j < n; j++) {
                x += s.charAt(j);

                // If the sub-string is of even length and
                // is a palindrome then, we return false
                if (x.length() % 2 == 0 && checkPalindrome(x) == true)
                    return false;
            }
        }
        return true;
    }

    // Driver code
    public static void main(String args[])
    {
        String s = "geeksforgeeks";
        if (CheckOdd(s))
            System.out.print("YES");
        else
            System.out.print("NO");
    }
}
// This code is contributed by Arnab Kundu

# Python implementation of the approach

# Function to check if the string is palindrome
def checkPalindrome(s):
    for i in range(len(s)):
        if s[i] != s[len(s) - i - 1]:
            return False
    return True

# Function that checks whether all the
# palindromic sub-strings are of odd length.
def CheckOdd(s):
    n = len(s)
    for i in range(n):
        # Creating each substring
        x = ""
        for j in range(i, n):
            x += s[j]

            # If the sub-string is of even length and
            # is a palindrome then, we return False
            if len(x) % 2 == 0 and checkPalindrome(x) == True:
                return False
    return True

# Driver code
s = "geeksforgeeks"
if CheckOdd(s):
    print("YES")
else:
    print("NO")

// C# implementation of the approach
using System;

public class GFG {

    // Function to check if the string is palindrome
    static bool checkPalindrome(String s)
    {
        for (int i = 0; i < s.Length; i++) {
            if (s[i] != s[(s.Length - i - 1)])
                return false;
        }
        return true;
    }

    // Function that checks whether all the
    // palindromic sub-strings are of odd length.
    static bool CheckOdd(String s)
    {
        int n = s.Length;
        for (int i = 0; i < n; i++) {
            // Creating each substring
            String x = "";
            for (int j = i; j < n; j++) {
                x += s[j];

                // If the sub-string is of even length and
                // is a palindrome then, we return false
                if (x.Length % 2 == 0 && checkPalindrome(x) == true)
                    return false;
            }
        }
        return true;
    }

    // Driver code
    public static void Main()
    {
        String s = "geeksforgeeks";
        if (CheckOdd(s))
            Console.Write("YES");
        else
            Console.Write("NO");
    }
}
// This code is contributed by 29AjayKumar

<?php
// PHP implementation of the approach

// Function to check if the string is palindrome
function checkPalindrome($s)
{
    for ($i = 0; $i < strlen($s); $i++) {
        if ($s[$i] != $s[strlen($s) - $i - 1])
            return false;
    }
    return true;
}

// Function that checks whether all the
// palindromic sub-strings are of odd length.
function CheckOdd($s)
{
    $n = strlen($s);
    for ($i = 0; $i < $n; $i++) {
        // Creating each substring
        $x = "";
        for ($j = $i; $j < $n; $j++) {
            $x = $x . $s[$j];

            // If the sub-string is of even length and
            // is a palindrome then, we return false
            if (strlen($x) % 2 == 0 && checkPalindrome($x) == true)
                return false;
        }
    }
    return true;
}

// Driver code
$s = "geeksforgeeks";
if (CheckOdd($s))
    echo "YES";
else
    echo "NO";

// This code is contributed by ita_c
?>

<script>
// JavaScript implementation of the approach

// Function to check if the string is palindrome
function checkPalindrome(s)
{
    for (let i = 0; i < s.length; i++) {
        if (s[i] != s[s.length - i - 1])
            return false;
    }
    return true;
}

// Function that checks whether all the
// palindromic sub-strings are of odd length.
function CheckOdd(s)
{
    let n = s.length;
    for (let i = 0; i < n; i++) {
        // Creating each substring
        let x = "";
        for (let j = i; j < n; j++) {
            x += s[j];

            // If the sub-string is of even length and
            // is a palindrome then, we return false
            if (x.length % 2 == 0 && checkPalindrome(x) == true)
                return false;
        }
    }
    return true;
}

// Driver code
let s = "geeksforgeeks";
if (CheckOdd(s))
    document.write("YES");
else
    document.write("NO");

// This code is contributed by avanitrachhadiya2155
</script>

Output:

NO

Efficient Approach: To check if all palindromic substrings of s have odd lengths, we can search for an even length palindromic substring of it. We know that every even length palindrome has at least two consecutive characters that are identical (e.g. cxxa, ee). Therefore, we can check two consecutive characters at a time to see if they are the same. If so, then s has an even length palindromic substring and hence the output will be NO; if we find no such pair, the answer will be YES. We can complete this checking after one string traversal.
Below is the implementation of the above approach:

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function that checks whether s contains
// an even length palindromic sub-string or not.
bool CheckEven(string s)
{
    for (int i = 1; i < s.size(); ++i) {
        if (s[i] == s[i - 1]) {
            return true;
        }
    }
    return false;
}

// Driver code
int main()
{
    string s = "geeksforgeeks";
    if (CheckEven(s) == false)
        cout << "YES";
    else
        cout << "NO";
}
// This code is contributed by Aditya Jaiswal

Output:

NO

Time Complexity: O(N)
Space Complexity: O(1)
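As an aside, the adjacent-character check lends itself to a very compact formulation in Python. The sketch below is illustrative and not from the original article; the helper name all_palindromes_odd is my own:

```python
# A string contains an even-length palindromic substring
# if and only if it has two identical adjacent characters,
# so all palindromic substrings are odd-length exactly when
# no adjacent pair matches.
def all_palindromes_odd(s):
    return not any(a == b for a, b in zip(s, s[1:]))

print("YES" if all_palindromes_odd("geeksforgeeks") else "NO")  # NO ("ee")
print("YES" if all_palindromes_odd("madamimadam") else "NO")    # YES
```

Like the CheckEven function above, this runs in a single O(N) pass over the string.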
Getting Started With Data Imputation Using Autoimpute | by Haider Waseem | Towards Data Science
A large majority of datasets in the real world contain missing data. This is an issue since most Python machine learning models only work with clean datasets. As a result, analysts need to figure out how to deal with the missing data before proceeding to the modeling step. Unfortunately, most data professionals are mainly focused on the modeling aspect and do not pay much attention to the missing values. They usually either just drop the rows with missing values or rely on simple data imputation (replacement) techniques such as mean/median imputation. Such techniques can negatively impact model performance. This is where the Autoimpute library comes in: it provides you a framework for the proper handling of missing data.

- Univariate imputation: Impute values using only the target variable itself, for example, mean imputation.
- Multivariate imputation: Impute values based on other variables, for example, using linear regression to estimate the missing values based on other variables.
- Single imputation: Impute any missing values within the dataset only once to create a single imputed dataset.
- Multiple imputation: Impute the same missing values within the dataset multiple times. This basically involves running the single imputation multiple times to get multiple imputed datasets (explained with a detailed example in the next section).
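To make the categories above concrete, here is a small illustrative sketch (not from the original article) contrasting single univariate mean imputation with a toy multiple-imputation loop; the column name and the number of imputations are arbitrary choices for the example:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"y": [1.0, 2.0, np.nan, 4.0, np.nan, 6.0]})

# Single, univariate imputation: every missing value gets the column mean.
single = df["y"].fillna(df["y"].mean())

# Toy multiple imputation: draw each missing value from the observed
# values several times, producing several completed datasets.
observed = df["y"].dropna().to_numpy()
imputed_sets = []
for _ in range(3):
    filled = df["y"].copy()
    filled[filled.isna()] = rng.choice(observed, size=filled.isna().sum())
    imputed_sets.append(filled)

# The statistic of interest (here, the mean) is computed on each
# completed dataset and then pooled (averaged) across them.
pooled_mean = np.mean([s.mean() for s in imputed_sets])
print(single.tolist(), pooled_mean)
```

Note how the single imputation always produces the same completed column, while each pass of the multiple-imputation loop can fill the gaps differently, which is exactly the variability Autoimpute exploits.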
Now let's demonstrate how to tackle the issue of missingness using the Autoimpute library. This library provides a framework for handling missing data from the exploration phase up until the modeling phase. The image below shows a basic flowchart of how this process works on regression using multiple imputation.

In the above image, the raw dataset is imputed three times to create three new datasets, each one having its own new imputed values. Separate regressions are run on each of the new datasets and the parameters obtained from these regressions are pooled to form a single model. This process can be generalized to other values of 'n' (the number of imputed datasets) and various other models.

In order to understand one major advantage of obtaining multiple datasets, we must keep in mind that the missing values are actually unknown and we are not looking to obtain the exact point estimates for them. Instead, we are trying to capture the fact that we do not know the true value and that the value could vary. This technique of having multiple imputed datasets containing different values helps in capturing this variability.

We'll start off by importing the required libraries.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm, binom
import seaborn as sns
from autoimpute.utils import md_pattern, proportions
from autoimpute.visuals import plot_md_locations, plot_md_percent
from autoimpute.visuals import plot_imp_dists, plot_imp_boxplots
from autoimpute.visuals import plot_imp_swarm
from autoimpute.imputations import MultipleImputer

The complete code for this article can be downloaded from this repository: https://github.com/haiderwaseem12/Autoimpute

For demonstration purposes, we create a dummy dataset with 1000 observations. The dataset contains two variables: the predictor 'x' and the response 'y'. Forty percent of the observations in 'y' are randomly replaced by missing values, while 'x' is fully observed.
The correlation between 'x' and 'y' is approximately 0.8. A scatter plot of the data is shown below.

As a starting point, it is important to examine the missingness of the data in order to extract patterns from it. This helps us in understanding the missingness and choosing the appropriate imputation technique. The visualization techniques provided by the Autoimpute library for this task are demonstrated below.

The results of the function plot_md_percent(df) are shown below. This can be used to visualize the percentage of missingness for each feature in the dataset. In our case, we can see that 'x' is completely filled while 'y' is 40% missing.

Another function, plot_md_locations(df), helps us visualize the locations of missing data within the DataFrame by showing missing rows as white bars. This plot can come in extremely handy while trying to find patterns in the missingness. Since we removed values randomly from the 'y' column, our data has no pattern to the missingness. We can see that the values of column 'y' are missing randomly over all values of 'x'. However, in some cases, this plot could help us make sense of the missingness. For example, if we saw more white bars near the bottom, we would know that the probability of missingness increases as the value of 'x' increases.

Once we have explored the missingness, we can proceed to the imputation stage. The function MultipleImputer provides us with multiple imputations for our dataset. This function can be used in an extremely simple way and performs reasonably well, even with its default arguments.

imputer = MultipleImputer()              # initialize the imputer
imputations = imputer.fit_transform(df)  # obtain imputations

However, the function is extremely flexible and can be customized in a number of ways. Some of the commonly used arguments are discussed below:

- n: The number of imputations (the number of new imputed datasets to be created).
- strategy: The imputation method can be specified using this argument. The function provides us with various imputation methods ranging from simple univariate techniques such as mean imputation to other more advanced multivariate ones such as Predictive Mean Matching. If no strategy is specified, the default method is applied; the default depends on the column data type.
- predictors: This argument can be used to set which columns are to be used for the imputation of specific columns. If no predictor is specified, all columns are used, which is the default option.
- imp_kwgs: This argument can be used to specify any further parameters that might be needed to customize a certain strategy.

An example of the usage of MultipleImputer and some of its arguments is given below. Let's assume we have a dataset with four columns: gender, salary, education, and age.

MultipleImputer(
    n=10,
    strategy={'salary': 'pmm', 'gender': 'binary logistic'},
    predictors={'salary': 'all', 'gender': ['salary', 'age']},
    imp_kwgs={'pmm': {'fill_value': 'random', 'neighbors': 5}}
)

This function call will create ten imputations. Salary and gender are to be imputed using predictive mean matching and binary logistic regression, respectively.
To impute salary, all columns will be used; whereas for gender, only the salary and age variables will be utilized. Lastly, the PMM strategy will use a random fill value and the number of neighbors will be set to five.

Note: Autoimpute follows the same API as scikit-learn, which makes the code familiar to a lot of Python users.

We start off with the most basic type of imputation: mean imputation. Since mean imputation is very commonly used, we implement this first and then compare it with another technique provided by the Autoimpute library.

mi_mean = MultipleImputer(n=5, strategy="mean", seed=101)
imp_mean = mi_mean.fit_transform(df)

Autoimpute also provides us with some visualization techniques to see how the imputed values have affected our dataset. We will use these plots to compare the performance of the different techniques.

plot_imp_swarm(d=imp_mean, mi=mi_mean, imp_col="y",
               title="Imputed vs Observed Dists after Mean Imputation")

From the swarm plot above, we can see that all the imputed values are exactly the same. Since mean imputation replaces each missing value by the column mean, and the mean remains the same each time a column is imputed, this technique gives us the exact same results no matter how many times we impute a column. As a result, imputing by mean multiple times does not introduce any variance to the imputations.

This observation is further supported by the two plots below, as we can clearly see that the distributions for all five imputations are also exactly the same. Moreover, the plots below also show how mean imputation can alter the spread of the data. We still have the same mean as the original data, but the data is now more tightly centered around the mean, leading to lower variance. This may lead to the model being unreasonably confident in making predictions around the mean value.
plot_imp_dists(d=imp_mean, mi=mi_mean, imp_col="y",
               separate_observed=False, hist_observed=True,
               title="Distributions after Mean Imputation")

plot_imp_boxplots(d=imp_mean, mi=mi_mean, imp_col="y",
                  title="Boxplots after Mean Imputation")

Now we take a look at a more advanced technique: Predictive Mean Matching (PMM) imputation. This method uses a Bayesian regression model to predict values for the imputations. The observed values that are closest to the prediction are then found, and one of those values is then randomly chosen as the imputation.

Note: Having a detailed understanding of how the PMM algorithm works is not required for the following demonstration. However, if you are unfamiliar with the algorithm and wish to understand it, please take a look at the following link: https://statisticalhorizons.com/predictive-mean-matching.

mi_pmm = MultipleImputer(n=5, strategy="pmm", seed=101)
imp_pmm = mi_pmm.fit_transform(df)

We will use the same visualization techniques used above in order to demonstrate the performance of PMM imputation.

plot_imp_swarm(d=imp_pmm, mi=mi_pmm, imp_col="y",
               title="Imputed vs Observed Dists after PMM Imputation")

The first thing that we notice when we look at the swarm plot is that the imputed values are spread over the actual values of 'y'. This helps preserve the distribution, and we can clearly see from the plots below that the distributions of the imputed columns are similar to the actual column. Even though the distributions slightly vary across the different imputations, the mean and variance are both similar to the actual column.

The second observation that we make from the plot above is that the imputed values vary across the different columns. This observation is further supported by the two plots below, which clearly show us that all five imputed columns have slightly different distributions. This helps us in capturing the variability of the missing values.
plot_imp_dists(d=imp_pmm, mi=mi_pmm, imp_col="y",
               separate_observed=False, hist_observed=True,
               title="Distributions after PMM Imputation")

plot_imp_boxplots(d=imp_pmm, mi=mi_pmm, imp_col="y",
                  title="Boxplots after PMM Imputation")

In our demonstration, PMM imputation performed better than mean imputation. This is the case for most datasets. However, PMM imputation is not always the best technique, since it has its own drawbacks; for instance, it can be computationally expensive. In scenarios where the number of missing values is small, even simple techniques like mean imputation could give optimal results.

As with everything in machine learning, there is no single optimal technique. The optimal technique varies from case to case depending on the characteristics of the given dataset. Finding the optimal technique depends on a combination of domain knowledge and experimentation. The Autoimpute library provides us with an easy way to experiment with various imputation strategies and find out the one that works best.

The complete code for this article can be downloaded from this repository: https://github.com/haiderwaseem12/Autoimpute
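The PMM idea described above (regress, predict, then borrow a real observed value from the nearest-prediction neighbors) can be sketched in a few lines of NumPy. This is an illustrative simplification on hypothetical toy data, using ordinary least squares instead of a full Bayesian draw; it is not Autoimpute's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: y depends linearly on x; some y values are missing.
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)
missing = rng.random(200) < 0.4
y_obs, x_obs, x_mis = y[~missing], x[~missing], x[missing]

# 1. Fit a regression of y on x using the observed rows
#    (OLS here; Autoimpute uses a Bayesian model instead).
X = np.column_stack([np.ones_like(x_obs), x_obs])
beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)

# 2. Predict y for both observed and missing rows.
pred_obs = X @ beta
pred_mis = np.column_stack([np.ones_like(x_mis), x_mis]) @ beta

# 3. For each missing row, find the k observed rows whose predictions
#    are closest, and copy one of their *actual* y values at random.
k = 5
imputed = np.empty_like(pred_mis)
for i, p in enumerate(pred_mis):
    neighbors = np.argsort(np.abs(pred_obs - p))[:k]
    imputed[i] = y_obs[rng.choice(neighbors)]

print(imputed[:5])
```

Because step 3 copies actual observed values rather than raw regression predictions, the imputations stay within the range of the real data, which is why the PMM swarm plots above track the observed distribution so closely.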
Apache Commons DBUtils - Quick Guide
Apache Commons DbUtils is a quite small library of classes designed to make JDBC call processing easier, without resource leaks, and with cleaner code. As JDBC resource cleanup is quite tedious and error prone, the DBUtils classes help to abstract out the boilerplate code, so that developers can focus on database related operations only.

The advantages of using Apache Commons DBUtils are explained below −

No Resource Leakage − DBUtils classes ensure that no resource leakage happens.

Clean & Clear Code − DBUtils classes let you write clean and clear code for database operations without any need to write cleanup or resource-leak-prevention code.

Bean Mapping − DBUtils classes support automatically populating JavaBeans from a result set.

The design principles of Apache Commons DBUtils are as follows −

Small − The DBUtils library is very small in size with few classes, so it is easy to understand and use.

Transparent − The DBUtils library does not do much work behind the scenes. It simply takes a query and executes it.

Fast − DBUtils library classes do not create many background objects and are quite fast in database operation executions.

To start developing with DBUtils, you should set up your DBUtils environment by following the steps shown below.
We assume that you are working on a Windows platform. Install J2SE Development Kit 5.0 (JDK 5.0) from the Java official site and make sure the following environment variables are set as described below −

JAVA_HOME − This environment variable should point to the directory where you installed the JDK, e.g. C:\Program Files\Java\jdk1.5.0.

CLASSPATH − This environment variable should have appropriate paths set, e.g. C:\Program Files\Java\jdk1.5.0_20\jre\lib.

PATH − This environment variable should point to the appropriate JRE bin, e.g. C:\Program Files\Java\jre1.5.0_20\bin.

It is possible you have these variables set already, but just to make sure, here is how to check −

Go to the control panel and double-click on System. If you are a Windows XP user, you may have to open Performance and Maintenance before you see the System icon.

Go to the Advanced tab and click on Environment Variables.

Now check whether all the above mentioned variables are set properly.

The most important thing you will need, of course, is an actual running database with a table that you can query and modify. Install the database that is most suitable for you. You have plenty of choices, and the most common ones are −

MySQL DB − MySQL is an open source database. You can download it from the MySQL official site.
We recommend downloading the full Windows installation. In addition, download and install MySQL Administrator as well as MySQL Query Browser. These are GUI based tools that will make your development much easier.

Finally, download and unzip MySQL Connector/J (the MySQL JDBC driver) into a convenient directory. For the purpose of this tutorial we will assume that you have installed the driver at C:\Program Files\MySQL\mysql-connector-java-5.1.8. Accordingly, set the CLASSPATH variable to C:\Program Files\MySQL\mysql-connector-java-5.1.8\mysql-connector-java-5.1.8-bin.jar. Your driver version may vary based on your installation.

PostgreSQL DB − PostgreSQL is an open source database. You can download it from the PostgreSQL official site. The Postgres installation contains a GUI based administrative tool called pgAdmin III. JDBC drivers are also included as part of the installation.

Oracle DB − Oracle DB is a commercial database sold by Oracle. We assume that you have the necessary distribution media to install it.
The Oracle installation includes a GUI based administrative tool called Enterprise Manager. JDBC drivers are also included as part of the installation.

The latest JDK includes a JDBC-ODBC Bridge driver that makes most Open Database Connectivity (ODBC) drivers available to programmers using the JDBC API. Nowadays, most database vendors supply appropriate JDBC drivers along with the database installation, so you should not have to worry about this part.

For this tutorial we are going to use the MySQL database. When you install any of the above databases, the administrator ID is set to root and you are prompted to set a password of your choice. Using the root ID and password you can either create another user ID and password, or you can use the root ID and password for your JDBC application. Various database operations, such as database creation and deletion, require the administrator ID and password.

For the rest of this tutorial, we will use the MySQL database with username as ID and password as password. If you do not have sufficient privilege to create new users, you can ask your Database Administrator (DBA) to create a user ID and password for you.

To create the emp database, use the following steps −

Open a Command Prompt and change to the installation directory as follows −

C:\>
C:\>cd Program Files\MySQL\bin
C:\Program Files\MySQL\bin>

Note − The path to mysqld.exe may vary depending on the install location of MySQL on your system. You can also check the documentation on how to start and stop your database server.

Start the database server by executing the following command, if it is not already running.
C:\Program Files\MySQL\bin>mysqld
C:\Program Files\MySQL\bin>

Create the emp database by executing the following command −

C:\Program Files\MySQL\bin>mysqladmin create emp -u root -p
Enter password: ********
C:\Program Files\MySQL\bin>

To create the Employees table in the emp database, use the following steps −

Open a Command Prompt and change to the installation directory as follows −

C:\>
C:\>cd Program Files\MySQL\bin
C:\Program Files\MySQL\bin>

Login to the database as follows −

C:\Program Files\MySQL\bin>mysql -u root -p
Enter password: ********
mysql>

Create the table Employees as follows −

mysql> use emp;
mysql> create table Employees
   -> (
   -> id int not null,
   -> age int not null,
   -> first varchar (255),
   -> last varchar (255)
   -> );
Query OK, 0 rows affected (0.08 sec)
mysql>

Finally, create a few records in the Employees table as follows −

mysql> INSERT INTO Employees VALUES (100, 18, 'Zara', 'Ali');
Query OK, 1 row affected (0.05 sec)

mysql> INSERT INTO Employees VALUES (101, 25, 'Mahnaz', 'Fatma');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO Employees VALUES (102, 30, 'Zaid', 'Khan');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO Employees VALUES (103, 28, 'Sumit', 'Mittal');
Query OK, 1 row affected (0.00 sec)

mysql>

For a complete understanding of the MySQL database, study the MySQL Tutorial.

Download the latest version of the Apache Commons DBUtils jar file from commons-dbutils-1.7-bin.zip, the MySQL connector mysql-connector-java-5.1.28-bin.jar, Apache Commons DBCP commons-dbcp2-2.1.1-bin.zip, Apache Commons Pool commons-pool2-2.4.3-bin.zip and Apache Commons Logging commons-logging-1.2-bin.zip. At the time of writing this tutorial, we downloaded commons-dbutils-1.7-bin.zip, mysql-connector-java-5.1.28-bin.jar, commons-dbcp2-2.1.1-bin.zip, commons-pool2-2.4.3-bin.zip and commons-logging-1.2-bin.zip, and copied them into the C:\>Apache folder.
Set the APACHE_HOME environment variable to point to the base directory location where the Apache jars are stored on your machine, assuming you have extracted commons-dbutils-1.7-bin.zip into the Apache folder. Set the CLASSPATH environment variable to point to the DBUtils jar location, assuming you have stored commons-dbutils-1.7-bin.zip in the Apache folder.

Now you are ready to start experimenting with DBUtils. The next chapter gives you a sample example on DBUtils programming.

This chapter provides an example of how to create a simple JDBC application using the DBUtils library. It will show you how to open a database connection, execute a SQL query, and display the results. All the steps mentioned in this template example are explained in subsequent chapters of this tutorial.

There are six steps involved in building a JDBC application −

Import the packages − Requires that you include the packages containing the JDBC classes needed for database programming. Most often, using import java.sql.* will suffice.

Register the JDBC driver − Requires that you initialize a driver, so you can open a communication channel with the database.

Open a connection − Requires using the DriverManager.getConnection() method to create a Connection object, which represents a physical connection with the database.

Execute a query − Requires using an object of type Statement for building and submitting an SQL statement to the database.
Extract data from result set − Requires that you use the appropriate ResultSet.getXXX() method to retrieve the data from the result set.

Clean up the environment − Requires explicitly closing all the database resources rather than relying on the JVM's garbage collection.

This sample example can serve as a template when you need to create your own JDBC application in the future. It has been written based on the environment and database setup done in the previous chapter. Copy and paste the following example into MainApp.java, then compile and run it as follows −

MainApp.java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.ResultSetHandler;
import org.apache.commons.dbutils.handlers.BeanHandler;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      //Step 1: Register JDBC driver
      DbUtils.loadDriver(JDBC_DRIVER);

      //Step 2: Open a connection
      System.out.println("Connecting to database...");
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      //Step 3: Create a ResultSet Handler to handle Employee Beans
      ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class);

      try {
         Employee emp = queryRunner.query(conn,
            "SELECT * FROM employees WHERE first=?", resultHandler, "Sumit");
         //Display values
         System.out.print("ID: " + emp.getId());
         System.out.print(", Age: " + emp.getAge());
         System.out.print(", First: " + emp.getFirst());
         System.out.println(", Last: " + emp.getLast());
      } finally {
         DbUtils.close(conn);
      }
   }
}

Employee.java

The program is given below −

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Now let us compile the above example as follows −

C:\>javac MainApp.java Employee.java
C:\>

When you run MainApp, it produces the following result −

C:\>java MainApp
Connecting to database...
ID: 103, Age: 28, First: Sumit, Last: Mittal
C:\>

The following example will demonstrate how to create a record using an Insert query with the help of DBUtils. We will insert a record into the Employees table.

The syntax to create a query is given below −

String insertQuery = "INSERT INTO employees(id,age,first,last) VALUES (?,?,?,?)";
int insertedRecords = queryRunner.update(conn, insertQuery, 104, 30, "Sohan", "Kumar");

Where,

insertQuery − Insert query having placeholders.

queryRunner − QueryRunner object to insert the employee object into the database.

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run an insert query. To write our example, let us create a sample application.

Following is the content of the Employee.java.
public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      DbUtils.loadDriver(JDBC_DRIVER);
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      try {
         int insertedRecords = queryRunner.update(conn,
            "INSERT INTO employees(id,age,first,last) VALUES (?,?,?,?)",
            104, 30, "Sohan", "Kumar");
         System.out.println(insertedRecords + " record(s) inserted");
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, run the application. If everything is fine with your application, it will print the following message −

1 record(s) inserted.

The following example will demonstrate how to read a record using a Read query with the help of DBUtils. We will read a record from the Employees table.
The syntax for the read query is mentioned below −

ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class);
Employee emp = queryRunner.query(conn,
   "SELECT * FROM employees WHERE first=?", resultHandler, "Sumit");

Where,

resultHandler − ResultSetHandler object to map the result set to an Employee object.

queryRunner − QueryRunner object to read an employee object from the database.

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application.

Following is the content of the Employee.java.

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.ResultSetHandler;
import org.apache.commons.dbutils.handlers.BeanHandler;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      //Step 1: Register JDBC driver
      DbUtils.loadDriver(JDBC_DRIVER);

      //Step 2: Open a connection
      System.out.println("Connecting to database...");
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      //Step 3: Create a ResultSet Handler to handle Employee Beans
      ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class);

      try {
         Employee emp = queryRunner.query(conn,
            "SELECT * FROM employees WHERE id=?", resultHandler, 104);
         //Display values
         System.out.print("ID: " + emp.getId());
         System.out.print(", Age: " + emp.getAge());
         System.out.print(", First: " + emp.getFirst());
         System.out.println(", Last: " + emp.getLast());
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, run the application. If everything is fine with your application, it will print the following message −

ID: 104, Age: 30, First: Sohan, Last: Kumar

The following example will demonstrate how to update a record using an Update query with the help of DBUtils. We will update a record in the Employees table.

The syntax for the update query is as follows −

String updateQuery = "UPDATE employees SET age=? WHERE id=?";
int updatedRecords = queryRunner.update(conn, updateQuery, 33, 104);

Where,

updateQuery − Update query having placeholders.
queryRunner − QueryRunner object to update the employee object in the database.

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run an update query. To write our example, let us create a sample application.

Following is the content of the Employee.java.

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      DbUtils.loadDriver(JDBC_DRIVER);
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      try {
         int updatedRecords = queryRunner.update(conn,
            "UPDATE employees SET age=? WHERE id=?", 33, 104);
         System.out.println(updatedRecords + " record(s) updated.");
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, run the application. If everything is fine with your application, it will print the following message −

1 record(s) updated.
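QueryRunner can also execute the same parameterized statement many times in one call via its batch() method, which takes an Object[][] with one inner array of parameters per execution. The following is a minimal sketch, not part of the original tutorial: the class name BatchInsertApp and the sample rows (ids 105 and 106) are illustrative, and it assumes the same emp database, credentials, and Employees table used in the examples above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;

public class BatchInsertApp {
   // Same connection settings as the examples above
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      DbUtils.loadDriver(JDBC_DRIVER);
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      // One inner array of parameters per row to insert (sample data)
      Object[][] params = new Object[][] {
         {105, 35, "Amit", "Sharma"},
         {106, 29, "Neha", "Verma"}
      };

      try {
         // batch() runs the statement once per parameter row and
         // returns the per-statement update counts
         int[] results = queryRunner.batch(conn,
            "INSERT INTO employees(id,age,first,last) VALUES (?,?,?,?)", params);
         System.out.println(results.length + " statement(s) executed in batch");
      } finally {
         DbUtils.close(conn);
      }
   }
}
```

Batching avoids a round trip per row, which matters when inserting many records; the usual single-row update() examples in this guide remain the simpler choice for one-off statements.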
The following example will demonstrate how to delete a record using a Delete query with the help of DBUtils. We will delete a record from the Employees table.

The syntax for the delete query is mentioned below −

String deleteQuery = "DELETE FROM employees WHERE id=?";
int deletedRecords = queryRunner.update(conn, deleteQuery, 104);

Where,

deleteQuery − Delete query having placeholders.

queryRunner − QueryRunner object to delete the employee object from the database. Note that QueryRunner has no separate delete method; DELETE statements are executed with update().

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a delete query. To write our example, let us create a sample application.

Following is the content of the Employee.java.

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      DbUtils.loadDriver(JDBC_DRIVER);
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      try {
         int deletedRecords = queryRunner.update(conn,
            "DELETE from employees WHERE id=?", 104);
         System.out.println(deletedRecords + " record(s) deleted.");
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, run the application. If everything is fine with your application, it will print the following message −

1 record(s) deleted.

The org.apache.commons.dbutils.QueryRunner class is the central class in the DBUtils library. It executes SQL queries with pluggable strategies for handling ResultSets. This class is thread safe.

Following is the declaration for the org.apache.commons.dbutils.QueryRunner class −

public class QueryRunner extends AbstractQueryRunner

Step 1 − Create a connection object.

Step 2 − Use QueryRunner object methods to make database operations.

The following example will demonstrate how to read a record using the QueryRunner class. We will read one of the available records in the Employees table.
ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class);
Employee emp = queryRunner.query(conn,
   "SELECT * FROM employees WHERE first=?", resultHandler, "Sumit");

Where,

resultHandler − ResultSetHandler object to map the result set to an Employee object.

queryRunner − QueryRunner object to read an employee object from the database.

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application.

Following is the content of the Employee.java.

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.ResultSetHandler;
import org.apache.commons.dbutils.handlers.BeanHandler;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      //Step 1: Register JDBC driver
      DbUtils.loadDriver(JDBC_DRIVER);

      //Step 2: Open a connection
      System.out.println("Connecting to database...");
      conn = DriverManager.getConnection(DB_URL, USER, PASS);

      //Step 3: Create a ResultSet Handler to handle Employee Beans
      ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class);

      try {
         Employee emp = queryRunner.query(conn,
            "SELECT * FROM employees WHERE id=?", resultHandler, 103);
         //Display values
         System.out.print("ID: " + emp.getId());
         System.out.print(", Age: " + emp.getAge());
         System.out.print(", First: " + emp.getFirst());
         System.out.println(", Last: " + emp.getLast());
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, run the application. If everything is fine with your application, it will print the following message.

ID: 103, Age: 28, First: Sumit, Last: Mittal

The org.apache.commons.dbutils.AsyncQueryRunner class helps to execute long-running SQL queries with async support. This class is thread safe. It supports the same operations as QueryRunner but returns Future objects which can be used later to retrieve the result.
Following is the declaration for the org.apache.commons.dbutils.AsyncQueryRunner class −

public class AsyncQueryRunner extends AbstractQueryRunner

Step 1 − Create a connection object.

Step 2 − Use AsyncQueryRunner object methods to make database operations.

The following example will demonstrate how to update a record using the AsyncQueryRunner class. We will update one of the available records in the Employees table.

String updateQuery = "UPDATE employees SET age=? WHERE id=?";
future = asyncQueryRunner.update(conn, "UPDATE employees SET age=? WHERE id=?", 33, 103);

Where,

updateQuery − Update query having placeholders.

asyncQueryRunner − AsyncQueryRunner object to update the employee object in the database.

future − Future object used to retrieve the result later.

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run an update query in async mode. To write our example, let us create a sample application.

Following is the content of the Employee.java.

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.commons.dbutils.AsyncQueryRunner;
import org.apache.commons.dbutils.DbUtils;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException,
         InterruptedException, ExecutionException, TimeoutException {
      Connection conn = null;
      AsyncQueryRunner asyncQueryRunner =
         new AsyncQueryRunner(Executors.newCachedThreadPool());

      DbUtils.loadDriver(JDBC_DRIVER);
      conn = DriverManager.getConnection(DB_URL, USER, PASS);
      Future<Integer> future = null;

      try {
         future = asyncQueryRunner.update(conn,
            "UPDATE employees SET age=? WHERE id=?", 33, 103);
         // Wait up to 10 seconds for the asynchronous update to complete
         Integer updatedRecords = future.get(10, TimeUnit.SECONDS);
         System.out.println(updatedRecords + " record(s) updated.");
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, run the application. If everything is fine with your application, it will print the following message.

1 record(s) updated.

The org.apache.commons.dbutils.ResultSetHandler interface is responsible for converting ResultSets into objects.

Following is the declaration for the org.apache.commons.dbutils.ResultSetHandler interface −

public interface ResultSetHandler<T>

Step 1 − Create a connection object.

Step 2 − Create an implementation of ResultSetHandler.
Step 3 − Pass the resultSetHandler to the QueryRunner object and make database operations.

The following example will demonstrate how to map a record using the ResultSetHandler class. We will read one of the available records in the Employees table.

Employee emp = queryRunner.query(conn,
   "SELECT * FROM employees WHERE first=?", resultHandler, "Sumit");

Where,

resultHandler − ResultSetHandler object to map the result set to an Employee object.

queryRunner − QueryRunner object to read an employee object from the database.

To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application.

Following is the content of the Employee.java.

public class Employee {
   private int id;
   private int age;
   private String first;
   private String last;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public int getAge() { return age; }
   public void setAge(int age) { this.age = age; }
   public String getFirst() { return first; }
   public void setFirst(String first) { this.first = first; }
   public String getLast() { return last; }
   public void setLast(String last) { this.last = last; }
}

Following is the content of the MainApp.java file.
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.util.Arrays; import org.apache.commons.dbutils.DbUtils; import org.apache.commons.dbutils.QueryRunner; import org.apache.commons.dbutils.ResultSetHandler; public class MainApp { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; public static void main(String[] args) throws SQLException { Connection conn = null; QueryRunner queryRunner = new QueryRunner(); //Step 1: Register JDBC driver DbUtils.loadDriver(JDBC_DRIVER); //Step 2: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL, USER, PASS); //Step 3: Create a ResultSet Handler to handle Employee Beans ResultSetHandler<Object[]> handler = new ResultSetHandler<Object[]>() { public Object[] handle(ResultSet rs) throws SQLException { if (!rs.next()) { return null; } ResultSetMetaData meta = rs.getMetaData(); int cols = meta.getColumnCount(); Object[] result = new Object[cols]; for (int i = 0; i < cols; i++) { result[i] = rs.getObject(i + 1); } return result; } }; try { Object[] result = queryRunner.query(conn, "SELECT * FROM employees WHERE id=?", handler, 103); //Display values System.out.print("Result: " + Arrays.toString(result)); } finally { DbUtils.close(conn); } } } Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message. Connecting to database... Result: [103, 33, Sumit, Mittal] The org.apache.commons.dbutils.BeanHandler is the implementation of ResultSetHandler interface and is responsible to convert the first ResultSet row into a JavaBean. This class is thread safe. 
Following is the declaration for org.apache.commons.dbutils.BeanHandler class − public class BeanHandler<T> extends Object implements ResultSetHandler<T> Step 1 − Create a connection object. Step 1 − Create a connection object. Step 2 − Get implementation of ResultSetHandler as BeanHandler object. Step 2 − Get implementation of ResultSetHandler as BeanHandler object. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Following example will demonstrate how to read a record using BeanHandler class. We'll read one of the available record in Employees Table and map it to Employee bean. Employee emp = queryRunner.query(conn, "SELECT * FROM employees WHERE first=?", resultHandler, "Sumit"); Where, resultHandler − BeanHandler object to map result set to Employee object. resultHandler − BeanHandler object to map result set to Employee object. queryRunner − QueryRunner object to read employee object from database. queryRunner − QueryRunner object to read employee object from database. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. public class Employee { private int id; private int age; private String first; private String last; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } } Following is the content of the MainApp.java file. 
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import org.apache.commons.dbutils.DbUtils; import org.apache.commons.dbutils.QueryRunner; import org.apache.commons.dbutils.ResultSetHandler; import org.apache.commons.dbutils.handlers.BeanHandler; public class MainApp { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; public static void main(String[] args) throws SQLException { Connection conn = null; QueryRunner queryRunner = new QueryRunner(); //Step 1: Register JDBC driver DbUtils.loadDriver(JDBC_DRIVER); //Step 2: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL, USER, PASS); //Step 3: Create a ResultSet Handler to handle Employee Beans ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class); try { Employee emp = queryRunner.query(conn, "SELECT * FROM employees WHERE first=?", resultHandler, "Sumit"); //Display values System.out.print("ID: " + emp.getId()); System.out.print(", Age: " + emp.getAge()); System.out.print(", First: " + emp.getFirst()); System.out.println(", Last: " + emp.getLast()); } finally { DbUtils.close(conn); } } } Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message. ID: 103, Age: 28, First: Sumit, Last: Mittal The org.apache.commons.dbutils.BeanListHandler is the implementation of ResultSetHandler interface and is responsible to convert the ResultSet rows into list of Java Bean. This class is thread safe. Following is the declaration for org.apache.commons.dbutils.BeanListHandler class − public class BeanListHandler<T> extends Object implements ResultSetHandler<List<T>> Step 1 − Create a connection object. 
Step 1 − Create a connection object. Step 2 − Get implementation of ResultSetHandler as BeanListHandler object. Step 2 − Get implementation of ResultSetHandler as BeanListHandler object. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Following example will demonstrate how to read a list of records using BeanListHandler class. We'll read available records in Employees Table and map them to list of Employee beans. List<Employee> empList = queryRunner.query(conn, "SELECT * FROM employees", resultHandler); Where, resultHandler − BeanListHandler object to map result sets to list of Employee objects. resultHandler − BeanListHandler object to map result sets to list of Employee objects. queryRunner − QueryRunner object to read employee object from database. queryRunner − QueryRunner object to read employee object from database. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. public class Employee { private int id; private int age; private String first; private String last; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } } Following is the content of the MainApp.java file. 
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.List; import org.apache.commons.dbutils.DbUtils; import org.apache.commons.dbutils.QueryRunner; import org.apache.commons.dbutils.ResultSetHandler; import org.apache.commons.dbutils.handlers.BeanListHandler; public class MainApp { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; public static void main(String[] args) throws SQLException { Connection conn = null; QueryRunner queryRunner = new QueryRunner(); //Step 1: Register JDBC driver DbUtils.loadDriver(JDBC_DRIVER); //Step 2: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL, USER, PASS); //Step 3: Create a ResultSet Handler to handle List of Employee Beans ResultSetHandler<List<Employee>> resultHandler = new BeanListHandler<Employee>(Employee.class); try { List<Employee> empList = queryRunner.query(conn, "SELECT * FROM employees", resultHandler); for(Employee emp: empList ) { //Display values System.out.print("ID: " + emp.getId()); System.out.print(", Age: " + emp.getAge()); System.out.print(", First: " + emp.getFirst()); System.out.println(", Last: " + emp.getLast()); } } finally { DbUtils.close(conn); } } } Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message. ID: 100, Age: 18, First: Zara, Last: Ali ID: 101, Age: 25, First: Mahnaz, Last: Fatma ID: 102, Age: 30, First: Zaid, Last: Khan ID: 103, Age: 28, First: Sumit, Last: Mittal The org.apache.commons.dbutils.ArrayListHandler is the implementation of ResultSetHandler interface and is responsible to convert the ResultSet rows into a object[]. This class is thread safe. 
Following is the declaration for org.apache.commons.dbutils.ArrayListHandler class − public class ArrayListHandler extends AbstractListHandler<Object[]> Step 1 − Create a connection object. Step 1 − Create a connection object. Step 2 − Get implementation of ResultSetHandler as ArrayListHandler object. Step 2 − Get implementation of ResultSetHandler as ArrayListHandler object. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Following example will demonstrate how to read a list of records using ArrayListHandler class. We'll read available records in Employees Table as object[]. List<Object> result = queryRunner.query(conn, "SELECT * FROM employees", new ArrayListHandler()); Where, resultHandler − ArrayListHandler object to map result sets to list of object[]. resultHandler − ArrayListHandler object to map result sets to list of object[]. queryRunner − QueryRunner object to read employee object from database. queryRunner − QueryRunner object to read employee object from database. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. public class Employee { private int id; private int age; private String first; private String last; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } } Following is the content of the MainApp.java file. 
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.Arrays; import java.util.List; import org.apache.commons.dbutils.DbUtils; import org.apache.commons.dbutils.QueryRunner; import org.apache.commons.dbutils.handlers.ArrayListHandler; public class MainApp { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; public static void main(String[] args) throws SQLException { Connection conn = null; QueryRunner queryRunner = new QueryRunner(); //Step 1: Register JDBC driver DbUtils.loadDriver(JDBC_DRIVER); //Step 2: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL, USER, PASS); try { List<Object[]> result = queryRunner.query(conn, "SELECT * FROM employees" , new ArrayListHandler()); for(Object[] objects : result) { System.out.println(Arrays.toString(objects)); } } finally { DbUtils.close(conn); } } } Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message. [100, 18, Zara, Ali] [101, 25, Mahnaz, Fatma] [102, 30, Zaid, Khan] [103, 28, Sumit, Mittal] The org.apache.commons.dbutils.MapListHandler is the implementation of ResultSetHandler interface and is responsible to convert the ResultSet rows into list of Maps. This class is thread safe. Following is the declaration for org.apache.commons.dbutils.MapListHandler class − public class MapListHandler extends AbstractListHandler<Map<String,Object>> Step 1 − Create a connection object. Step 1 − Create a connection object. Step 2 − Get implementation of ResultSetHandler as MapListHandler object. Step 2 − Get implementation of ResultSetHandler as MapListHandler object. 
Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Step 3 − Pass resultSetHandler to QueryRunner object, and make database operations. Following example will demonstrate how to read a list of records using MapListHandler class. We'll read available records in Employees Table as list of maps. List<Map<String, Object>> result = queryRunner.query(conn, "SELECT * FROM employees", new MapListHandler()); Where, resultHandler − MapListHandler object to map result sets to list of maps. resultHandler − MapListHandler object to map result sets to list of maps. queryRunner − QueryRunner object to read employee object from database. queryRunner − QueryRunner object to read employee object from database. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. public class Employee { private int id; private int age; private String first; private String last; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } } Following is the content of the MainApp.java file. 
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.List; import java.util.Map; import org.apache.commons.dbutils.DbUtils; import org.apache.commons.dbutils.QueryRunner; import org.apache.commons.dbutils.handlers.MapListHandler; public class MainApp { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; public static void main(String[] args) throws SQLException { Connection conn = null; QueryRunner queryRunner = new QueryRunner(); //Step 1: Register JDBC driver DbUtils.loadDriver(JDBC_DRIVER); //Step 2: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL, USER, PASS); try { List<Map<String, Object>> result = queryRunner.query( conn, "SELECT * FROM employees", new MapListHandler()); System.out.println(result); } finally { DbUtils.close(conn); } } } Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message. Connecting to database... [{id=100, age=18, first=Zara, last=Ali}, {id=101, age=25, first=Mahnaz, last=Fatma}, {id=102, age=30, first=Zaid, last=Khan}, {id=103, age=33, first=Sumit, last=Mittal}] We can create our own custom handler by implementing ResultSetHandler interface or by extending any of the existing implementation of ResultSetHandler. In the example given below, we've created a Custom Handler, EmployeeHandler by extending BeanHandler class. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. 
public class Employee { private int id; private int age; private String first; private String last; private String name; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } public String getName() { return name; } public void setName(String name) { this.name = name; } } Following is the content of the EmployeeHandler.java file. import java.sql.ResultSet; import java.sql.SQLException; import org.apache.commons.dbutils.handlers.BeanHandler; public class EmployeeHandler extends BeanHandler<Employee> { public EmployeeHandler() { super(Employee.class); } @Override public Employee handle(ResultSet rs) throws SQLException { Employee employee = super.handle(rs); employee.setName(employee.getFirst() +", " + employee.getLast()); return employee; } } Following is the content of the MainApp.java file. 
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import org.apache.commons.dbutils.DbUtils; import org.apache.commons.dbutils.QueryRunner; import org.apache.commons.dbutils.ResultSetHandler; import org.apache.commons.dbutils.handlers.BeanHandler; public class MainApp { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; public static void main(String[] args) throws SQLException { Connection conn = null; QueryRunner queryRunner = new QueryRunner(); DbUtils.loadDriver(JDBC_DRIVER); conn = DriverManager.getConnection(DB_URL, USER, PASS); EmployeeHandler employeeHandler = new EmployeeHandler(); try { Employee emp = queryRunner.query(conn, "SELECT * FROM employees WHERE first=?", employeeHandler, "Sumit"); //Display values System.out.print("ID: " + emp.getId()); System.out.print(", Age: " + emp.getAge()); System.out.print(", Name: " + emp.getName()); } finally { DbUtils.close(conn); } } } Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message. ID: 103, Age: 28, Name: Sumit, Mittal In case column names in a database table and equivalent javabean object names are not similar then we can map them by using customized BasicRowProcessor object. See the example below. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. 
public class Employee { private int id; private int age; private String first; private String last; private String name; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } public String getName() { return name; } public void setName(String name) { this.name = name; } } Following is the content of the EmployeeHandler.java file. import java.sql.ResultSet; import java.sql.SQLException; import java.util.HashMap; import java.util.Map; import org.apache.commons.dbutils.handlers.BeanHandler; import org.apache.commons.dbutils.BeanProcessor; import org.apache.commons.dbutils.BasicRowProcessor; public class EmployeeHandler extends BeanHandler<Employee> { public EmployeeHandler() { super(Employee.class, new BasicRowProcessor(new BeanProcessor(mapColumnsToFields()))); } @Override public Employee handle(ResultSet rs) throws SQLException { Employee employee = super.handle(rs); employee.setName(employee.getFirst() +", " + employee.getLast()); return employee; } public static Map<String, String> mapColumnsToFields() { Map<String, String> columnsToFieldsMap = new HashMap<>(); columnsToFieldsMap.put("ID", "id"); columnsToFieldsMap.put("AGE", "age"); return columnsToFieldsMap; } } Following is the content of the MainApp.java file. 
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;

public class MainApp {
   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
   static final String DB_URL = "jdbc:mysql://localhost:3306/emp";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "admin";

   public static void main(String[] args) throws SQLException {
      Connection conn = null;
      QueryRunner queryRunner = new QueryRunner();

      DbUtils.loadDriver(JDBC_DRIVER);
      conn = DriverManager.getConnection(DB_URL, USER, PASS);
      EmployeeHandler employeeHandler = new EmployeeHandler();

      try {
         Employee emp = queryRunner.query(conn, "SELECT * FROM employees WHERE first=?", employeeHandler, "Sumit");

         //Display values
         System.out.print("ID: " + emp.getId());
         System.out.print(", Age: " + emp.getAge());
         System.out.print(", Name: " + emp.getName());
      } finally {
         DbUtils.close(conn);
      }
   }
}

Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message.

ID: 103, Age: 28, Name: Sumit, Mittal

So far, we've been using a connection object while using QueryRunner. We can also use a datasource seamlessly. The following example will demonstrate how to read a record using a read query with the help of QueryRunner and a datasource. We'll read a record from the Employees table.

QueryRunner queryRunner = new QueryRunner( dataSource );
Employee emp = queryRunner.query("SELECT * FROM employees WHERE first=?", resultHandler, "Sumit");

Where,

dataSource − DataSource object configured.

resultHandler − ResultSetHandler object to map result set to Employee object.
queryRunner − QueryRunner object to read employee object from database. To understand the above-mentioned concepts related to DBUtils, let us write an example which will run a read query. To write our example, let us create a sample application. Following is the content of the Employee.java. public class Employee { private int id; private int age; private String first; private String last; public int getId() { return id; } public void setId(int id) { this.id = id; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getLast() { return last; } public void setLast(String last) { this.last = last; } } Following is the content of the CustomDatasource.java. import javax.sql.DataSource; import org.apache.commons.dbcp2.BasicDataSource; public class CustomDataSource { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/emp"; // Database credentials static final String USER = "root"; static final String PASS = "admin"; private static DataSource datasource; private static final BasicDataSource basicDataSource; static { basicDataSource = new BasicDataSource(); basicDataSource.setDriverClassName(JDBC_DRIVER); basicDataSource.setUsername(USER); basicDataSource.setPassword(PASS); basicDataSource.setUrl(DB_URL); } public static DataSource getInstance() { return basicDataSource; } } Following is the content of the MainApp.java file. 
import java.sql.SQLException;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.ResultSetHandler;
import org.apache.commons.dbutils.handlers.BeanHandler;

public class MainApp {
   public static void main(String[] args) throws SQLException {
      // The datasource already knows the driver class, so no DbUtils.loadDriver() call is needed here
      QueryRunner queryRunner = new QueryRunner(CustomDataSource.getInstance());
      ResultSetHandler<Employee> resultHandler = new BeanHandler<Employee>(Employee.class);

      Employee emp = queryRunner.query("SELECT * FROM employees WHERE id=?", resultHandler, 103);

      //Display values
      System.out.print("ID: " + emp.getId());
      System.out.print(", Age: " + emp.getAge());
      System.out.print(", First: " + emp.getFirst());
      System.out.println(", Last: " + emp.getLast());
   }
}

Once you are done creating the source files, let us run the application. If everything is fine with your application, it will print the following message.

ID: 103, Age: 33, First: Sumit, Last: Mittal
How to create a Cumulative Sum Column in MySQL?
To create a cumulative sum column in MySQL, you need to create a variable and set its value to 0. A cumulative sum adds each row's value, step by step, to the running total of the previous rows.

Firstly, you need to create a variable with the help of SET. The syntax is as follows −

set @anyVariableName := 0;

The syntax to create a cumulative sum column in MySQL is as follows −

select yourColumnName1, yourColumnName2, ........N,
(@anyVariableName := @anyVariableName + yourColumnName2) as anyVariableName
from yourTableName order by yourColumnName1;

To understand the above concept, let us create a table. The following is the query to create a table −

mysql> create table CumulativeSumDemo
   -> (
   -> BookId int,
   -> BookPrice int
   -> );
Query OK, 0 rows affected (0.67 sec)

Insert some records in the table with the help of the insert statement. The query to insert records is as follows −

mysql> insert into CumulativeSumDemo values(101,400);
Query OK, 1 row affected (0.15 sec)

mysql> insert into CumulativeSumDemo values(102,500);
Query OK, 1 row affected (0.16 sec)

mysql> insert into CumulativeSumDemo values(103,600);
Query OK, 1 row affected (0.16 sec)

mysql> insert into CumulativeSumDemo values(104,1000);
Query OK, 1 row affected (0.18 sec)

Display all the inserted records with the help of a select statement. The query is as follows −

mysql> select * from CumulativeSumDemo;

The following is the output −

+--------+-----------+
| BookId | BookPrice |
+--------+-----------+
|    101 |       400 |
|    102 |       500 |
|    103 |       600 |
|    104 |      1000 |
+--------+-----------+
4 rows in set (0.00 sec)

To add the cumulative sum column, first you need to create a variable. The query is as follows −

mysql> set @CumulativeSum := 0;
Query OK, 0 rows affected (0.00 sec)

Implement the syntax discussed in the beginning to add a cumulative sum column. The query is as follows −

mysql> select BookId,BookPrice,(@CumulativeSum := @CumulativeSum + BookPrice) as CumSum
   -> from CumulativeSumDemo order by BookId;

The following is the output.
Here the cumulative sum column is also visible − +--------+-----------+--------+ | BookId | BookPrice | CumSum | +--------+-----------+--------+ | 101 | 400 | 400 | | 102 | 500 | 900 | | 103 | 600 | 1500 | | 104 | 1000 | 2500 | +--------+-----------+--------+ 4 rows in set (0.00 sec)
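The running-total logic used above is easy to prototype outside the database as well. The following is a minimal Python sketch (not part of the original tutorial) that reproduces the CumSum column from the table above using itertools.accumulate; note that on MySQL 8.0+ the same result can also be obtained with the window function SUM(BookPrice) OVER (ORDER BY BookId), without any user variable.

```python
from itertools import accumulate

# (BookId, BookPrice) rows from the CumulativeSumDemo table above
rows = [(101, 400), (102, 500), (103, 600), (104, 1000)]

# running total over BookPrice, playing the role of @CumulativeSum
cum_sums = list(accumulate(price for _, price in rows))

for (book_id, price), cum in zip(rows, cum_sums):
    print(book_id, price, cum)
# 101 400 400
# 102 500 900
# 103 600 1500
# 104 1000 2500
```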
Download image with Selenium Python
We can download images with Selenium webdriver in Python. First of all, we shall identify the image that we want to download with the help of the locators like id, class, xpath, and so on. We shall use the open method for opening the file in write and binary mode (is represented by wb). Then capture the screenshot of the element that we desire to capture with the screenshot_as_png method. Finally, the captured image must be written to the opened file with the write method.

Let us make an attempt to download the image of an element having the below html −

with open('Logo.png', 'wb') as file:
   file.write(driver.find_element_by_xpath('//*[@alt="I"]').screenshot_as_png)

from selenium import webdriver

#set chromedriver.exe path
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
driver.implicitly_wait(0.5)

#maximize browser
driver.maximize_window()

#launch URL
driver.get("https://www.tutorialspoint.com/index.htm")

#open file in write and binary mode
with open('Logo.png', 'wb') as file:
   #identify image to be captured
   l = driver.find_element_by_xpath('//*[@alt="Tutorialspoint"]')
   #write file
   file.write(l.screenshot_as_png)

#close browser
driver.quit()

File Logo.png gets created in the project folder. On opening the file, the captured logo image is displayed.
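The write-binary step that the script relies on can be exercised in isolation, without a browser. In the sketch below (not from the original article), dummy bytes stand in for the screenshot_as_png output, and the helper name and file path are illustrative assumptions:

```python
import os
import tempfile

def save_png(data: bytes, path: str) -> int:
    # 'wb' opens the file in write-binary mode, as in the Selenium example
    with open(path, 'wb') as f:
        return f.write(data)

# dummy bytes standing in for element.screenshot_as_png (PNG magic header + padding)
fake_png = b'\x89PNG\r\n\x1a\n' + b'\x00' * 16
path = os.path.join(tempfile.gettempdir(), 'Logo.png')

written = save_png(fake_png, path)
print(written, os.path.exists(path))  # → 24 True
```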
Find the tuples containing the given element from a list of tuples in Python
A list can have tuples as its elements. In this article we will learn how to identify those tuples which contain a specific search element, which is a string.

We can use a list comprehension with an if condition. After if we can mention a condition or a combination of conditions.

Live Demo

listA = [('Mon', 3), ('Tue', 1), ('Mon', 2), ('Wed', 3)]
test_elem = 'Mon'

#Given list
print("Given list:\n",listA)
print("Check value:\n",test_elem)

# Using for and if
res = [item for item in listA if item[0] == test_elem and item[1] >= 2]

# printing res
print("The tuples satisfying the conditions:\n ",res)

Running the above code gives us the following result −

Given list:
[('Mon', 3), ('Tue', 1), ('Mon', 2), ('Wed', 3)]
Check value:
Mon
The tuples satisfying the conditions:
[('Mon', 3), ('Mon', 2)]

We use the filter function along with a lambda function. In the filter condition we use the in operator to check for the presence of the element in the tuple.

Live Demo

listA = [('Mon', 3), ('Tue', 1), ('Mon', 2), ('Wed', 3)]
test_elem = 'Mon'

#Given list
print("Given list:\n",listA)
print("Check value:\n",test_elem)

# Using lambda and in
res = list(filter(lambda x:test_elem in x, listA))

# printing res
print("The tuples satisfying the conditions:\n ",res)

Running the above code gives us the following result −

Given list:
[('Mon', 3), ('Tue', 1), ('Mon', 2), ('Wed', 3)]
Check value:
Mon
The tuples satisfying the conditions:
[('Mon', 3), ('Mon', 2)]
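The filtering logic above can be folded into one reusable helper. The following is an illustrative sketch; the function name and the optional min_count parameter are our own additions, not from the article:

```python
def tuples_containing(pairs, elem, min_count=None):
    """Return the tuples whose first item equals elem; optionally
    require the second item to be at least min_count."""
    result = []
    for item in pairs:
        # match on the first element, then apply the optional threshold
        if item[0] == elem and (min_count is None or item[1] >= min_count):
            result.append(item)
    return result

listA = [('Mon', 3), ('Tue', 1), ('Mon', 2), ('Wed', 3)]

print(tuples_containing(listA, 'Mon'))               # [('Mon', 3), ('Mon', 2)]
print(tuples_containing(listA, 'Mon', min_count=3))  # [('Mon', 3)]
```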
JSP - JavaBeans
A JavaBean is a specially constructed Java class written in Java and coded according to the JavaBeans API specifications. Following are the unique characteristics that distinguish a JavaBean from other Java classes −

It provides a default, no-argument constructor.

It should be serializable, i.e., it can implement the Serializable interface.

It may have a number of properties which can be read or written.

It may have a number of "getter" and "setter" methods for the properties.

A JavaBean property is a named attribute that can be accessed by the user of the object. The attribute can be of any Java data type, including the classes that you define. A JavaBean property may be read, write, read only, or write only. JavaBean properties are accessed through two methods in the JavaBean's implementation class −

getPropertyName() − For example, if the property name is firstName, your method name would be getFirstName() to read that property. This method is called the accessor.

setPropertyName() − For example, if the property name is firstName, your method name would be setFirstName() to write that property. This method is called the mutator.

A read-only attribute will have only a getPropertyName() method, and a write-only attribute will have only a setPropertyName() method.
Consider a student class with a few properties −

package com.tutorialspoint;

public class StudentsBean implements java.io.Serializable {
   private String firstName = null;
   private String lastName = null;
   private int age = 0;

   public StudentsBean() { }

   public String getFirstName(){ return firstName; }
   public String getLastName(){ return lastName; }
   public int getAge(){ return age; }

   public void setFirstName(String firstName){ this.firstName = firstName; }
   public void setLastName(String lastName){ this.lastName = lastName; }
   public void setAge(Integer age){ this.age = age; }
}

The useBean action declares a JavaBean for use in a JSP. Once declared, the bean becomes a scripting variable that can be accessed by both scripting elements and other custom tags used in the JSP. The full syntax for the useBean tag is as follows −

<jsp:useBean id = "bean's name" scope = "bean's scope" typeSpec/>

Here values for the scope attribute can be a page, request, session or application based on your requirement. The value of the id attribute may be any value as long as it is a unique name among other useBean declarations in the same JSP.

Following example shows how to use the useBean action −

<html>
   <head>
      <title>useBean Example</title>
   </head>
   <body>
      <jsp:useBean id = "date" class = "java.util.Date" />
      <p>The date/time is <%= date %></p>
   </body>
</html>

You will receive the following result −

The date/time is Thu Sep 30 11:18:11 GST 2010

Along with the <jsp:useBean...> action, you can use the <jsp:getProperty/> action to access the get methods and the <jsp:setProperty/> action to access the set methods. Here is the full syntax −

<jsp:useBean id = "id" class = "bean's class" scope = "bean's scope">
   <jsp:setProperty name = "bean's id" property = "property name" value = "value"/>
   <jsp:getProperty name = "bean's id" property = "property name"/>
   ...........
</jsp:useBean>

The name attribute references the id of a JavaBean previously introduced to the JSP by the useBean action.
The property attribute is the name of the get or the set methods that should be invoked.

Following example shows how to access the data using the above syntax −

<html>
   <head>
      <title>get and set properties Example</title>
   </head>
   <body>
      <jsp:useBean id = "students" class = "com.tutorialspoint.StudentsBean">
         <jsp:setProperty name = "students" property = "firstName" value = "Zara"/>
         <jsp:setProperty name = "students" property = "lastName" value = "Ali"/>
         <jsp:setProperty name = "students" property = "age" value = "10"/>
      </jsp:useBean>
      <p>Student First Name: <jsp:getProperty name = "students" property = "firstName"/></p>
      <p>Student Last Name: <jsp:getProperty name = "students" property = "lastName"/></p>
      <p>Student Age: <jsp:getProperty name = "students" property = "age"/></p>
   </body>
</html>

Let us make the StudentsBean.class available in CLASSPATH. Access the above JSP; the following result will be displayed −

Student First Name: Zara
Student Last Name: Ali
Student Age: 10
Loop through a hash table using Javascript
Now let us create a forEach function that'll allow us to loop over all key-value pairs and call a callback on those values. For this, we just need to loop over each chain in the container and call the callback on the key and value pairs.

forEach(callback) {
   // For each chain
   this.container.forEach(elem => {
      // For each element in each chain call callback on KV pair
      elem.forEach(({ key, value }) => callback(key, value));
   });
}

You can test this using −

let ht = new HashTable();
ht.put(10, 94);
ht.put(20, 72);
ht.put(30, 1);
ht.put(21, 6);
ht.put(15, 21);
ht.put(32, 34);

let sum = 0;
// Add all the values together
ht.forEach((k, v) => sum += v);
console.log(sum);

This will give the output −

228
Jumbled Strings | Practice | GeeksforGeeks
You are provided an input string S and the string "GEEKS". Find the number of ways in which the subsequence "GEEKS" can be formed from the string S.

Example 1:
Input : S = "GEEKS"
Output: 1
Explanation: "GEEKS" occurs in S only once.

Example 2:
Input: S = "AGEEKKSB"
Output: 2
Explanation: Subsequence "GEEKS" occurs in S two times. First one is taking the first 'K' into consideration and the second one is taking the second 'K'.

Your Task:
You don't need to read or print anything. Your task is to complete the function TotalWays() which takes string S as input parameter and returns total ways modulo 10^9 + 7.

Expected Time Complexity: O(N * K) where N is the length of the string and K is constant.
Expected Space Complexity: O(N * K)

Constraints:
1 <= Length of string <= 10000

0 himanshujain457 3 months ago

LCS variation: count how many times the string "GEEKS" occurs in string str as a subsequence:

class Solution {
    public int TotalWays(String str) {
        int n = str.length();
        String t = "GEEKS";
        int mod = (int) 1e9 + 7;
        int[][] dp = new int[str.length() + 1][6];
        for (int i = 0; i <= n; i++) {
            for (int j = 0; j <= 5; j++) {
                if (i == 0) dp[i][j] = 0;
                if (j == 0) dp[i][j] = 1;
            }
        }
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= 5; j++) {
                if (str.charAt(i - 1) == t.charAt(j - 1))
                    dp[i][j] = (dp[i - 1][j - 1] % mod + dp[i - 1][j] % mod) % mod;
                else
                    dp[i][j] = dp[i - 1][j] % mod;
            }
        }
        return dp[n][5];
    }
}

+1 singhanshul2807 3 months ago

// { Driver Code Starts
//Initial Template for Java
import java.util.*;
import java.lang.*;

//User function Template for Java
class Solution {
    static int count;
    static HashMap<String,Integer> hm = null;

    public int TotalWays(String s1){
        count = 0;
        hm = new HashMap<>();
        return solve(s1, "GEEKS", "", s1.length() - 1, 4);
    }

    public int solve(String s1, String s2, String build, int x1, int x2){
        // System.out.println(build);
        if (build.equals(s2)) {
            return 1;
        }
        if (x1 < 0 || x2 < 0) return 0;
        if (hm.containsKey(x1 + ":" + x2)) return hm.get(x1 + ":" + x2);
        if (s1.charAt(x1) == s2.charAt(x2)){
            int val = (solve(s1, s2, s1.charAt(x1) + build, x1 - 1, x2 - 1) +
                solve(s1, s2, build, x1 - 1, x2)) % 1000000007;
            hm.put(x1 + ":" + x2, val);
            return val;
        } else {
            int val = solve(s1, s2, build, x1 - 1, x2) % 1000000007;
            hm.put(x1 + ":" + x2, val);
            return val;
        }
    }
}

0 abhishekguptaaa 3 months ago

Same concept as counting distinct occurrences of a string in another string as a subsequence:

class Solution {
public:
    int mod = 1e9 + 7;
    int TotalWays(string str) {
        int l = str.length();
        string s = "GEEKS";
        int m = s.length();
        int dp[l + 1][m + 1];
        for (int i = 0; i < l + 1; i++) {
            for (int j = 0; j < m + 1; j++) {
                if (i == 0) dp[i][j] = 0;
                if (j == 0) dp[i][j] = 1;
            }
        }
        for (int i = 1; i < l + 1; i++) {
            for (int j = 1; j < m + 1; j++) {
                if (str[i - 1] == s[j - 1])
                    dp[i][j] = (dp[i - 1][j - 1] % mod + dp[i - 1][j] % mod) % mod;
                else
                    dp[i][j] = dp[i - 1][j] % mod;
            }
        }
        return dp[l][m] % mod;
    }
};

+1 rainamnesh0160 7 months ago

int jumble(string s1, string s2, int x, int y) {
    int t[x + 1][y + 1];
    for (int i = 0; i < x + 1; i++) {
        for (int j = 0; j < y + 1; j++) {
            if (i == 0) {
                if (j == 0) t[i][j] = 1;
                else t[i][j] = 0;
            }
            else if (s1[i - 1] != s2[j - 1]) {
                t[i][j] = t[i - 1][j] % mod;
            }
            else {
                t[i][j] = (t[i - 1][j] % mod + t[i - 1][j - 1] % mod) % mod;
            }
        }
    }
    return t[x][y];
}

int TotalWays(string str) {
    string str1 = "GEEKS";
    return jumble(str, str1, str.length(), 5);
}

0 Prateek Sharma This comment was deleted.

0 Aman Kaushik 9 months ago

Classic subsequence pattern. Here is a bottom-up DP solution:

int TotalWays(string str) {
    int l = str.length();
    string str2 = "GEEKS";
    int mod = 1000000007;
    int t[l + 1][6];
    for (int i = 0; i < l + 1; i++)
        for (int j = 0; j < 6; j++) {
            if (i == 0) t[i][j] = 0;
            if (j == 0) t[i][j] = 1;
            if (i == 0 || j == 0) continue;
            if (str[i - 1] != str2[j - 1]) {
                t[i][j] = t[i - 1][j];
            } else {
                t[i][j] = (t[i - 1][j - 1] + t[i - 1][j]) % mod;
            }
        }
    return t[l][5];
}

This will work! I hope this helps.

0 Abhijeet Dwivedi 1 year ago

Classical knapsack problem. Check recursively the last element and see what choices you have in choosing that element as a part of the subsequence.
0 Shivanshu Tiwari 2 years ago

Solution without DP, simple PnC — all test cases pass, 0.01 sec: https://ide.geeksforgeeks.o...

0 Hardik Gupta 2 years ago

https://ide.geeksforgeeks.o... Execution time: 0.01. Simple solution, little bit of PnC.

0 Ayush Kumar 2 years ago

fn(p,q):
   if s1[p] == s2[q] : fn(p-1,q) + fn(p-1,q-1)
   otherwise : fn(p-1,q)

p is the index in the original string and q is for the string "GEEKS". So, fn(n,5) is made.
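The tabulation idea the commenters describe can also be sketched in Python: a 1-D DP over the target string, iterated backwards so each character of S is counted at most once per position. This is an illustrative sketch (the function name is mine), not the judge's reference solution:

```python
MOD = 10**9 + 7

def total_ways(s, target="GEEKS"):
    # dp[j] = number of ways to form target[:j] as a subsequence of
    # the prefix of s processed so far
    dp = [0] * (len(target) + 1)
    dp[0] = 1  # the empty prefix can always be formed in exactly one way
    for ch in s:
        # iterate j backwards so ch is used at most once per position
        for j in range(len(target), 0, -1):
            if ch == target[j - 1]:
                dp[j] = (dp[j] + dp[j - 1]) % MOD
    return dp[len(target)]

print(total_ways("GEEKS"))     # 1
print(total_ways("AGEEKKSB"))  # 2
```

The backward inner loop is what keeps the table 1-D; a forward loop would let one character of S contribute twice in the same pass.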
Batch Script - EXPAND
This batch command extracts files from compressed .cab cabinet files.

EXPAND [cabinetfilename]

@echo off
EXPAND excel.cab

The above command will extract the contents of the file excel.cab in the current location.
Forecasting Tesla’s Stock Price Using Autoregression | by Nathan Thomas | Towards Data Science
Tesla has been making waves in financial markets over the last few months. Previously named the most shorted stock in the US [1], Tesla's stock price has since catapulted the electric carmaker to a market capitalization of $278 billion [2]. Its latest quarterly results suggest that it is now eligible to be added to the S&P 500, which it is currently not a member of, despite being the 12th largest company in the US [3].

Amid market volatility, various trading strategies and a sense of "FOMO" (fear of missing out), predicting the returns of Tesla's stock is a difficult task. However, we are going to use Python to forecast Tesla's stock price returns using autoregression.

Exploring the data

First, we need to import the data. We may use historical stock price data downloaded from Yahoo Finance. We're going to use the "Close" price for this analysis.

import pandas as pd

df = pd.read_csv("TSLA.csv", index_col=0, parse_dates=[0])
df.head()

To determine the order for the ARMA model, we can first plot a partial autocorrelation function. This gives a graphical interpretation of the amount of correlation between the dependent variable and the lags of itself, which is not explained by correlations at all lower-order lags. From the PACF below, we can see that the significance of the lags cuts off after lag 1, which suggests we should use an autoregressive (AR) model [4].

# Plot PACF
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf, pacf

plt.bar(x=np.arange(0, 41), height=pacf(df.Close))
plt.title("PACF")

When plotting the autocorrelation function, we get a slightly different result. The series is infinite and slowly damps out, which suggests an AR or ARMA model [4]. Taking both the PACF and the ACF into account, we are going to use an AR model.

# Plot ACF
plt.bar(x=np.arange(0, 41), height=acf(df.Close))
plt.title("ACF")

Pre-processing the data

Before we run the model we must make sure we are using stationary data.
Stationarity refers to a characteristic in which the way the data moves doesn't change over time. Looking at the raw stock price seen earlier in the article, it is clear that the series is not stationary. We can see this as the stock price increases over time in a seemingly exponential manner.

Therefore, to make the series stationary we difference the series, which essentially means to subtract today's value from tomorrow's value. This results in the series revolving around a constant mean (0), giving us the stock returns instead of the stock price.

We are also going to lag the differenced series by 1, which brings yesterday's value forward to today. This is so we can obtain our AR term (Yt-1). After putting these values into the same DataFrame, we split the data into training and testing sets. In the code, the data is split roughly into 80:20 respectively.

# Make the data stationary by differencing
tsla = df.Close.diff().fillna(0)

# Create lag
tsla_lag_1 = tsla.shift(1).fillna(0)

# Put all into one DataFrame
df_regression = pd.DataFrame(tsla)
df_regression["Lag1"] = tsla_lag_1

# Split into train and test data
df_regression_train = df_regression.iloc[0:200]
df_regression_test = df_regression.iloc[200:]

tsla.plot()

Forming the AR model

Now, how many values should we use to predict the next observation? Using all the past 200 values may not give a good estimate as, intuitively, stock price activity from 200 days ago is unlikely to have a significant effect on today's value, as numerous factors may have changed since then. This could include earnings, competition, season and more.

Therefore, to find the optimal window of observations to use in the regression, one method we can use is to run a regression with an expanding window. This method, detailed in the code below, runs a regression with one past observation, recording the r-squared value (goodness-of-fit), and then repeats this process, expanding past observations by 1 each time.
For economic interpretation, I've set the limit on the size of the window at 30 days.

# Run expanding window regression to find optimal window
import statsmodels.api as sm

n = 1  # start at one observation; iloc[-0:] would select the whole series
rsquared = []
while n <= 30:
    y = df_regression_train["Close"].iloc[-n:]
    x = df_regression_train["Lag1"].iloc[-n:]
    x = sm.add_constant(x)
    model = sm.OLS(y, x)
    results = model.fit()
    rsquared.append(results.rsquared)
    n += 1

Looking at the r-squared plot of each iteration, we can see that it is high around 1–5 iterations, and also has a peak at 13 past values. It may seem tempting to choose one of the values between 1 and 5; however, the very small sample size will likely mean that our regression is statistically biased, so it wouldn't give us the best result. Therefore let's choose the second peak at 13 observations, as this is a more sufficient sample size, which gives an r-squared of around 0.437 (i.e. the model explains 43% of the variation in the data).

Running the AR model on the training data

The next step is to use our window of 13 past observations to fit the AR(1) model. We may do this using the OLS function in statsmodels. Code below:

# AR(1) model with static coefficients
import statsmodels.api as sm

y = df_regression_train["Close"].iloc[-13:]
x = df_regression_train["Lag1"].iloc[-13:]
x = sm.add_constant(x)
model = sm.OLS(y, x)
results = model.fit()
results.summary()

As we can see in the statistical summary, the p-value of both the constant and the first lag is significant at the 10% significance level. Looking at the sign of the coefficients, the positive sign on the constant suggests that, all else being equal, stock price returns should be positive. Also, the negative sign on the first lag suggests that the past value of the stock return is lower than today's value, ceteris paribus, which also maintains the narrative that stock returns increase over time.

Great, now let's use those coefficients to find the fitted value for Tesla's stock returns so we can plot the model against the original data.
Our model may now be specified as an AR(1) on the differenced (return) series:

return_t = β0 + β1 · return_(t−1) + ε_t

with the coefficients taken from the regression summary above.

Plot Residuals (Actual − Fitted)

The residuals suggest that the model performs better in 2019, but in 2020, as volatility increased, the model performed considerably worse (the residuals are larger). This is intuitive as the volatility experienced in the March 2020 selloff had a large impact on US stocks, while the quick and sizeable rebound was particularly felt by tech stocks. This, along with the increased betting on Tesla stock by retail traders on platforms such as Robinhood, has increased price volatility, thus making it harder to predict.

Given these factors, along with our previous r-squared of around 43%, we would not expect our AR(1) model to predict the exact stock return. Instead, we can test the model's accuracy by calculating its "hit rate", i.e. when our model predicted a positive value and the actual value was also positive, and vice versa. Summing up instances of true positives and true negatives, the accuracy of our model comes out at around 55%, which is fairly good for this simple model.

Fit the model to the test data

Now, let's apply the same methodology to the test data to see how our model performs out-of-sample.

# Calculate hit rate
true_neg_test = np.sum((df_2_test["Fitted Value"] < 0) & (df_2_test["Actual"] < 0))
true_pos_test = np.sum((df_2_test["Fitted Value"] > 0) & (df_2_test["Actual"] > 0))
accuracy = (true_neg_test + true_pos_test) / len(df_2_test)
print(accuracy)
# Output: 0.6415

Our hit rate has improved to 64% when applying the model to the test data, which is a promising improvement! Next steps to improve its accuracy may include running a rolling regression, where coefficients change with each iteration, or perhaps incorporating a moving average (MA) element into the model.

Thanks for reading! Please feel free to leave any comments for any insights you may have. The full Jupyter Notebook which contains the source code I used to do this project can be found on my GitHub repository.
References

[1] Reinicke, Carmen (2020). "Tesla just became the most shorted stock in the US, again (TSLA)". Markets Insider. Available at: https://markets.businessinsider.com/news/stocks/tesla-stock-most-shorted-us-beats-apple-highest-short-interest-2020-1-1028823046

[2] Yahoo Finance, as of 6 August 2020.

[3] Stevens, Pippa (2020). "Tesla could soon join the S&P 500 — but inclusion isn't automatic, even with a full year of profitability". CNBC. Available at: https://www.cnbc.com/2020/07/21/tesla-isnt-a-gurantee-for-the-sp-500-even-with-year-of-profits.html

[4] Johnston, J., DiNardo, J. (1997). "Econometric Methods, Fourth Edition".

Disclaimer: All views expressed in this article are my own, and are not in any way associated with any financial entity. I am not a trader and am not making any money from the methods used in this article. This is not financial advice.
How to read data from .csv file in Java?
A library named OpenCSV provides APIs to read and write data from/into a .csv file. Here it is explained how to read the contents of a .csv file using a Java program.

<dependency>
   <groupId>com.opencsv</groupId>
   <artifactId>opencsv</artifactId>
   <version>4.4</version>
</dependency>
<dependency>
   <groupId>org.apache.commons</groupId>
   <artifactId>commons-lang3</artifactId>
   <version>3.9</version>
</dependency>

The CSVReader class of the com.opencsv package represents a simple CSV reader. While instantiating this class you need to pass a Reader object representing the file to be read as a parameter to its constructor. It provides methods named readAll() and readNext() to read the contents of a .csv file.

The readNext() method of the CSVReader class reads the next line of the .csv file and returns it in the form of a String array. The following Java program demonstrates how to read the contents of a .csv file using the readNext() method.

import java.io.FileReader;
import com.opencsv.CSVReader;

public class ReadFromCSV {
   public static void main(String args[]) throws Exception {
      //Instantiating the CSVReader class
      CSVReader reader = new CSVReader(new FileReader("D://sample.csv"));
      //Reading the contents of the csv file
      String line[];
      while ((line = reader.readNext()) != null) {
         for (int i = 0; i < line.length; i++) {
            System.out.print(line[i] + " ");
         }
         System.out.println(" ");
      }
   }
}

id name salary start_date dept
1 Rick 623.3 2012-01-01 IT
2 Dan 515.2 2013-09-23 Operations
3 Michelle 611 2014-11-15 IT
4 Ryan 729 2014-05-11 HR
5 Gary 843.25 2015-03-27 Finance
6 Nina 578 2013-05-21 IT
7 Simon 632.8 2013-07-30 Operations
8 Guru 722.5 2014-06-17 Finance

This method reads the contents of a .csv file at once into a List object of String array type. The following Java program demonstrates how to read the contents of a .csv file using the readAll() method.
import java.io.FileReader;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import com.opencsv.CSVReader;

public class ReadFromCSV {
   public static void main(String args[]) throws Exception {
      //Instantiating the CSVReader class
      CSVReader reader = new CSVReader(new FileReader("D://sample.csv"));
      //Reading the contents of the csv file
      List list = reader.readAll();
      //Getting the Iterator object
      Iterator it = list.iterator();
      while (it.hasNext()) {
         String[] str = (String[]) it.next();
         System.out.println(Arrays.toString(str));
      }
   }
}

[id, name, salary, start_date, dept]
[1, Rick, 623.3, 2012-01-01, IT]
[2, Dan, 515.2, 2013-09-23, Operations]
[3, Michelle, 611, 2014-11-15, IT]
[4, Ryan, 729, 2014-05-11, HR]
[5, Gary, 843.25, 2015-03-27, Finance]
[6, Nina, 578, 2013-05-21, IT]
[7, Simon, 632.8, 2013-07-30, Operations]
[8, Guru, 722.5, 2014-06-17, Finance]

In addition to the above two methods, you can also get the iterator of the CSVReader object and read the contents of the .csv file using the hasNext() and next() methods of the Iterator.

import java.io.FileReader;
import java.util.Arrays;
import java.util.Iterator;
import com.opencsv.CSVReader;

public class ReadFromCSV {
   public static void main(String args[]) throws Exception {
      //Instantiating the CSVReader class
      CSVReader reader = new CSVReader(new FileReader("D://sample.csv"));
      //Getting the iterator object for this reader
      Iterator it = reader.iterator();
      String line[];
      while (it.hasNext()) {
         line = (String[]) it.next();
         System.out.println(Arrays.toString(line));
         System.out.println(" ");
      }
   }
}

[id, name, salary, start_date, dept]
[1, Rick, 623.3, 2012-01-01, IT]
[2, Dan, 515.2, 2013-09-23, Operations]
[3, Michelle, 611, 2014-11-15, IT]
[4, Ryan, 729, 2014-05-11, HR]
[5, Gary, 843.25, 2015-03-27, Finance]
[6, Nina, 578, 2013-05-21, IT]
[7, Simon, 632.8, 2013-07-30, Operations]
[8, Guru, 722.5, 2014-06-17, Finance]
PHP variable Variables
In PHP, it is possible to set a variable name dynamically. Such a variable uses the value of an existing variable as its name. A variable variable is defined with two $ signs as prefix.

 Live Demo

<?php
$var1 = "xyz";   // normal variable
$$var1 = "abcd"; // variable variable
echo $var1 . "\n";
echo $$var1 . "\n";
echo "{$$var1} $xyz";
?>

This script produces the following output −

xyz
abcd
abcd abcd

Note that the value of $$var1 is the same as $xyz, xyz being the value of $var1.

A numeric value of a normal variable cannot be used as a variable variable.

 Live Demo

<?php
$var1 = 100;  // normal variable
$$var1 = 200; // variable variable
echo $var1 . "\n";
echo $$var1 . "\n";
echo $100;
?>

When this script is executed, the following result is displayed −

PHP Parse error: syntax error, unexpected '100' (T_LNUMBER), expecting variable (T_VARIABLE) or '{' or '$' line 6

It is also possible to define a variable variable in terms of an array subscript. In the following example, a variable variable is defined using the 0th element of a normal array.

 Live Demo

<?php
$var1 = array("aa", "bb"); // normal variable
${$var1[0]} = 10;          // variable variable with array element
echo $var1[0] . "\n";
echo $aa . "\n";
echo ${$var1[0]} . "\n";
?>

This will produce the following result −

aa
10
10

Class properties are also accessible using variable property names. This feature is useful when the property name is held in a variable, which can even be indexed like an array, as below.

<?php
class branches {
   var $u = "Architecture";
   var $ugCourses = array("CSE", "MECH", "CIVIL");
}

$obj = new branches();
$courses = "ugCourses";
echo $obj->{$courses[0]} . "\n";
echo $obj->{$courses}[0] . "\n";
?>

This will produce the following result −

Architecture
CSE
Install Apache Web Server CentOS 7
In this chapter, we will learn a little about the background of how Apache HTTP Server came into existence and then install the most current stable version on CentOS Linux 7.

Apache is a web server that has been around for a long time. In fact, almost as long as the existence of http itself!

Apache started out as a rather small project at the National Center for Supercomputing Applications, also known as NCSA. In the mid-90's "httpd", as it was called, was by far the most popular web-server platform on the Internet, having about 90% or more of the market share. At this time, it was a simple project. Skilled I.T. staff known as webmasters were responsible for maintaining web server platforms and web server software as well as both front-end and back-end site development. At the core of httpd was its ability to use custom modules known as plugins or extensions. A webmaster was also skilled enough to write patches to the core server software.

Sometime in the late-mid-90's, the senior developer and project manager for httpd left NCSA to do other things. This left the most popular web-daemon in a state of stagnation. Since the use of httpd was so widespread, a group of seasoned httpd webmasters called for a summit regarding the future of httpd. It was decided to coordinate and apply the best extensions and patches into a current stable release. Then, the current grand-daddy of http servers was born and christened Apache HTTP Server.

Little Known Historical Fact − Apache was not named after a Native American tribe of warriors. It was in fact coined and named with a twist: being made from many fixes (or patches) from many talented computer scientists: a patchy server, or Apache.

Step 1 − Install httpd via yum.

yum -y install httpd

At this point Apache HTTP Server will install via yum.

Step 2 − Edit the httpd.conf file specific to your httpd needs.

With a default Apache install, the configuration file for Apache is named httpd.conf and is located in /etc/httpd/. So, let's open it in vim.
The first few lines of httpd.conf opened in vim −

#
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.4/> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.4/mod/directives.html>
# for a discussion of each configuration directive.

We will make the following changes to allow our CentOS install to serve http requests from http port 80.

# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 80

From here, we can change Apache to listen on a certain port or IP address. For example, we may want to run httpd services on an alternative port such as 8080, or we may have our web server configured with multiple interfaces with separate IP addresses. Binding to a specific address keeps Apache from attaching to every bound IP address on the host, which is useful for restricting the server to only IPv4 or IPv6 traffic, or to a single interface on a multi-homed host.

#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
Listen 10.0.0.25:80
#Listen 80

The "document root" is the default directory where Apache will look for an index file to serve for requests upon visiting your server: http://www.yoursite.com/ will retrieve and serve the index file from your document root.

#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "/var/www/html"

Step 3 − Start and enable the httpd service.

[root@centos rdc]# systemctl start httpd && systemctl enable httpd
[root@centos rdc]#

Step 4 − Configure the firewall to allow access to port 80 requests, then reload the firewall so the permanent rule takes effect.

[root@centos]# firewall-cmd --add-service=http --permanent
[root@centos]# firewall-cmd --reload
What is the use of $Lastexitcode and $? Variable in PowerShell?
$LastExitCode in PowerShell is the number that represents the exit code/error level of the last script or application executed, and $? (dollar hook) also represents the success or the failure of the last command. In general, both represent the same thing but the output format is different: the first is a number (the native exit code, 0 on success) and the latter is a Boolean (True or False). For example,

PS C:\WINDOWS\system32> $LASTEXITCODE
0
PS C:\WINDOWS\system32> $?
True

As you see in the output, 0 represents the success status for $LastExitCode and True for $?.

Now if the command doesn't run successfully, what output will you get for both? Check the below example.

PS C:\WINDOWS\system32> ping anyhost.test
Ping request could not find host anyhost.test. Please check the name and try again.
PS C:\WINDOWS\system32> $LASTEXITCODE
1
PS C:\WINDOWS\system32> $?
True

Note that $? reflects only the immediately preceding command; here that is the (successful) evaluation of $LASTEXITCODE itself, which is why it still shows True. To see False for the failed ping, check $? immediately after the ping, before running anything else.

And if you terminate a command in the middle of its execution (for example with Ctrl+C), $LastExitCode will hold the termination code, while $? again reads True because the intervening $LASTEXITCODE query succeeded.

PS C:\WINDOWS\system32> ping google.com

Pinging google.com [172.217.166.174] with 32 bytes of data:
Reply from 172.217.166.174: bytes=32 time=30ms TTL=55

Ping statistics for 172.217.166.174:
    Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 30ms, Maximum = 30ms, Average = 30ms
Control-C
PS C:\WINDOWS\system32> $LASTEXITCODE
-1073741510
PS C:\WINDOWS\system32> $?
True
Detect an Unknown Language using Python - GeeksforGeeks
14 Jan, 2020

The idea behind language detection is examining the characters, expressions and words in the text. The main principle is to detect commonly used words, like "to" and "of" in English. Python provides various modules for language detection. In this article, the modules covered are:

langdetect
textblob
langid

Method 1: Using the langdetect library

This module is a port of Google's language-detection library that supports 55 languages. It does not come with Python's standard utility modules, so it needs to be installed externally. To install it, type the below command in the terminal.

pip install langdetect

# Python program to demonstrate
# langdetect
from langdetect import detect

# Specifying the language for
# detection
print(detect("Geeksforgeeks is a computer science portal for geeks"))
print(detect("Geeksforgeeks - это компьютерный портал для гиков"))
print(detect("Geeksforgeeks es un portal informático para geeks"))
print(detect("Geeksforgeeks是面向极客的计算机科学门户"))
print(detect("Geeksforgeeks geeks के लिए एक कंप्यूटर विज्ञान पोर्टल है"))
print(detect("Geeksforgeeksは、ギーク向けのコンピューターサイエンスポータルです。"))

Output:

en
ru
es
no
hi
ja

Method 2: Using the textblob library

This module is used for natural language processing (NLP) tasks such as noun phrase extraction, sentiment analysis, classification, translation, and more.
To install this module, type the below command in the terminal.

pip install textblob

Example:

# Python program to demonstrate
# textblob
from textblob import TextBlob

L = ["Geeksforgeeks is a computer science portal for geeks",
     "Geeksforgeeks - это компьютерный портал для гиков",
     "Geeksforgeeks es un portal informático para geeks",
     "Geeksforgeeks是面向极客的计算机科学门户",
     "Geeksforgeeks geeks के लिए एक कंप्यूटर विज्ञान पोर्टल है",
     "Geeksforgeeksは、ギーク向けのコンピューターサイエンスポータルです。",
     ]

for i in L:
    # Language Detection
    lang = TextBlob(i)
    print(lang.detect_language())

Output:

en
ru
es
zh-CN
hi
ja

Method 3: Using the langid library

This module is a standalone language identification tool. It is pre-trained over a large number of languages (currently 97). It is a single .py file with minimal dependencies. To install it, type the below command in the terminal.

pip install langid

Example:

# Python program to demonstrate
# langid
import langid

L = ["Geeksforgeeks is a computer science portal for geeks",
     "Geeksforgeeks - это компьютерный портал для гиков",
     "Geeksforgeeks es un portal informático para geeks",
     "Geeksforgeeks是面向极客的计算机科学门户",
     "Geeksforgeeks geeks के लिए एक कंप्यूटर विज्ञान पोर्टल है",
     "Geeksforgeeksは、ギーク向けのコンピューターサイエンスポータルです。",
     ]

for i in L:
    # Language detection
    print(langid.classify(i))

Output:

('en', -119.93012762069702)
('ru', -641.3409600257874)
('es', -191.01083326339722)
('zh', -199.18277835845947)
('hi', -286.99300467967987)
('ja', -875.6610476970673)
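The commonly-used-words principle mentioned at the start of the article can be sketched without any third-party library. The tiny stop-word samples below are illustrative assumptions chosen for this example, not a real model:

```python
# Naive language detection: count hits against small stop-word samples.
STOPWORDS = {
    "en": {"the", "is", "a", "for", "of", "and"},
    "es": {"es", "un", "una", "para", "de", "y"},
    "fr": {"le", "est", "un", "pour", "de", "et"},
}

def detect_naive(text):
    words = set(text.lower().split())
    # Pick the language whose stop-word sample overlaps the most.
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

print(detect_naive("Geeksforgeeks is a portal for geeks"))    # en
print(detect_naive("Geeksforgeeks es un portal para geeks"))  # es
```

Real libraries such as langdetect and langid work on character n-gram statistics rather than a handful of stop words, which is why they handle many more languages and short inputs far better.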
How to set flexbox having unequal width of elements using CSS ? - GeeksforGeeks
18 Jul, 2019

The flex property in CSS is the combination of the flex-grow, flex-shrink, and flex-basis properties. It is used to set the length of flexible items. The flex property is responsive and mobile friendly: it makes it easy to position child elements and the main container, margins don't collapse with the content margins, and the order of any element can be easily changed without editing the HTML section. Sometimes the elements need unequal widths, and in that case you can design the whole thing in the CSS section.

Syntax:

flex: number;

Note: In this case, element widths depend on the other elements and the screen size of your window.

Example 1: Here you will see a flexbox designed with two different widths using CSS.

<!DOCTYPE html>
<html>
<head>
    <title>Unequal width of Element | Flexbox</title>
    <style>
        h1 {
            color: green;
        }
        div.flexcontainer {
            display: flex;
            min-height: 200px;
            font-size: 15px;
        }
        div.columns {
            flex: 1;
            padding: 10px;
        }
        div.columns:nth-of-type(even) {
            flex: 2;
        }
        div.columns:nth-of-type(odd) {
            background: #85929E;
            color: white;
        }
        div.columns:nth-of-type(even) {
            background: #A5DDEF;
            color: green;
        }
    </style>
</head>
<body>
    <center>
        <h1>GeeksforGeeks</h1>
        <div class="flexcontainer">
            <div class="columns">This is 1st column</div>
            <div class="columns">This is 2nd column</div>
            <div class="columns">This is 3rd column</div>
            <div class="columns">This is 4th column</div>
        </div>
</body>
</html>

Output:

Example 2: In this example you will see 4 items, each having an unequal width compared to the others.
<!DOCTYPE html>
<html>
<head>
    <title>Unequal width of Element | Flexbox</title>
    <style>
        h1 {
            color: green;
        }
        div.flexcontainer {
            display: flex;
            min-height: 200px;
            font-size: 15px;
            border: 2px solid orange;
        }
        div.columns {
            padding: 10px;
            color: white;
        }
        div.columns:nth-of-type(1) {
            flex: 0.5;
            background: #1B2631;
        }
        div.columns:nth-of-type(2) {
            flex: 1;
            background: #424949;
        }
        div.columns:nth-of-type(3) {
            flex: 2;
            background: #4D5656;
        }
        div.columns:nth-of-type(4) {
            flex: 3;
            background: #626567;
        }
        th, td {
            border: 1px solid white;
        }
    </style>
</head>
<body>
    <center>
        <h1>GeeksforGeeks</h1>
        <div class="flexcontainer">
            <div class="columns">ID</div>
            <div class="columns">Ph_no</div>
            <div class="columns">Name</div>
            <div class="columns">Address</div>
        </div>
</body>
</html>

Output:
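To see how flex: <number> produces unequal widths, the space distribution can be worked out by hand: each column gets (its flex factor / sum of all factors) of the container's free space, and with the `flex: n` shorthand the flex-basis is 0, so all of the width is free space. A small Python sketch, illustrative only and ignoring padding and borders (the 650px container width is an assumed example value):

```python
def flex_widths(container, factors):
    """Distribute a container width in proportion to flex-grow factors
    (flex-basis assumed 0, as with the shorthand `flex: n`)."""
    total = sum(factors)
    return [container * f / total for f in factors]

# Example 2 above uses flex factors 0.5, 1, 2 and 3 on the four columns.
print(flex_widths(650, [0.5, 1, 2, 3]))  # [50.0, 100.0, 200.0, 300.0]
```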
Puppet - Master
In Puppet's client–server architecture, the Puppet master is considered the controlling authority of the entire setup. The Puppet master acts as the server in the setup and controls all the activities on all the nodes.

Any server which needs to act as a Puppet master should have the Puppet server software running. This server software is the key component controlling all the activities on the nodes. One key point to remember in this setup is to have super user access to all the machines that one is going to use. Following are the steps to set up a Puppet master.

Private Network DNS − Forward and backward resolution should be configured, wherein each server has a unique hostname. If one does not have DNS configured, then one can use a private network for communication with the infrastructure.

Firewall Open Port − The Puppet master should be open on a particular port so that it can listen to incoming requests on that port. We can use any port which is open on the firewall.

The Puppet master that we are creating is going to be on a CentOS 7 x64 machine using "puppet" as the hostname. The minimum system configuration for a Puppet master is two CPU cores and 1 GB of memory. The configuration may be bigger depending on the number of nodes this master is going to manage; a bigger infrastructure is configured with 2 GB of RAM or more.

Next, one needs to generate the Puppet master SSL certificate, and the name of the master machine will be copied into the configuration file of all the nodes.

Since the Puppet master is the central authority for agent nodes in any given setup, one of its key responsibilities is to maintain accurate system time, to avoid potential configuration problems which can arise when it issues agent certificates to nodes. If a time conflict arises, certificates can appear expired due to time discrepancies between the master and a node.
The Network Time Protocol (NTP) is one of the key mechanisms to avoid such problems.

$ timedatectl list-timezones

The above command will provide the whole list of available time zones, grouped by region. The following command can be used to set the required time zone on the machine.

$ sudo timedatectl set-timezone India/Delhi

Install NTP on the Puppet server machine using the yum utility of the CentOS machine.

$ sudo yum -y install ntp

Sync NTP with the system time which we have set in the above commands.

$ sudo ntpdate pool.ntp.org

In common practice, we update the NTP configuration to use common pools which are available nearer to the machine's datacenters. For this, we need to edit the ntp.conf file under /etc.

$ sudo vi /etc/ntp.conf

Add the time servers from the available NTP pool time zones. Following is how the ntp.conf file looks.

brcleprod001.brcl.pool.ntp.org
brcleprod002.brcl.pool.ntp.org
brcleprod003.brcl.pool.ntp.org
brcleprod004.brcl.pool.ntp.org

Save the configuration. Start the server and enable the daemon.

$ sudo systemctl restart ntpd
$ sudo systemctl enable ntpd

The Puppet server software runs on the Puppet master machine. It is the machine which pushes configurations to the other machines running the Puppet agent software.

Enable the official Puppet Labs collection repository using the following command.

$ sudo rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-pc1-el7.noarch.rpm

Install the puppetserver package.

$ sudo yum -y install puppetserver

As we have discussed, by default the Puppet server is configured for a 2 GB RAM machine. One can customize the setup according to the free memory available on the machine and how many nodes the server will manage.

Edit the Puppet server configuration in vi.

$ sudo vi /etc/sysconfig/puppetserver

Find JAVA_ARGS and use the -Xms and -Xmx options to set the memory allocation.
We will allocate 3 GB of space.

JAVA_ARGS="-Xms3g -Xmx3g"

Once done, save and exit from the edit mode.

After all the above setup is complete, we are ready to start the Puppet server on the master machine with the following command.

$ sudo systemctl start puppetserver

Next, we will do the setup so that the Puppet server starts whenever the master server boots.

$ sudo systemctl enable puppetserver

[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = sat6.example.com
strict_variables = false
manifest = /etc/puppet/environments/$environment/manifests/site.pp
modulepath = /etc/puppet/environments/$environment/modules
config_version =
Ways to apply an if condition in Pandas DataFrame - GeeksforGeeks
18 Aug, 2020

Generally, on a Pandas DataFrame an if condition can be applied either column-wise, row-wise, or on an individual cell basis. The rest of this article illustrates each of these with examples. First of all, we shall create the following DataFrame:

# importing pandas as pd
import pandas as pd

# create the DataFrame
df = pd.DataFrame({
    'Product': ['Umbrella', 'Matress', 'Badminton',
                'Shuttle', 'Sofa', 'Football'],
    'MRP': [1200, 1500, 1600, 352, 5000, 500],
    'Discount': [0, 10, 0, 10, 20, 40]})

# display the DataFrame
print(df)

Output:

Example 1: if condition on column values:

The if condition can be applied on column values, for instance when someone asks for all the items with MRP <= 2000 and Discount > 0; the following code does that. Similarly, any number of conditions can be applied on any number of attributes of the DataFrame.

# if condition with column conditions given
# the condition is: if MRP of the product <= 2000
# and discount > 0, show me those items
df[(df['MRP'] <= 2000) & (df['Discount'] > 0)]

Output:

Example 2: if condition on row values (tuples):

This can be taken as a special case of the condition on column values. If a tuple (Sofa, 5000, 20) is given, finding it in the DataFrame can be done like:

# if condition with row tuple given
df[(df['Product'] == 'Sofa') & (df['MRP'] == 5000) & (df['Discount'] == 20)]

Output:

Example 3: Using a lambda function:

A lambda function takes an input and returns a result based on a certain condition. It can be used to apply a certain function on each of the elements of a column in a Pandas DataFrame. The below example uses a lambda function to set an upper limit of 20 on the discount value, i.e. if the value of discount > 20 in any cell, it sets it to 20.
# importing pandas as pd
import pandas as pd

# Create the dataframe
df = pd.DataFrame({
    'Product': ['Umbrella', 'Matress', 'Badminton',
                'Shuttle', 'Sofa', 'Football'],
    'MRP': [1200, 1500, 1600, 352, 5000, 500],
    'Discount': [0, 10, 0, 10, 20, 40]})

# Print the dataframe
print(df)

# If condition on column values using a lambda function
df['Discount'] = df['Discount'].apply(lambda x: 20 if x > 20 else x)
print(df)

Output:

Example 4: Using the iloc() or loc() function:

Both iloc() and loc() are used to extract a sub-DataFrame from a DataFrame. The sub-DataFrame can be anything spanning from a single cell to the whole table. iloc() is generally used when we know the index range for the row and column, whereas loc() is used for a label search. The below example shows the use of both of the functions for imparting conditions on the DataFrame. Here the cell with index [2, 1] is taken, which is the Badminton product's MRP.

# If condition on a cell value using the iloc() or loc() functions
# iloc() is based on index search and loc() on label search

# using iloc()
if df.iloc[2, 1] > 1500:
    print("Badminton Price > 1500")
else:
    print("Badminton Price < 1500")

# using loc()
print(df.loc[2, 'MRP'])
if df.loc[2, 'MRP'] > 1500:   # loc, not iloc, since 'MRP' is a label
    print("Badminton Price > 1500")
else:
    print("Badminton Price < 1500")

Output:
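Besides apply with a lambda, the same conditional update as Example 3 can be done in vectorised form. The sketch below shows two equivalent alternatives, numpy.where and Series.clip, assuming the same DataFrame as above:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'Product': ['Umbrella', 'Matress', 'Badminton',
                'Shuttle', 'Sofa', 'Football'],
    'MRP': [1200, 1500, 1600, 352, 5000, 500],
    'Discount': [0, 10, 0, 10, 20, 40]})

# np.where(condition, value_if_true, value_if_false), applied column-wide
df['Capped'] = np.where(df['Discount'] > 20, 20, df['Discount'])
print(df['Capped'].tolist())                       # [0, 10, 0, 10, 20, 20]

# Series.clip is an even shorter equivalent for an upper bound
print(df['Discount'].clip(upper=20).tolist())      # [0, 10, 0, 10, 20, 20]
```

Vectorised operations like these avoid the per-element Python call that apply incurs, which matters on large DataFrames.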
C# | Math.Pow() Method - GeeksforGeeks
31 Jan, 2019

In C#, Math.Pow() is a Math class method. This method is used to calculate a number raised to the power of some other number.

Syntax:

public static double Pow(double base, double power)

Parameters:

double base: A double-precision floating-point number which is to be raised to a power; the type of this parameter is System.Double.
double power: A double-precision floating-point number which specifies the power or exponent; the type of this parameter is System.Double.

Return Type: The function returns the number base raised to the power. The return type of this method is System.Double.

Examples:

Input : base = 8, power = 2
Output : 64

Input : base = 2.5, power = 3
Output : 15.625

Program: To demonstrate the Math.Pow() method.

// C# program to illustrate the
// Math.Pow() function
using System;

class GFG {

    // Main Method
    static public void Main()
    {
        // Find power using Math.Pow
        // 6 is the base and 2 is the power or
        // index or exponent of the number
        double pow_ab = Math.Pow(6, 2);

        // Print the result
        Console.WriteLine(pow_ab);

        // 3.5 is the base and 3 is the power or
        // index or exponent of the number
        double pow_tt = Math.Pow(3.5, 3);

        // Print the result
        Console.WriteLine(pow_tt);

        // 202 is the base and 4 is the power or
        // index or exponent of the number
        double pow_t = Math.Pow(202, 4);

        // Print the result
        Console.WriteLine(pow_t);
    }
}

Output:

36
42.875
1664966416

Reference: https://msdn.microsoft.com/en-us/library/system.math.pow(v=vs.110).aspx
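For comparison, the same three calls behave the same way in other languages. A Python sketch: math.pow, like C#'s Math.Pow, always works on and returns floating-point values, while the ** operator is the native alternative that keeps integer operands integral.

```python
import math

# math.pow always returns a float, mirroring Math.Pow's System.Double result
print(math.pow(6, 2))     # 36.0
print(math.pow(3.5, 3))   # 42.875
print(math.pow(202, 4))   # 1664966416.0

# The ** operator keeps integer results integral
print(6 ** 2)             # 36
```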
Dart Programming - Functions
Functions are the building blocks of readable, maintainable, and reusable code. A function is a set of statements to perform a specific task. Functions organize the program into logical blocks of code. Once defined, functions may be called to access that code, which makes the code reusable. Moreover, functions make it easy to read and maintain the program's code.

A function declaration tells the compiler about a function's name, return type, and parameters. A function definition provides the actual body of the function; it specifies what a specific task is and how it is done. A function must be called in order to execute it. Functions may also return a value, along with control, back to the caller.

Parameters are a mechanism to pass values to functions. Optional parameters can be used when arguments need not be compulsorily passed for a function's execution. A parameter can be marked optional by appending a question mark to its name. Optional parameters should be set as the last arguments in a function. We have three types of optional parameters in Dart −

Optional positional parameters − specified using square [] brackets.
Optional named parameters − specified using curly braces {}; unlike positional parameters, the parameter's name must be specified while the value is being passed.
Optional parameters with default values − function parameters can also be assigned values by default; however, such parameters can also be explicitly passed values.

Recursion is a technique for iterating over an operation by having a function call itself repeatedly until it arrives at a result. Recursion is best applied when you need to call the same function repeatedly with different parameters from within a loop.

void main() {
   print(factorial(6));
}

factorial(number) {
   if (number <= 0) {
      // termination case
      return 1;
   } else {
      return (number * factorial(number - 1));
      // function invokes itself
   }
}

It should produce the following output −

720

Lambda functions are a concise mechanism to represent functions.
These functions are also called arrow functions.

[return_type] function_name(parameters) => expression;

void main() {
   printMsg();
   print(test());
}

printMsg() => print("hello");

int test() => 123;   // returning function

It should produce the following output −

hello
123
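The recursion and arrow-function examples translate almost line for line into other languages. A Python sketch for comparison, where a lambda plays the role of Dart's => arrow body:

```python
def factorial(number):
    # Termination case, then the function invokes itself
    if number <= 0:
        return 1
    return number * factorial(number - 1)

print(factorial(6))  # 720

# Concise single-expression functions, like Dart's arrow functions:
# print_msg mirrors printMsg() and test_fn mirrors test() above.
print_msg = lambda: print("hello")
test_fn = lambda: 123   # returning function

print_msg()
print(test_fn())  # 123
```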
GATE | GATE-CS-2017 (Set 1) | Question 45 - GeeksforGeeks
17 Aug, 2021

Let A be an n×n real-valued square symmetric matrix of rank 2 with

Σ_{i=1..n} Σ_{j=1..n} A_ij^2 = 50.

Consider the following statements.

(i) One eigenvalue must be in [-5, 5].
(ii) The eigenvalue with the largest magnitude must be strictly greater than 5.

Which of the above statements about eigenvalues of A is/are necessarily CORRECT?

(A) Both (i) and (ii)
(B) (i) only
(C) (ii) only
(D) Neither (i) nor (ii)

Answer: (B)

Explanation: Since the rank of A is 2, n − 2 eigenvalues are zero. Let λ1 and λ2 be the two non-zero eigenvalues. We know that

trace(A^T A) = Σ_{i,j} A_ij^2 = 50.   ...(1)

Since A is symmetric, A^T A = A^2, and trace(A^2) = Σ_i λ_i^2, so

λ1^2 + λ2^2 = 50.   ...(2)

If both |λ1| > 5 and |λ2| > 5, then λ1^2 + λ2^2 > 50, contradicting (2). Hence at least one eigenvalue lies in [-5, 5], and statement (i) is necessarily correct. Statement (ii) is not necessarily correct: for example, λ1 = λ2 = 5 satisfies (2), and then the eigenvalue of largest magnitude is exactly 5, not strictly greater than 5.

This solution is contributed by Sumouli Chaudhary.
MapStruct - Mapping List
Using MapStruct we can map a list in a similar fashion to how we map primitives. To get a list of objects, we should provide a mapper method which can map a single object.

@Mapper
public interface CarMapper {
   List<String> getListOfStrings(List<Integer> listOfIntegers);
   List<Car> getCars(List<CarEntity> carEntities);
   Car getModelFromEntity(CarEntity carEntity);
}

The following example demonstrates the same. Open the project mapping as updated in the Mapping Using defaultExpression chapter in Eclipse.

Update CarEntity.java with the following code −

CarEntity.java

package com.tutorialspoint.entity;

import java.util.GregorianCalendar;

public class CarEntity {
   private int id;
   private double price;
   private GregorianCalendar manufacturingDate;
   private String name;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public double getPrice() { return price; }
   public void setPrice(double price) { this.price = price; }
   public GregorianCalendar getManufacturingDate() { return manufacturingDate; }
   public void setManufacturingDate(GregorianCalendar manufacturingDate) {
      this.manufacturingDate = manufacturingDate;
   }
   public String getName() { return name; }
   public void setName(String name) { this.name = name; }
}

Update Car.java with the following code −

Car.java

package com.tutorialspoint.model;

public class Car {
   private int id;
   private String price;
   private String manufacturingDate;
   private String brand;
   private String name;

   public int getId() { return id; }
   public void setId(int id) { this.id = id; }
   public String getPrice() { return price; }
   public void setPrice(String price) { this.price = price; }
   public String getManufacturingDate() { return manufacturingDate; }
   public void setManufacturingDate(String manufacturingDate) {
      this.manufacturingDate = manufacturingDate;
   }
   public String getBrand() { return brand; }
   public void setBrand(String brand) { this.brand = brand; }
   public String getName() { return name; }
   public void setName(String name) { this.name = name; }
}

Update
CarMapper.java with the following code −

CarMapper.java

package com.tutorialspoint.mapper;

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import com.tutorialspoint.entity.CarEntity;
import com.tutorialspoint.model.Car;
import java.util.List;
import java.util.UUID;

@Mapper( imports = UUID.class )
public interface CarMapper {
   @Mapping(source = "name", target = "name",
      defaultExpression = "java(UUID.randomUUID().toString())")
   @Mapping(target = "brand", constant = "BMW")
   @Mapping(source = "price", target = "price", numberFormat = "$#.00")
   @Mapping(source = "manufacturingDate", target = "manufacturingDate",
      dateFormat = "dd.MM.yyyy")
   Car getModelFromEntity(CarEntity carEntity);

   List<String> getListOfStrings(List<Integer> listOfIntegers);
   List<Car> getCars(List<CarEntity> carEntities);
}

Update CarMapperTest.java with the following code −

CarMapperTest.java

package com.tutorialspoint.mapping;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.util.Arrays;
import java.util.GregorianCalendar;
import java.util.List;

import org.junit.jupiter.api.Test;
import org.mapstruct.factory.Mappers;

import com.tutorialspoint.entity.CarEntity;
import com.tutorialspoint.mapper.CarMapper;
import com.tutorialspoint.model.Car;

public class CarMapperTest {
   private CarMapper carMapper = Mappers.getMapper(CarMapper.class);

   @Test
   public void testEntityToModel() {
      CarEntity entity = new CarEntity();
      entity.setPrice(345000);
      entity.setId(1);
      entity.setManufacturingDate(new GregorianCalendar(2015, 3, 5));

      CarEntity entity1 = new CarEntity();
      entity1.setPrice(445000);
      entity1.setId(2);
      entity1.setManufacturingDate(new GregorianCalendar(2015, 3, 5));

      List<CarEntity> carEntities = Arrays.asList(entity, entity1);

      Car model = carMapper.getModelFromEntity(entity);
      assertEquals("$345000.00", model.getPrice());
      assertEquals(entity.getId(), model.getId());
      assertEquals("BMW", model.getBrand());
      assertEquals("05.04.2015", model.getManufacturingDate());

      List<Integer> list = Arrays.asList(1, 2, 3);
      List<String> listOfStrings = carMapper.getListOfStrings(list);
      List<Car> listOfCars = carMapper.getCars(carEntities);
      assertEquals(3, listOfStrings.size());
      assertEquals(2, listOfCars.size());
   }
}

Run the following command to test the mappings.

mvn clean test

Once the command is successful, verify the output.

mvn clean test
[INFO] Scanning for projects...
...
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ mapping ---
[INFO] Surefire report directory: \mvn\mapping\target\surefire-reports
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.tutorialspoint.mapping.CarMapperTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec
Running com.tutorialspoint.mapping.DeliveryAddressMapperTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running com.tutorialspoint.mapping.StudentMapperTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec

Results :

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
...
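The key idea — give the mapper a per-element method and the list mapping follows — can be mimicked in plain Python. This is only an illustrative analogue of what the generated MapStruct code does internally, using hypothetical entity/model dicts rather than the Java classes above:

```python
def entity_to_model(entity):
    """Map one (hypothetical) car entity dict to a model dict."""
    return {
        "id": entity["id"],
        "price": "$%.2f" % entity["price"],  # like numberFormat = "$#.00"
        "brand": "BMW",                      # like the constant mapping
    }

def entities_to_models(entities):
    # List mapping is just the element mapper applied to each item,
    # which is essentially what the generated code does in a loop.
    return [entity_to_model(e) for e in entities]

models = entities_to_models([{"id": 1, "price": 345000},
                             {"id": 2, "price": 445000}])
print(models[0]["price"])  # $345000.00
```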
Check if the given string of words can be formed from words present in the dictionary
11 Jul, 2022

Given a string array of M words and a dictionary of N words, the task is to check if the given string of words can be formed from words present in the dictionary.

Examples:

dict[] = { find, a, geeks, all, for, on, geeks, answers, inter }

Input: str[] = { "find", "all", "answers", "on", "geeks", "for", "geeks" }
Output: YES
All words of str[] are present in the dictionary, so the output is YES.

Input: str[] = { "find", "a", "geek" }
Output: NO
In str[], "find" and "a" are present in the dictionary, but "geek" is not, so the output is NO.
Below is the illustration of the above approach: C++ Python3 // C++ program to check if a sentence// can be formed from a given set of words.#include <bits/stdc++.h>using namespace std;const int ALPHABET_SIZE = 26; // here isEnd is an integer that will store// count of words ending at that nodestruct trieNode { trieNode* t[ALPHABET_SIZE]; int isEnd;}; // utility function to create a new nodetrieNode* getNode(){ trieNode* temp = new (trieNode); // Initialize new node with null for (int i = 0; i < ALPHABET_SIZE; i++) temp->t[i] = NULL; temp->isEnd = 0; return temp;} // Function to insert new words in trievoid insert(trieNode* root, string key){ trieNode* trail; trail = root; // Iterate for the length of a word for (int i = 0; i < key.length(); i++) { // If the next key does not contains the character if (trail->t[key[i] - 'a'] == NULL) { trieNode* temp; temp = getNode(); trail->t[key[i] - 'a'] = temp; } trail = trail->t[key[i] - 'a']; } // isEnd is increment so not only the word but its count is also stored (trail->isEnd)++;}// Search function to find a word of a sentencebool search_mod(trieNode* root, string word){ trieNode* trail; trail = root; // Iterate for the complete length of the word for (int i = 0; i < word.length(); i++) { // If the character is not present then word // is also not present if (trail->t[word[i] - 'a'] == NULL) return false; // If present move to next character in Trie trail = trail->t[word[i] - 'a']; } // If word foundthen decrement count of the word if ((trail->isEnd) > 0 && trail != NULL) { // if the word is found decrement isEnd showing one // occurrence of this word is already taken so (trail->isEnd)--; return true; } else return false;}// Function to check if string can be// formed from the sentencevoid checkPossibility(string sentence[], int m, trieNode* root){ int flag = 1; // Iterate for all words in the string for (int i = 0; i < m; i++) { if (search_mod(root, sentence[i]) == false) { // if a word is not found in a string then the 
// sentence cannot be made from this dictionary of words cout << "NO"; return; } } // If possible cout << "YES";} // Function to insert all the words of dictionary in the Trievoid insertToTrie(string dictionary[], int n, trieNode* root){ for (int i = 0; i < n; i++) insert(root, dictionary[i]);} // Driver Codeint main(){ trieNode* root; root = getNode(); // Dictionary of words string dictionary[] = { "find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter" }; int N = sizeof(dictionary) / sizeof(dictionary[0]); // Calling Function to insert words of dictionary to tree insertToTrie(dictionary, N, root); // String to be checked string sentence[] = { "find", "all", "answers", "on", "geeks", "for", "geeks" }; int M = sizeof(sentence) / sizeof(sentence[0]); // Function call to check possibility checkPossibility(sentence, M, root); return 0;} # Python3 program to check if a sentence# can be formed from a given set of words.#include <bits/stdc++.h> ALPHABET_SIZE = 26; # here isEnd is an integer that will store# count of words ending at that nodeclass trieNode: def __init__(self): self.t = [None for i in range(ALPHABET_SIZE)] self.isEnd = 0 # utility function to create a new nodedef getNode(): temp = trieNode() return temp; # Function to insert new words in triedef insert(root, key): trail = None trail = root; # Iterate for the length of a word for i in range(len(key)): # If the next key does not contains the character if (trail.t[ord(key[i]) - ord('a')] == None): temp = None temp = getNode(); trail.t[ord(key[i]) - ord('a')] = temp; trail = trail.t[ord(key[i]) - ord('a')]; # isEnd is increment so not only # the word but its count is also stored (trail.isEnd) += 1 # Search function to find a word of a sentencedef search_mod(root, word): trail = root; # Iterate for the complete length of the word for i in range(len(word)): # If the character is not present then word # is also not present if (trail.t[ord(word[i]) - ord('a')] == None): return False; # If present 
move to next character in Trie trail = trail.t[ord(word[i]) - ord('a')]; # If word found then decrement count of the word if ((trail.isEnd) > 0 and trail != None): # if the word is found decrement isEnd showing one # occurrence of this word is already taken so (trail.isEnd) -= 1 return True; else: return False; # Function to check if string can be# formed from the sentencedef checkPossibility(sentence, m, root): flag = 1; # Iterate for all words in the string for i in range(m): if (search_mod(root, sentence[i]) == False): # if a word is not found in a string then the # sentence cannot be made from this dictionary of words print('NO', end='') return; # If possible print('YES') # Function to insert all the words of dict in the Triedef insertToTrie(dictionary, n, root): for i in range(n): insert(root, dictionary[i]); # Driver Codeif __name__=='__main__': root = getNode(); # Dictionary of words dictionary = [ "find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter" ] N = len(dictionary) # Calling Function to insert words of dictionary to tree insertToTrie(dictionary, N, root); # String to be checked sentence = [ "find", "all", "answers", "on", "geeks", "for", "geeks" ] M = len(sentence) # Function call to check possibility checkPossibility(sentence, M, root); # This code is contributed by pratham76 YES An efficient approach will be to use map. Keep the count of words in the map, iterate in the string and check if the word is present in the map. If present, then decrease the count of the word in the map. If it is not present, then it is not possible to make the given string from the given dictionary of words. 
Below is the implementation of above approach : C++ Java Python3 C# Javascript // C++ program to check if a sentence// can be formed from a given set of words.#include <bits/stdc++.h>using namespace std; // Function to check if the word// is in the dictionary or notbool match_words(string dictionary[], string sentence[], int n, int m){ // map to store all words in // dictionary with their count unordered_map<string, int> mp; // adding all words in map for (int i = 0; i < n; i++) { mp[dictionary[i]]++; } // search in map for all // words in the sentence for (int i = 0; i < m; i++) { if (mp[sentence[i]]) mp[sentence[i]] -= 1; else return false; } // all words of sentence are present return true;} // Driver Codeint main(){ string dictionary[] = { "find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter" }; int n = sizeof(dictionary) / sizeof(dictionary[0]); string sentence[] = { "find", "all", "answers", "on", "geeks", "for", "geeks" }; int m = sizeof(sentence) / sizeof(sentence[0]); // Calling function to check if words are // present in the dictionary or not if (match_words(dictionary, sentence, n, m)) cout << "YES"; else cout << "NO"; return 0;} // Java program to check if a sentence// can be formed from a given set of words.import java.util.*; class GFG{ // Function to check if the word// is in the dictionary or notstatic boolean match_words(String dictionary[], String sentence[], int n, int m){ // map to store all words in // dictionary with their count Map<String,Integer> mp = new HashMap<>(); // adding all words in map for (int i = 0; i < n; i++) { if(mp.containsKey(dictionary[i])) { mp.put(dictionary[i], mp.get(dictionary[i])+1); } else { mp.put(dictionary[i], 1); } } // search in map for all // words in the sentence for (int i = 0; i < m; i++) { if (mp.containsKey(sentence[i])) mp.put(sentence[i],mp.get(sentence[i])-1); else return false; } // all words of sentence are present return true;} // Driver Codepublic static void main(String[] args){ 
String dictionary[] = { "find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter" }; int n = dictionary.length; String sentence[] = { "find", "all", "answers", "on", "geeks", "for", "geeks" }; int m = sentence.length; // Calling function to check if words are // present in the dictionary or not if (match_words(dictionary, sentence, n, m)) System.out.println("YES"); else System.out.println("NO"); }} // This code is contributed by Princi Singh # Python3 program to check if a sentence# can be formed from a given set of words. # Function to check if the word# is in the dictionary or notdef match_words(dictionary, sentence, n, m): # map to store all words in # dictionary with their count mp = dict() # adding all words in map for i in range(n): mp[dictionary[i]] = mp.get(dictionary[i], 0) + 1 # search in map for all # words in the sentence for i in range(m): if (mp[sentence[i]]): mp[sentence[i]] -= 1 else: return False # all words of sentence are present return True # Driver Codedictionary = ["find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter"] n = len(dictionary) sentence = ["find", "all", "answers", "on", "geeks", "for", "geeks"] m = len(sentence) # Calling function to check if words are# present in the dictionary or notif (match_words(dictionary, sentence, n, m)): print("YES")else: print("NO") # This code is contributed by mohit kumar // C# program to check if a sentence// can be formed from a given set of words.using System;using System.Collections.Generic; class GFG{ // Function to check if the word// is in the dictionary or notstatic Boolean match_words(String []dictionary, String []sentence, int n, int m){ // map to store all words in // dictionary with their count Dictionary<String, int> mp = new Dictionary<String, int>(); // adding all words in map for (int i = 0; i < n; i++) { if(mp.ContainsKey(dictionary[i])) { mp[dictionary[i]] = mp[dictionary[i]] + 1; } else { mp.Add(dictionary[i], 1); } } // search in map for all // words in 
the sentence for (int i = 0; i < m; i++) { if (mp.ContainsKey(sentence[i]) && mp[sentence[i]] > 0) mp[sentence[i]] = mp[sentence[i]] - 1; else return false; } // all words of sentence are present return true;} // Driver Codepublic static void Main(String[] args){ String []dictionary = { "find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter" }; int n = dictionary.Length; String []sentence = { "find", "all", "answers", "on", "geeks", "for", "geeks", "geeks" }; int m = sentence.Length; // Calling function to check if words are // present in the dictionary or not if (match_words(dictionary, sentence, n, m)) Console.WriteLine("YES"); else Console.WriteLine("NO");}} // This code is contributed by Rajput-Ji <script> // Javascript program to check if a sentence// can be formed from a given set of words. // Function to check if the word// is in the dictionary or not function match_words(dictionary, sentence, n, m){ // map to store all words in // dictionary with their count let mp = new Map(); // Adding all words in map for(let i = 0; i < n; i++) { if(mp.has(dictionary[i])) { mp.set(dictionary[i], mp.get(dictionary[i]) + 1); } else { mp.set(dictionary[i], 1); } } // Search in map for all // words in the sentence for(let i = 0; i < m; i++) { if (mp.has(sentence[i])) mp.set(sentence[i], mp.get(sentence[i]) - 1); else return false; } // All words of sentence are present return true;} // Driver codelet dictionary = [ "find", "a", "geeks", "all", "for", "on", "geeks", "answers", "inter" ]; let n = dictionary.length; let sentence = [ "find", "all", "answers", "on", "geeks", "for", "geeks" ]; let m = sentence.length; // Calling function to check if words are// present in the dictionary or notif (match_words(dictionary, sentence, n, m)) document.write("YES");else document.write("NO"); // This code is contributed by patel2127 </script> YES Time Complexity: O(M)Space Complexity: O(N) where N is no of words in a dictionary mohit kumar 29 princi singh Rajput-Ji pratham76 
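The map-based approach shown in the implementations above can be sketched compactly in Python with collections.Counter, which plays the same role as the unordered_map/HashMap used there. The function name below is illustrative, not from the article:

```python
# Compact sketch of the hash-count idea: count every dictionary word,
# then consume one occurrence per word of the sentence.
from collections import Counter

def can_form_sentence(dictionary, sentence):
    available = Counter(dictionary)   # word -> remaining count
    for word in sentence:
        if available[word] <= 0:      # word missing or already used up
            return False
        available[word] -= 1          # consume one occurrence
    return True

dictionary = ["find", "a", "geeks", "all", "for", "on",
              "geeks", "answers", "inter"]
sentence = ["find", "all", "answers", "on", "geeks", "for", "geeks"]
print("YES" if can_form_sentence(dictionary, sentence) else "NO")  # YES
```

As in the other implementations, this runs in O(M) time over the sentence after building the O(N) counter; Counter returns 0 (not a KeyError) for unseen words, which is why the missing-word and exhausted-word cases collapse into one check.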
Bresenham’s Algorithm for 3-D Line Drawing
15 Jul, 2018 Given two 3-D co-ordinates we need to find the points on the line joining them. All points have integer co-ordinates. Examples: Input : (-1, 1, 1), (5, 3, -1) Output : (-1, 1, 1), (0, 1, 1), (1, 2, 0), (2, 2, 0), (3, 2, 0), (4, 3, -1), (5, 3, -1) Input : (-7, 0, -3), (2, -5, -1) Output : (-7, 0, -3), (-6, -1, -3), (-5, -1, -3), (-4, -2, -2), (-3, -2, -2), (-2, -3, -2), (-1, -3, -2), (0, -4, -1), (1, -4, -1), (2, -5, -1) Bresenham’s Algorithm is efficient as it avoids floating point arithmetic operations. As in the case of 2-D Line Drawing, we use a variable to store the slope-error i.e. the error in slope of the line being plotted from the actual geometric line. As soon as this slope-error exceeds the permissible value we modify the digital to negate the error. The driving axis of the line to be plotted is the one along which the line travels the farthest i.e. the difference in axes co-ordinates is greatest. Thus the co-ordinate values increase linearly by 1 along the driving axis and the slope-error variable is used to determine the change in the co-ordinate values of the other axis. In case of a 2-D line we use one slope-error variable but in case of a 3-D line we need two () of them for each of the non-driving axes. If current point is (x, y, z) and the driving axis is the positive X-axis, then the next point could be (x+1, y, z) (x+1, y+1, z) (x+1, y, z+1) (x+1, y+1, z+1) The value of slope-error variables are determined according to the following equations:- The initial value of slope-error variables are given by the following equations:-Here denote the difference in co-ordinates of the two end points along the X, Y, Z axes. 
Algorithm:- Input the two endpoints and store the initial point as Plot Calculate constants and determine the driving axis by comparingthe absolute values of If abs() is maximum, then X-axis is the driving axisIf abs() is maximum, then Y-axis is the driving axisIf abs() is maximum, then Z-axis is the driving axisLet’s suppose that X-axis is the driving axis, thenAt each along the line, starting at k = 0, check the following conditionsand determine the next point:-If AND , thenplot andset Else If AND , thenplot andset Else If , thenplot andset Else thenplot andset >Repeat step 5 times Input the two endpoints and store the initial point as Plot Calculate constants and determine the driving axis by comparingthe absolute values of If abs() is maximum, then X-axis is the driving axisIf abs() is maximum, then Y-axis is the driving axisIf abs() is maximum, then Z-axis is the driving axis Let’s suppose that X-axis is the driving axis, then At each along the line, starting at k = 0, check the following conditionsand determine the next point:-If AND , thenplot andset Else If AND , thenplot andset Else If , thenplot andset Else thenplot andset > If AND , thenplot andset Else If AND , thenplot andset Else If , thenplot andset Else thenplot andset > Repeat step 5 times Python3 # Python3 code for generating points on a 3-D line # using Bresenham's Algorithm def Bresenham3D(x1, y1, z1, x2, y2, z2): ListOfPoints = [] ListOfPoints.append((x1, y1, z1)) dx = abs(x2 - x1) dy = abs(y2 - y1) dz = abs(z2 - z1) if (x2 > x1): xs = 1 else: xs = -1 if (y2 > y1): ys = 1 else: ys = -1 if (z2 > z1): zs = 1 else: zs = -1 # Driving axis is X-axis" if (dx >= dy and dx >= dz): p1 = 2 * dy - dx p2 = 2 * dz - dx while (x1 != x2): x1 += xs if (p1 >= 0): y1 += ys p1 -= 2 * dx if (p2 >= 0): z1 += zs p2 -= 2 * dx p1 += 2 * dy p2 += 2 * dz ListOfPoints.append((x1, y1, z1)) # Driving axis is Y-axis" elif (dy >= dx and dy >= dz): p1 = 2 * dx - dy p2 = 2 * dz - dy while (y1 != y2): y1 += ys if (p1 >= 0): x1 
+= xs p1 -= 2 * dy if (p2 >= 0): z1 += zs p2 -= 2 * dy p1 += 2 * dx p2 += 2 * dz ListOfPoints.append((x1, y1, z1)) # Driving axis is Z-axis" else: p1 = 2 * dy - dz p2 = 2 * dx - dz while (z1 != z2): z1 += zs if (p1 >= 0): y1 += ys p1 -= 2 * dz if (p2 >= 0): x1 += xs p2 -= 2 * dz p1 += 2 * dy p2 += 2 * dx ListOfPoints.append((x1, y1, z1)) return ListOfPoints def main(): (x1, y1, z1) = (-1, 1, 1) (x2, y2, z2) = (5, 3, -1) ListOfPoints = Bresenham3D(x1, y1, z1, x2, y2, z2) print(ListOfPoints) main() [(-1, 1, 1), (0, 1, 1), (1, 2, 0), (2, 2, 0), (3, 2, 0), (4, 3, -1), (5, 3, -1)] computer-graphics Geometric Python Programs Geometric Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
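As a quick sanity check of the algorithm's setup step, the driving-axis choice and the two initial decision variables for the first sample input, endpoints (-1, 1, 1) and (5, 3, -1), work out as follows (this is only a worked check of the initialization, not the full routine; variable names mirror the code above):

```python
# Setup step of 3-D Bresenham for endpoints (-1, 1, 1) and (5, 3, -1).
(x1, y1, z1), (x2, y2, z2) = (-1, 1, 1), (5, 3, -1)

dx, dy, dz = abs(x2 - x1), abs(y2 - y1), abs(z2 - z1)   # 6, 2, 2

# The axis with the largest absolute delta drives the loop.
driving = "X" if dx >= dy and dx >= dz else ("Y" if dy >= dz else "Z")

# Initial decision variables for the X-driven case.
p1 = 2 * dy - dx    # controls when to step along y
p2 = 2 * dz - dx    # controls when to step along z

print(driving, p1, p2)   # X -2 -2
```

Both decision variables start negative, so the first step from (-1, 1, 1) moves only along x to (0, 1, 1), matching the first two points in the output above.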
Non-Access Modifiers in Java
01 Dec, 2021 Modifiers are specific keywords present in Java using that we can make changes to the characteristics of a variable, method, or class and limit its scope. Java programming language has a rich set of Modifiers. Modifiers in Java are divided into two types – Access Modifiers and Non-Access modifiers. Access Modifiers in Java help restrict the scope of a variable, method, class, or constructor. Public, Private, Protected, and Default these four access modifiers are present in Java. Non-access modifiers provide information about the characteristics of a class, method, or variable to the JVM. Seven types of Non-Access modifiers are present in Java. They are – staticfinalabstractsynchronizedvolatiletransientnative static final abstract synchronized volatile transient native The static keyword means that the entity to which it is applied is available outside any particular instance of the class. That means the static methods or the attributes are a part of the class and not an object. The memory is allocated to such an attribute or method at the time of class loading. The use of a static modifier makes the program more efficient by saving memory. A static field exists across all the class instances, and without creating an object of the class, they can be called. Example 1: Java import java.io.*; // static variableclass static_gfg { static String s = "GeeksforGeeks"; }class GFG { public static void main(String[] args) { // No object required System.out.println( static_gfg.s); }} GeeksforGeeks In this above code sample, we have declared the String as static, part of the static_gfg class. Generally, to access the string, we first need to create the object of the static_gfg class, but as we have declared it as static, we do not need to create an object of static_gfg class to access the string. We can use className.variableName for accessing it. 
Example 2: Java import java.io.*; class static_gfg { // static variable static int count = 0; void myMethod() { count++; System.out.println(count); }}class GFG { public static void main(String[] args) { // first object creation static_gfg obj1 = new static_gfg(); // method calling of first object obj1.myMethod(); // second object creation static_gfg obj2 = new static_gfg(); // method calling of second object obj2.myMethod(); }} 1 2 In the above code, the count variable is static, so it is not tied to a specific instance of the class. So, while obj1.myMethod() is called it increases the value of count by 1 and then obj2.myMethod() again increases it . If it was not a static one, then we will get output as 1 in both cases, but as it is a static variable so that count variable will be increased twice, and we will get 2 as an output the second time. The final keyword indicates that the specific class cannot be extended or a method cannot be overridden. Let’s understand that with an example – Example 1: Java import java.io.*; class final_gfg { String s1 = "geek1";}class extended_gfg extends final_gfg { String s2 = "geek2";}class GFG { public static void main(String[] args) { // creating object extended_gfg obj = new extended_gfg(); System.out.println(obj.s1); System.out.println(obj.s2); }} geek1 geek2 In this above code, the final_gfg class is extended by the extended_gfg class, and the code is working fine and producing output. But after using the final keyword with the final_gfg class. The code will produce an error. 
Below is the implementation for the same – Java import java.io.*; // This class is finalfinal class final_gfg { String s1 = "geek1";}// We are trying to inherit a finalclass extended_gfg extends final_gfg { String s2 = "geek2";}class GFG { public static void main(String[] args) { // creating object extended_gfg obj = new extended_gfg(); System.out.println(obj.s1); System.out.println(obj.s2); }} Error : Screenshot of Error Here we are getting errors in the compilation as we are trying to extend the final_gfg class, which is declared as final. If a class is declared as final, then we cannot extend it or inherit from that class. Example 2: Java import java.io.*; class final_gfg{ void myMethod(){ System.out.println("GeeksforGeeks"); }}class override_final_gfg extends final_gfg{ void myMethod(){ System.out.println("Overrides GeeksforGeeks"); }} class GFG{ public static void main(String[] args) { override_final_gfg obj=new override_final_gfg(); obj.myMethod(); }} Overrides GeeksforGeeks In the above code, we are overriding myMethod(), and the code is working fine. Now we are going to declare the myMethod() in superclass as final. Below is the implementation for the same – Java import java.io.*; class final_gfg{ final void myMethod(){ System.out.println("GeeksforGeeks"); }}class override_final_gfg extends final_gfg{ // trying to override the method available on final_gfg class void myMethod(){ System.out.println("Overrides GeeksforGeeks"); }}class GFG{ public static void main(String[] args) { override_final_gfg obj=new override_final_gfg(); obj.myMethod(); }} Error: Screenshot of Error The above code is producing an error because here, we are trying to override a method that is declared as final. myMethod() in the final_gfg class is declared as final, and we are trying to override that from the override_final_gfg class. A final method cannot be overridden; thus, the code snippet is producing an error here. 
abstract keyword is used to declare a class as partially implemented means an object cannot be created directly from that class. Any subclass needs to be either implement all the methods of the abstract class, or it should also need to be an abstract class. The abstract keyword cannot be used with static, final, or private keywords because they prevent overriding, and we need to override methods in the case of an abstract class. Java // abstract classabstract class abstract_gfg{ abstract void myMethod();} //extending abstract classclass MyClass extends abstract_gfg{ // overriding abstract method otherwise // code will produce error void myMethod(){ System.out.println("GeeksforGeeks"); }}class GFG{ public static void main(String[] args) { MyClass obj=new MyClass(); obj.myMethod(); }} GeeksforGeeks In the above code, abstract_gfg is an abstract class, and myMethod() is an abstract method. So, we first need to extend the abstract_gfg class that we have done here using MyClass. After extending, we also need to override the abstract method otherwise, the code will produce errors. synchronized keyword prevents a block of code from executing by multiple threads at once. It is very important for some critical operations. Let us understand by an example – Java import java.io.*; class Counter{ int count; void increment(){ count++; }}class GFG{ public static void main(String[] args) throws InterruptedException { Counter c=new Counter(); // Thread 1 Thread t1=new Thread(new Runnable() { @Override public void run() { for(int i=1;i<=10000;i++){ c.increment(); } } }); // Thread 2 Thread t2=new Thread(new Runnable() { @Override public void run() { for(int i=1;i<=10000;i++){ c.increment(); } } }); t1.start(); t2.start(); t1.join(); t2.join(); System.out.println(c.count); }} Output The above code should be an output value of 20000 as two threads increment it 10000 times each, and the main is waiting for Thread1, Thread2 to finish. Sometimes it may not be true. 
Depending upon the system, it may not give 20000 as output. As both threads are accessing the value of count, it may happen that Thread1 fetches the value of count, and before it could increment it, the Thread2 reads the value and increments that. So thus, the result may be less than 20000. To solve this issue, we use the synchronized keyword. If the synchronized keyword is used while declaring the increment() method, then a thread needs to wait for another thread to complete the operation of the method then only another one can work on it. So we can get guaranteed output of 20000. Below is the synchronized code: Java import java.io.*; class Counter{ int count; synchronized void increment(){ count++; }}class GFG{ public static void main(String[] args) throws InterruptedException { Counter c=new Counter(); // Thread 1 Thread t1=new Thread(new Runnable() { @Override public void run() { for(int i=1;i<=100000;i++){ c.increment(); } } }); // Thread 2 Thread t2=new Thread(new Runnable() { @Override public void run() { for(int i=1;i<=100000;i++){ c.increment(); } } }); t1.start(); t2.start(); t1.join(); t2.join(); System.out.println(c.count); }} 200000 The volatile keyword is used to make the class thread-safe. That means if a variable is declared as volatile, then that can be modified by multiple threads at the same time without any issues. The volatile keyword is only applicable to a variable. A volatile keyword reduces the chance of memory inconsistency. The value of a volatile variable is always read from the main memory and not from the local thread cache, and it helps to improve thread performance. 
Let us understand by an example: Java import java.io.*;import java.util.*; class Geeks extends Thread{ boolean running=true; @Override public void run(){ while(running){ System.out.println("GeeksforGeeks"); } } public void shutDown(){ running=false; }}class GFG{ public static void main(String[] args) { Geeks obj = new Geeks(); obj.start(); Scanner input = new Scanner(System.in); input.nextLine(); obj.shutDown(); }} Output In the above code, the program should ideally stop if Return Key/Enter is pressed, but in some machines, it may happen that the variable running is cached, and we are not able to change its value using the shutDown() method. In such a case, the program will execute infinite times and cannot be exited properly. To avoid caching and make it Thread-safe, we can use volatile keywords while declaring the running variable. Java import java.io.*;import java.util.*; class Geeks extends Thread{ volatile boolean running=true; @Override public void run(){ while(running){ System.out.println("GeeksforGeeks"); } } public void shutDown(){ running=false; }} class GFG{ public static void main(String[] args) { Geeks obj = new Geeks(); obj.start(); Scanner input = new Scanner(System.in); input.nextLine(); obj.shutDown(); }} Output In the above code, after using the volatile keyword, we can stop the infinite loop using the Return key, and the program exited properly with exit code 0. This needs prior knowledge of serialization in Java. You can refer to the following article for that:- serialization in java. The transient keyword may be applied to member variables of a class to indicate that the member variable should not be serialized when the containing class instance is serialized. Serialization is the ​process of converting an object into a byte stream. When we do not want to serialize the value of a variable, then we declare it as transient. To make it more transparent, let’s take an example of an application where we need to accept UserID and Password. 
At that moment, we need to declare some variable to take the input and store the data, but as the data is susceptible, so we do not want to keep it stored after the job is done. To achieve this, we can use the transient keyword for variable declaration. That particular variable will not participate in the serialization process, and when we deserialize that, we will receive the default value of the variable. Let’s see a sample code for the same – Java import java.io.*; class transient_gfg implements Serializable { // normal variable int a = 10; // Transient variables transient String UserID="admin"; transient String Password="tiger123"; }class GFG{ public static void main(String[] args) throws IOException, ClassNotFoundException { transient_gfg obj=new transient_gfg(); // printing the value of transient // variable before serialization process System.out.println("UserID :"+obj.UserID); System.out.println("Password: "+obj.Password); System.out.println("a = "+obj.a); // serialization FileOutputStream fos = new FileOutputStream("abc.txt"); ObjectOutputStream oos = new ObjectOutputStream(fos); oos.writeObject(obj); // de-serialization FileInputStream fis = new FileInputStream("abc.txt"); ObjectInputStream ois = new ObjectInputStream(fis); transient_gfg output = (transient_gfg)ois.readObject(); // printing the value of transient // variable after de-serialization process System.out.println("UserID :"+output.UserID); System.out.println("Password: "+output.Password); System.out.println("a = "+obj.a); }} Output As you see from the output, after serialization, the values of UserID and Password are no longer present. However, the value of ‘a’, which is a normal variable, is still present. The native keyword may be applied to a method to indicate that the method is implemented in a language other than Java. Using this java application can call code written in C, C++, or assembler language. A shared code library or DLL is required in this case. 
Let’s see an example first – Java import java.io.*; class GFG{ // native method public native void printMethod (); static { // The name of DLL file System.loadLibrary ("LibraryName"); } public static void main (String[] args) { GFG obj = new GFG (); obj.printMethod (); }} Output: In the above code, we have a native method. The method is defined in any other language, loaded by a java application using the shared DLL file. Implementation of the DLL file is out of scope for this article, so if you want to know more about it, you can refer to this article – Multi-Language Programming – Java Process Class, JNI, and IO. Java-Modifier Picked Java Java Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Stream In Java Introduction to Java Constructors in Java Exceptions in Java Generics in Java Functional Interfaces in Java Java Programming Examples Strings in Java Differences between JDK, JRE and JVM Abstraction in Java
Problem of 8 Neighbours of an element in a 2-D Matrix
08 Jul, 2022 Given a 2-D Matrix and an integer ‘K’, the task is to predict the matrix after ‘K’ iterations given as follows: An element 1 in the current matrix remains 1 in the next iteration only if it is surrounded by A number of 1s, where 0 <= range1a <= A <= range1b. An element 0 in the current matrix becomes 1 in the next iteration only if it is surrounded by B numbers of 1s, where 0 <= range0a <= B <= range0b. Let’s understand this with an example: Constraints: 1 <= K <= 100000 0 <= range1a, range1b, range0a, range0b <= 8 In the above image for cell(0, 0), the cell was ‘0’ in the first iteration but, since it was surrounded by only one adjacent cell containing ‘1’, which does not fall within the range [range0a, range0b]. So it will continue to remain ‘0’. For the second iteration, cell (0, 0) was 0, but this time it is surrounded by two cells containing ‘1’, and two falls within the range [range0a, range0b]. Therefore, it becomes ‘1’ in the next (2nd) iteration. Examples: Input: range1a = 2 range1b = 2 range0a = 2 range0b = 3 K = 1Output: 0 1 1 0 0 1 1 1 1 0 0 1 0 0 1 0Input: range1a = 2 range1b = 2 range0a = 2 range0b = 3 K = 2Output: 1 0 0 1 1 0 0 0 0 0 0 0 0 1 0 1 Below is the implementation of the above approach: C++ Java Python 3 C# PHP Javascript // C++ implementation of the approach#include <iostream>using namespace std; // Dimension of Array#define N 4 void predictMatrix(int arr[N][N], int range1a, int range1b, int range0a, int range0b, int K, int b[N][N]){ // Count of 1s int c = 0; while (K--) { for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { c = 0; // Counting all neighbouring 1s if (i > 0 && arr[i - 1][j] == 1) c++; if (j > 0 && arr[i][j - 1] == 1) c++; if (i > 0 && j > 0 && arr[i - 1][j - 1] == 1) c++; if (i < N - 1 && arr[i + 1][j] == 1) c++; if (j < N - 1 && arr[i][j + 1] == 1) c++; if (i < N - 1 && j < N - 1 && arr[i + 1][j + 1] == 1) c++; if (i < N - 1 && j > 0 && arr[i + 1][j - 1] == 1) c++; if (i > 0 && j < N - 1 && arr[i - 1][j 
+ 1] == 1) c++; // Comparing the number of // neighbouring 1s with // given ranges if (arr[i][j] == 1) { if (c >= range1a && c <= range1b) b[i][j] = 1; else b[i][j] = 0; } if (arr[i][j] == 0) { if (c >= range0a && c <= range0b) b[i][j] = 1; else b[i][j] = 0; } } } // Copying changes to // the main matrix for (int k = 0; k < N; k++) for (int m = 0; m < N; m++) arr[k][m] = b[k][m]; }} // Driver codeint main(){ int arr[N][N] = { 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1 }; int range1a = 2, range1b = 2; int range0a = 2, range0b = 3; int K = 3, b[N][N] = { 0 }; // Function call to calculate // the resultant matrix // after 'K' iterations. predictMatrix(arr, range1a, range1b, range0a, range0b, K, b); // Printing Result for (int i = 0; i < N; i++) { cout << endl; for (int j = 0; j < N; j++) cout << b[i][j] << " "; } return 0;} // Java implementation of the approachpublic class GFG{ // Dimension of Arrayfinal static int N = 4 ; static void predictMatrix(int arr[][], int range1a, int range1b, int range0a, int range0b, int K, int b[][]){ // Count of 1s int c = 0; while (K != 0) { K--; for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { c = 0; // Counting all neighbouring 1s if (i > 0 && arr[i - 1][j] == 1) c++; if (j > 0 && arr[i][j - 1] == 1) c++; if (i > 0 && j > 0 && arr[i - 1][j - 1] == 1) c++; if (i < N - 1 && arr[i + 1][j] == 1) c++; if (j < N - 1 && arr[i][j + 1] == 1) c++; if (i < N - 1 && j < N - 1 && arr[i + 1][j + 1] == 1) c++; if (i < N - 1 && j > 0 && arr[i + 1][j - 1] == 1) c++; if (i > 0 && j < N - 1 && arr[i - 1][j + 1] == 1) c++; // Comparing the number of // neighbouring 1s with // given ranges if (arr[i][j] == 1) { if (c >= range1a && c <= range1b) b[i][j] = 1; else b[i][j] = 0; } if (arr[i][j] == 0) { if (c >= range0a && c <= range0b) b[i][j] = 1; else b[i][j] = 0; } } } // Copying changes to // the main matrix for (int k = 0; k < N; k++) for (int m = 0; m < N; m++) arr[k][m] = b[k][m]; } } // Driver codepublic static void main(String 
[]args){ int arr[][] = { {0, 0, 0, 0}, {0, 1, 1, 0}, {0, 0, 1, 0}, {0, 1, 0, 1 } }; int range1a = 2, range1b = 2; int range0a = 2, range0b = 3; int K = 3; int b[][] = new int[N][N] ; // Function call to calculate // the resultant matrix // after 'K' iterations. predictMatrix(arr, range1a, range1b, range0a, range0b, K, b); // Printing Result for (int i = 0; i < N; i++) { System.out.println(); for (int j = 0; j < N; j++) System.out.print(b[i][j]+ " "); } }// This Code is contributed by Ryuga} # Python3 implementation of the approach # Dimension of ArrayN = 4 def predictMatrix(arr, range1a, range1b, range0a, range0b, K, b): # Count of 1s c = 0 while (K): for i in range(N) : for j in range(N): c = 0 # Counting all neighbouring 1s if (i > 0 and arr[i - 1][j] == 1): c += 1 if (j > 0 and arr[i][j - 1] == 1): c += 1 if (i > 0 and j > 0 and arr[i - 1][j - 1] == 1): c += 1 if (i < N - 1 and arr[i + 1][j] == 1): c += 1 if (j < N - 1 and arr[i][j + 1] == 1): c += 1 if (i < N - 1 and j < N - 1 and arr[i + 1][j + 1] == 1): c += 1 if (i < N - 1 and j > 0 and arr[i + 1][j - 1] == 1): c += 1 if (i > 0 and j < N - 1 and arr[i - 1][j + 1] == 1): c += 1 # Comparing the number of neighbouring # 1s with given ranges if (arr[i][j] == 1) : if (c >= range1a and c <= range1b): b[i][j] = 1 else: b[i][j] = 0 if (arr[i][j] == 0): if (c >= range0a and c <= range0b): b[i][j] = 1 else: b[i][j] = 0 K -= 1 # Copying changes to the main matrix for k in range(N): for m in range( N): arr[k][m] = b[k][m] # Driver codeif __name__ == "__main__": arr = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 1, 0, 1]] range1a = 2 range1b = 2 range0a = 2 range0b = 3 K = 3 b = [[0 for x in range(N)] for y in range(N)] # Function call to calculate # the resultant matrix # after 'K' iterations. 
predictMatrix(arr, range1a, range1b, range0a, range0b, K, b) # Printing Result for i in range( N): print() for j in range(N): print(b[i][j], end = " ") # This code is contributed# by ChitraNayal // C# implementation of the approachusing System; class GFG{ // Dimension of Arrayreadonly static int N = 4 ; static void predictMatrix(int [,]arr, int range1a, int range1b, int range0a, int range0b, int K, int [,]b){ // Count of 1s int c = 0; while (K != 0) { K--; for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { c = 0; // Counting all neighbouring 1s if (i > 0 && arr[i - 1, j] == 1) c++; if (j > 0 && arr[i, j - 1] == 1) c++; if (i > 0 && j > 0 && arr[i - 1, j - 1] == 1) c++; if (i < N - 1 && arr[i + 1, j] == 1) c++; if (j < N - 1 && arr[i, j + 1] == 1) c++; if (i < N - 1 && j < N - 1 && arr[i + 1, j + 1] == 1) c++; if (i < N - 1 && j > 0 && arr[i + 1, j - 1] == 1) c++; if (i > 0 && j < N - 1 && arr[i - 1, j + 1] == 1) c++; // Comparing the number of // neighbouring 1s with // given ranges if (arr[i,j] == 1) { if (c >= range1a && c <= range1b) b[i, j] = 1; else b[i, j] = 0; } if (arr[i,j] == 0) { if (c >= range0a && c <= range0b) b[i, j] = 1; else b[i, j] = 0; } } } // Copying changes to the main matrix for (int k = 0; k < N; k++) for (int m = 0; m < N; m++) arr[k, m] = b[k, m]; }} // Driver codepublic static void Main(){ int [,]arr = { {0, 0, 0, 0}, {0, 1, 1, 0}, {0, 0, 1, 0}, {0, 1, 0, 1 } }; int range1a = 2, range1b = 2; int range0a = 2, range0b = 3; int K = 3; int [,]b = new int[N, N]; // Function call to calculate // the resultant matrix // after 'K' iterations. 
predictMatrix(arr, range1a, range1b, range0a, range0b, K, b); // Printing Result for (int i = 0; i < N; i++) { Console.WriteLine(); for (int j = 0; j < N; j++) Console.Write(b[i, j] + " "); }}} // This code is contributed by 29AjayKumar <?php// PHP implementation of the approach // Dimension of Array#define N 4 function predictMatrix($arr, $range1a, $range1b, $range0a, $range0b, $K, $b){$N = 4; // Count of 1s $c = 0; while ($K--) { for ($i = 0; $i < $N; $i++) { for ($j = 0; $j < $N; $j++) { $c = 0; // Counting all neighbouring 1s if ($i > 0 && $arr[$i - 1][$j] == 1) $c++; if ($j > 0 && $arr[$i][$j - 1] == 1) $c++; if ($i > 0 && $j > 0 && $arr[$i - 1][$j - 1] == 1) $c++; if ($i < $N - 1 && $arr[$i + 1][$j] == 1) $c++; if ($j < $N - 1 && $arr[$i][$j + 1] == 1) $c++; if ($i < $N - 1 && $j < $N - 1 && $arr[$i + 1][$j + 1] == 1) $c++; if ($i < $N - 1 && $j > 0 && $arr[$i + 1][$j - 1] == 1) $c++; if ($i > 0 && $j < $N - 1 && $arr[$i - 1][$j + 1] == 1) $c++; // Comparing the number of // neighbouring 1s with // given ranges if ($arr[$i][$j] == 1) { if ($c >= $range1a && $c <= $range1b) $b[$i][$j] = 1; else $b[$i][$j] = 0; } if ($arr[$i][$j] == 0) { if ($c >= $range0a && $c <= $range0b) $b[$i][$j] = 1; else $b[$i][$j] = 0; } } } // Copying changes to // the main matrix for ($k = 0; $k < $N; $k++) for ($m = 0; $m < $N; $m++) $arr[$k][$m] = $b[$k][$m]; } return $b;} // Driver code$N = 4;$arr= array(array(0, 0, 0, 0), array(0, 1, 1, 0), array(0, 0, 1, 0), array(0, 1, 0, 1));$range1a = 2; $range1b = 2;$range0a = 2; $range0b = 3;$K = 3; $b = array(array(0)); // Function call to calculate// the resultant matrix// after 'K' iterations.$b1 = predictMatrix($arr, $range1a, $range1b, $range0a, $range0b, $K, $b); // Printing Resultfor ($i = 0; $i < $N; $i++){ echo "\n"; for ($j = 0; $j < $N; $j++) echo $b1[$i][$j] . 
" ";} // This code is contributed by Akanksha Rai <script>// Javascript implementation of the approach // Dimension of Arraylet N = 4 ; function predictMatrix(arr,range1a,range1b,range0a,range0b,K,b){ // Count of 1s let c = 0; while (K != 0) { K--; for (let i = 0; i < N; i++) { for (let j = 0; j < N; j++) { c = 0; // Counting all neighbouring 1s if (i > 0 && arr[i - 1][j] == 1) c++; if (j > 0 && arr[i][j - 1] == 1) c++; if (i > 0 && j > 0 && arr[i - 1][j - 1] == 1) c++; if (i < N - 1 && arr[i + 1][j] == 1) c++; if (j < N - 1 && arr[i][j + 1] == 1) c++; if (i < N - 1 && j < N - 1 && arr[i + 1][j + 1] == 1) c++; if (i < N - 1 && j > 0 && arr[i + 1][j - 1] == 1) c++; if (i > 0 && j < N - 1 && arr[i - 1][j + 1] == 1) c++; // Comparing the number of // neighbouring 1s with // given ranges if (arr[i][j] == 1) { if (c >= range1a && c <= range1b) b[i][j] = 1; else b[i][j] = 0; } if (arr[i][j] == 0) { if (c >= range0a && c <= range0b) b[i][j] = 1; else b[i][j] = 0; } } } // Copying changes to // the main matrix for (let k = 0; k < N; k++) for (let m = 0; m < N; m++) arr[k][m] = b[k][m]; }} // Driver code let arr = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 1, 0, 1]]; let range1a = 2, range1b = 2; let range0a = 2, range0b = 3; let K = 3; let b = new Array(N) ; for(let i=0;i<N;i++) { b[i]=new Array(N); for(let j=0;j<N;j++) { b[i][j]=0; } } // Function call to calculate // the resultant matrix // after 'K' iterations. predictMatrix(arr, range1a, range1b, range0a, range0b, K, b); // Printing Result for (let i = 0; i < N; i++) { document.write("<br>"); for (let j = 0; j < N; j++) document.write(b[i][j]+ " "); } // This code is contributed by avanitrachhadiya2155</script> 0 1 0 0 0 1 0 0 1 1 1 0 0 0 1 0 Time Complexity: O(K*N2) Auxiliary Space: O(N2) ankthon 29AjayKumar ukasp Akanksha_Rai avanitrachhadiya2155 subhamkumarm348 Arrays Technical Scripter 2018 Competitive Programming Mathematical Matrix Technical Scripter Arrays Mathematical Matrix Writing code in comment? 
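The eight explicit neighbour checks in the implementations above can be written more compactly with a table of offsets. Below is a minimal Python sketch of the same simulation — a re-expression of the approach, not code from the article — run on the same input and ranges as the driver code:

```python
def predict_matrix(grid, range1, range0, k):
    """Run k synchronous update steps of the rule described above.

    range1/range0 are (low, high) neighbour-count bounds for cells
    that currently hold 1/0 respectively.
    """
    n = len(grid)
    # The eight neighbour checks, expressed as (row, col) offsets
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(k):
        nxt = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Count 1-neighbours that fall inside the matrix bounds
                c = sum(grid[i + di][j + dj]
                        for di, dj in offsets
                        if 0 <= i + di < n and 0 <= j + dj < n)
                low, high = range1 if grid[i][j] == 1 else range0
                nxt[i][j] = 1 if low <= c <= high else 0
        grid = nxt  # copy changes back for the next iteration
    return grid

result = predict_matrix([[0, 0, 0, 0],
                         [0, 1, 1, 0],
                         [0, 0, 1, 0],
                         [0, 1, 0, 1]],
                        range1=(2, 2), range0=(2, 3), k=3)
for row in result:
    print(*row)  # 0 1 0 0 / 0 1 0 0 / 1 1 1 0 / 0 0 1 0
```

The output matches the result printed by the article's implementations above.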
Lodash | _.find() Method
21 May, 2022 The _.find() method accesses each value of the collection and returns the first element that passes a truth test for the predicate, or undefined if no value passes the test. The function returns as soon as it finds a match, so it effectively searches for elements according to the predicate.

Syntax:

_.find(collection, predicate, fromIndex)

Parameters: This method accepts three parameters, as mentioned above and described below:

collection: This parameter holds the array or object collection that needs to be inspected.
predicate: This parameter holds the function invoked per iteration.
fromIndex: This parameter holds the index from which to start searching (optional). If you don't pass this parameter, the search starts from the beginning.

Return Value: It returns the matched element, or undefined if nothing matches.

Example 1: In this example, we will try to find the first number whose square is more than 100.

javascript
const _ = require('lodash');

let x = [2, 5, 7, 10, 13, 15];

let result = _.find(x, function(n) {
    if (n * n > 100) {
        return true;
    }
});

console.log(result);

Here, const _ = require('lodash') is used to import the lodash library into the file.

Output:

13

Example 2: In this example, we will find the first number in the list which is greater than 10, but start searching from index 2.

javascript
const _ = require('lodash');

let x = [-1, 29, 7, 10, 13, 15];

let result = _.find(x, function(n) {
    if (n > 10) {
        return true;
    }
}, 2);

console.log(result);

Output:

13

Example 3: In this example, we will search for the first student (object) in the list who has a score higher than 90.

javascript
const _ = require('lodash');

let x = [
    {'name': 'Akhil', marks: '78'},
    {'name': 'Akhil', marks: '98'},
    {'name': 'Akhil', marks: '97'}];

let result = _.find(x, function(obj) {
    if (obj.marks > 90) {
        return true;
    }
});

console.log(result);

Output:

{ name: 'Akhil', marks: '98' }

Example 4: When no element returns true on the predicate.
javascript
const _ = require('lodash');

let x = [1, 2, 7, 10, 13, 15];

let result = _.find(x, function(n) {
    if (n < 0) {
        return true;
    }
});

console.log(result);

Output:

undefined

Note: This will not work in plain JavaScript because it requires the lodash library to be installed.

Reference: https://lodash.com/docs/4.17.15#find
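For comparison (not part of the original article), plain JavaScript's built-in Array.prototype.find behaves much like _.find without the fromIndex parameter, and a fromIndex can be approximated with slice:

```javascript
// Same data as Example 2 above
const nums = [-1, 29, 7, 10, 13, 15];

// Native equivalent of _.find(nums, n => n > 10)
const first = nums.find(n => n > 10);  // 29

// Approximating fromIndex = 2 by searching a slice of the array
const fromIndex = 2;
const later = nums.slice(fromIndex).find(n => n > 10);  // 13

console.log(first, later);
```

Note that slicing allocates a copy of part of the array, whereas lodash's fromIndex simply skips ahead in the original collection.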
How to play/pause video using jQuery ?
03 Aug, 2021 Method 1: Using trigger() method: The trigger() method is used to execute a specified event and the default behavior of the event. The event to be executed is passed as a parameter to this method.The ‘play’ event is used to play any media element and similarly the ‘pause’ event is used to pause any media element. Using these events with the trigger() method will play or pause the video as required. Syntax: // Play the video$('#sample_video').trigger('play'); // Pause the video$('#sample_video').trigger('pause'); Example: <!DOCTYPE html><html> <head> <title> How to play/pause video using JQuery? </title> <script src= "https://code.jquery.com/jquery-3.4.1.min.js"> </script></head> <body> <h1 style="color: green"> GeeksforGeeks </h1> <b> Play/pause HTML 5 video using JQuery? </b> <p> Click on the buttons to play or pause the video. </p> <button onclick="playVideo()"> Play Video </button> <button onclick="pauseVideo()"> Pause Video </button> <br> <video id="sample_video" width="360" height="240"> <source src="https://media.geeksforgeeks.org/wp-content/uploads/20200107020629/sample_video.mp4" type="video/mp4"> </video> <script> function playVideo() { $('#sample_video').trigger('play'); } function pauseVideo() { $('#sample_video').trigger('pause'); } </script></body> </html> Output: Method 2: Using the play() and pause() method: The play() method in JavaScript is used to attempt the playback of a media file. In jQuery, the video file is first selected using a selector and the actual element is selected using the get() method. Then the play() method is used on this element to attempt to start the video. The pause() method in JavaScript is used to pause the playback of a media file. In jQuery, the video file is first selected using a selector and the actual element is selected using the get() method. The pause() method is then used on this element to pause the video. 
Syntax: // Play the video$('#sample_video').get(0).play(); // Pause the video$('#sample_video').get(0).pause(); Example: <!DOCTYPE html><html> <head> <title> How to play/pause video using JQuery? </title> <script src= "https://code.jquery.com/jquery-3.4.1.min.js"> </script></head> <body> <h1 style="color: green"> GeeksforGeeks </h1> <b> Play/pause HTML 5 video using JQuery? </b> <p> Click on the buttons to play or pause the video. </p> <button onclick="playVideo()"> Play Video </button> <button onclick="pauseVideo()"> Pause Video </button> <br> <video id="sample_video" width="360" height="240"> <source src="https://media.geeksforgeeks.org/wp-content/uploads/20200107020629/sample_video.mp4" type="video/mp4"> </video> <script> function playVideo() { $('#sample_video').get(0).play(); } function pauseVideo() { $('#sample_video').get(0).pause(); } </script></body> </html> Output: jQuery is an open source JavaScript library that simplifies the interactions between an HTML/CSS document, It is widely famous with it’s philosophy of “Write less, do more”.You can learn jQuery from the ground up by following this jQuery Tutorial and jQuery Examples. jQuery-Misc Picked JQuery Web Technologies Web technologies Questions Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Form validation using jQuery jQuery | children() with Examples Scroll to the top of the page using JavaScript/jQuery How to Dynamically Add/Remove Table Rows using jQuery ? How to get the value in an input text box using jQuery ? Installation of Node.js on Linux Top 10 Projects For Beginners To Practice HTML and CSS Skills Difference between var, let and const keywords in JavaScript How to insert spaces/tabs in text using HTML/CSS? How to fetch data from an API in ReactJS ?
Python – Type conversion in Nested and Mixed List
08 Jun, 2020 While working with Python lists, due to its heterogenous nature, we can have a problem in which we need to convert data type of each nested element of list to particular type. In mixed list, this becomes complex. Let’s discuss certain way in which this task can be performed. Input : test_list = [(‘7’, [‘8’, (‘5’, )])]Output : [(7, [8, (5, )])] Input : test_list = [‘6’]Output : [6] Method : Using recursion + isinstance()The combination of above functions can be used to solve this problem. In this, we use isinstance() to get the data type of element of list, and if its container, the inner elements are recursed to perform conversion. # Python3 code to demonstrate working of # Type conversion in Nested and Mixed List# Using recursion + isinstance() # helper_fncdef change_type(sub): if isinstance(sub, list): return [change_type(ele) for ele in sub] elif isinstance(sub, tuple): return tuple(change_type(ele) for ele in sub) else: return int(sub) # initializing listtest_list = ['6', '89', ('7', ['8', '10']), ['11', '15']] # printing original listprint("The original list is : " + str(test_list)) # Type conversion in Nested and Mixed List# Using recursion + isinstance()res = change_type(test_list) # printing result print("Data after type conversion : " + str(res)) The original list is : ['6', '89', ('7', ['8', '10']), ['11', '15']] Data after type conversion : [6, 89, (7, [8, 10]), [11, 15]] Python list-programs Python Python Programs Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Different ways to create Pandas Dataframe Enumerate() in Python Read a file line by line in Python Python String | replace() How to Install PIP on Windows ? Python program to convert a list to string Defaultdict in Python Python | Get dictionary keys as a list Python | Convert a list to dictionary Python | Convert string dictionary to dictionary
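The converter above assumes every leaf is a valid integer string and raises ValueError otherwise. A hedged variant — an extension, not part of the original article — that also accepts decimal strings and leaves non-numeric leaves unchanged:

```python
def change_type_safe(sub):
    # Recurse into containers exactly as before
    if isinstance(sub, list):
        return [change_type_safe(ele) for ele in sub]
    if isinstance(sub, tuple):
        return tuple(change_type_safe(ele) for ele in sub)
    # Leaves: try int, then float, else keep the original value
    try:
        return int(sub)
    except (TypeError, ValueError):
        try:
            return float(sub)
        except (TypeError, ValueError):
            return sub

print(change_type_safe(['6', ('7.5', ['8']), 'abc']))
# [6, (7.5, [8]), 'abc']
```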
Java program to print the initials of a name with last name in full
When the full name is provided, the initials of the name are printed with the last name is printed in full. An example of this is given as follows − Full name = Amy Thomas Initials with surname is = A. Thomas A program that demonstrates this is given as follows − Live Demo import java.util.*; public class Example { public static void main(String[] args) { String name = "John Matthew Adams"; System.out.println("The full name is: " + name); System.out.print("Initials with surname is: "); int len = name.length(); name = name.trim(); String str1 = ""; for (int i = 0; i < len; i++) { char ch = name.charAt(i); if (ch != ' ') { str1 = str1 + ch; } else { System.out.print(Character.toUpperCase(str1.charAt(0)) + ". "); str1 = ""; } } String str2 = ""; for (int j = 0; j < str1.length(); j++) { if (j == 0) str2 = str2 + Character.toUpperCase(str1.charAt(0)); else str2 = str2 + Character.toLowerCase(str1.charAt(j)); } System.out.println(str2); } } The full name is: John Matthew Adams Initials with surname is: J. M. Adams Now let us understand the above program. The name is printed. Then the first letter of the name is printed i.e. the initials. The code snippet that demonstrates this is given as follows − String name = "John Matthew Adams"; System.out.println("The full name is: " + name); System.out.print("Initials with surname is: "); int len = name.length(); name = name.trim(); String str1 = ""; for (int i = 0; i < len; i++) { char ch = name.charAt(i); if (ch != ' ') { str1 = str1 + ch; } else { System.out.print(Character.toUpperCase(str1.charAt(0)) + ". "); str1 = ""; } } Then, the entire surname of the name is printed. The code snippet that demonstrates this is given as follows − String str2 = ""; for (int j = 0; j < str1.length(); j++) { if (j == 0) str2 = str2 + Character.toUpperCase(str1.charAt(0)); else str2 = str2 + Character.toLowerCase(str1.charAt(j)); } System.out.println(str2);
float() in Python
The float() method is part of the Python standard library; it converts a number or a string containing a number to the float data type. A string is considered valid for conversion to a float under the following rules:

The string must contain only a single numeric value.
A leading + or - sign is allowed; arithmetic expressions between numbers (such as "2+3") are not evaluated.
The string can represent NaN or inf (case-insensitively).
The white spaces at the beginning and end are always ignored.

The below program shows the different values returned when the float function is applied.

n = 89
print(type(n))
f = float(n)
print(type(f))
print("input", 7, " with float function becomes ", float(7))
print("input", -21.6, " with float function becomes ", float(-21.6))
print("input NaN, with float function becomes ", float("NaN"))
print("input InF, with float function becomes ", float("InF"))

Running the above code gives us the following result −

<class 'int'>
<class 'float'>
input 7 with float function becomes 7.0
input -21.6 with float function becomes -21.6
input NaN, with float function becomes nan
input InF, with float function becomes inf

Passing a string without any numeric value in it raises an error.

print("input Tutorials, with float function becomes ", float("Tutorials"))

Running the above code gives us the following result −

Traceback (most recent call last):
  File "C:/xxx.py", line 18, in <module>
    print("input Tutorials, with float function becomes ", float("Tutorials"))
ValueError: could not convert string to float: 'Tutorials'
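A few further examples of the rules, using only built-in behavior: surrounding whitespace is ignored and a leading sign is accepted, but an arithmetic expression inside the string is not evaluated:

```python
print(float("  4.5  "))  # surrounding whitespace is stripped -> 4.5
print(float("-21.6"))    # a leading sign is accepted -> -21.6
print(float("1e3"))      # exponent notation also works -> 1000.0

# float() does NOT evaluate arithmetic inside the string:
try:
    float("2+3")
except ValueError as err:
    print("ValueError:", err)
```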
FloatBuffer array() method in Java With Examples
06 Dec, 2018 The array() method of java.nio.FloatBuffer Class is used to Return the float array that backs this buffer. Modifications to this buffer’s content will cause the returned array’s content to be modified, and vice versa. Invoke() the hasArray() method are used before invoking this method in order to ensure that this buffer has an accessible backing array Syntax : public final float[] array() Return Value: This method returns the array that backs this buffer. Throws: This method throws the ReadOnlyBufferException(If this buffer is backed by an array but is read-only) Below program illustrates the array() method: Examples 1: // Java program to demonstrate// array() method import java.nio.*;import java.util.*; public class GFG { public static void main(String[] args) { // Declaring the capacity of the FloatBuffer int capacity = 10; // Creating the FloatBuffer try { // creating object of floatbuffer // and allocating size capacity FloatBuffer fb = FloatBuffer.allocate(capacity); // putting the value in floatbuffer fb.put(8.56F); fb.put(2, 9.61F); fb.rewind(); // getting array from fb FloatBuffer using array() method float[] fbb = fb.array(); // printing the FloatBuffer fb System.out.println("FloatBuffer: " + Arrays.toString(fbb)); } catch (IllegalArgumentException e) { System.out.println("IllegalArgumentException catched"); } catch (ReadOnlyBufferException e) { System.out.println("ReadOnlyBufferException catched"); } }} FloatBuffer: [8.56, 0.0, 9.61, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] Examples 2: // Java program to demonstrate// array() method import java.nio.*;import java.util.*; public class GFG { public static void main(String[] args) { // Declaring the capacity of the fb int capacity1 = 10; // Declaring the capacity of the fb1 int capacity2 = 5; // Creating the FloatBuffer try { // // fb // // creating object of floatbuffer fb // and allocating size capacity FloatBuffer fb = FloatBuffer.allocate(capacity1); // putting the value in fb fb.put(9.56F); 
fb.put(2, 7.61F); fb.put(3, 4.61F); fb.rewind(); // print the FloatBuffer System.out.println("FloatBuffer fb: " + Arrays.toString(fb.array())); // // fb1 // // creating object of floatbuffer fb1 // and allocating size capacity FloatBuffer fb1 = FloatBuffer.allocate(capacity2); // putting the value in fb1 fb1.put(1, 4.56F); fb1.put(2, 6.45F); fb1.rewind(); // print the FloatBuffer System.out.println("\nFloatBuffer fb1: " + Arrays.toString(fb1.array())); // Creating a read-only copy of FloatBuffer // using asReadOnlyBuffer() method FloatBuffer readOnlyFb = fb.asReadOnlyBuffer(); // print the FloatBuffer System.out.print("\nReadOnlyBuffer FloatBuffer: "); while (readOnlyFb.hasRemaining()) System.out.print(readOnlyFb.get() + ", "); // try to change readOnlyFb System.out.println("\n\nTrying to get the array" + " from ReadOnlyFb for editing"); float[] fbarr = readOnlyFb.array(); } catch (IllegalArgumentException e) { System.out.println("IllegalArgumentException catched"); } catch (ReadOnlyBufferException e) { System.out.println("Exception thrown: " + e); } }} FloatBuffer fb: [9.56, 0.0, 7.61, 4.61, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] FloatBuffer fb1: [0.0, 4.56, 6.45, 0.0, 0.0] ReadOnlyBuffer FloatBuffer: 9.56, 0.0, 7.61, 4.61, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, Trying to get the array from ReadOnlyFb for editing Exception thrown: java.nio.ReadOnlyBufferException Java - util package Java 8 java-basics Java-FloatBuffer Java-Functions Java-NIO package Java Java Programs Java Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Scraping Reddit using Python
07 Oct, 2021 In this article, we are going to see how to scrape Reddit using Python, here we will be using python’s PRAW (Python Reddit API Wrapper) module to scrape the data. Praw is an acronym Python Reddit API wrapper, it allows Reddit API through Python scripts. To install PRAW, run the following commands on the command prompt: pip install praw Step 1: To extract data from Reddit, we need to create a Reddit app. You can create a new Reddit app(https://www.reddit.com/prefs/apps). Reddit – Create an App Step 2: Click on “are you a developer? create an app...”. Step 3: A form like this will show up on your screen. Enter the name and description of your choice. In the redirect uri box, enter http://localhost:8080 App Form Step 4: After entering the details, click on “create app”. Developed Application The Reddit app has been created. Now, we can use python and praw to scrape data from Reddit. Note down the client_id, secret, and user_agent values. These values will be used to connect to Reddit using python. In order to connect to Reddit, we need to create a praw instance. There are 2 types of praw instances: Read-only Instance: Using read-only instances, we can only scrape publicly available information on Reddit. For example, retrieving the top 5 posts from a particular subreddit. Authorized Instance: Using an authorized instance, you can do everything you do with your Reddit account. Actions like upvote, post, comment, etc., can be performed. Python3 # Read-only instancereddit_read_only = praw.Reddit(client_id="", # your client id client_secret="", # your client secret user_agent="") # your user agent # Authorized instancereddit_authorized = praw.Reddit(client_id="", # your client id client_secret="", # your client secret user_agent="", # your user agent username="", # your reddit username password="") # your reddit password Now that we have created an instance, we can use Reddit’s API to extract data. 
In this tutorial, we will be only using the read-only instance. There are different ways of extracting data from a subreddit. The posts in a subreddit are sorted as hot, new, top, controversial, etc. You can use any sorting method of your choice. Let’s extract some information from the redditdev subreddit. Python3 import prawimport pandas as pd reddit_read_only = praw.Reddit(client_id="", # your client id client_secret="", # your client secret user_agent="") # your user agent subreddit = reddit_read_only.subreddit("redditdev") # Display the name of the Subredditprint("Display Name:", subreddit.display_name) # Display the title of the Subredditprint("Title:", subreddit.title) # Display the description of the Subredditprint("Description:", subreddit.description) Output: Name, Title, and Description Now let’s extract 5 hot posts from the Python subreddit: Python3 subreddit = reddit_read_only.subreddit("Python") for post in subreddit.hot(limit=5): print(post.title) print() Output: Top 5 hot posts We will now save the top posts of the python subreddit in a pandas data frame: Python3 posts = subreddit.top("month")# Scraping the top posts of the current month posts_dict = {"Title": [], "Post Text": [], "ID": [], "Score": [], "Total Comments": [], "Post URL": [] } for post in posts: # Title of each post posts_dict["Title"].append(post.title) # Text inside a post posts_dict["Post Text"].append(post.selftext) # Unique ID of each post posts_dict["ID"].append(post.id) # The score of a post posts_dict["Score"].append(post.score) # Total number of comments inside the post posts_dict["Total Comments"].append(post.num_comments) # URL of each post posts_dict["Post URL"].append(post.url) # Saving the data in a pandas dataframetop_posts = pd.DataFrame(posts_dict)top_posts Output: top posts of the python subreddit Python3 import pandas as pd top_posts.to_csv("Top Posts.csv", index=True) Output: CSV File of Top Posts To extract data from Reddit posts, we need the URL of the post. 
Once we have the URL, we need to create a submission object. Python3 import prawimport pandas as pd reddit_read_only = praw.Reddit(client_id="", # your client id client_secret="", # your client secret user_agent="") # your user agent # URL of the posturl = "https://www.reddit.com/r/IAmA/comments/m8n4vt/\im_bill_gates_cochair_of_the_bill_and_melinda/" # Creating a submission objectsubmission = reddit_read_only.submission(url=url) We will extract the best comments from the post we have selected. We will need the MoreComments object from the praw module. To extract the comments, we will use a for-loop on the submission object. All the comments will be added to the post_comments list. We will also add an if-statement in the for-loop to check whether any comment has the object type of more comments. If it does, it means that our post has more comments available. So we will add these comments to our list as well. Finally, we will convert the list into a pandas data frame. Python3 from praw.models import MoreComments post_comments = [] for comment in submission.comments: if type(comment) == MoreComments: continue post_comments.append(comment.body) # creating a dataframecomments_df = pd.DataFrame(post_comments, columns=['comment'])comments_df Output: list into a pandas dataframe sweetyty Blogathon-2021 Picked python-utility Web-scraping Blogathon Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
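Real comment listings frequently include placeholder bodies where a comment was deleted by its author or removed by moderators. A small, self-contained sketch of filtering these out before analysis (hypothetical sample data; no praw, credentials, or network access involved):

```python
# Hypothetical comment bodies, shaped like what the loop above collects
post_comments = ["Great AMA!", "[deleted]",
                 "Thanks for doing this", "[removed]"]

def clean_comments(comments):
    """Drop Reddit's placeholder bodies for deleted/removed comments."""
    return [c for c in comments if c not in ("[deleted]", "[removed]")]

print(clean_comments(post_comments))
# ['Great AMA!', 'Thanks for doing this']
```

The cleaned list can then be passed to pd.DataFrame exactly as in the article's code.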
Case Function in Tableau
22 Oct, 2020 In this article, we will learn about the CASE function, its syntax, and its uses in Tableau.

Tableau: Tableau is a very powerful data visualization tool that can be used by data analysts, scientists, statisticians, etc. to visualize data and form a clear opinion based on data analysis. Tableau is very popular because it can take in data and produce the required data visualization output in a very short time.

Case Function: The CASE function is part of the logical functions in Tableau. These functions are used to perform a logical test and return the required value when the test expression is true. CASE evaluates the given expression against a sequence of values; when a value matches, it returns the corresponding specified result. If no match is found, the default return expression (ELSE) is used, and if no default value is specified, NULL is returned. In short, this function finds the first value that matches the given <expression> and returns the corresponding result.

CASE [<expression>]
WHEN <value1> THEN <return1>
WHEN <value2> THEN <return2>
ELSE <default return>
END

In the original article, a sample dataset is used to create a new calculated field with the CASE function, which is then viewed and used in a visualization (the accompanying screenshots are not reproduced here).

Advantages:
CASE-WHEN statements are easy to write down and comprehend. Due to this simplicity, they help a user avoid mistakes like referencing the incorrect field.
CASE-WHEN statements perform faster than IF-ELSE statements.

Limitations:
Usage of CASE-WHEN in Tableau is limited, as it cannot perform Boolean logic conditions: CASE-WHEN in Tableau only compares the expression to exact values.
Conditional operators like OR and AND can't be used with CASE-WHEN.
Using CASE-WHEN, multiple expressions can't be evaluated in a single line.
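As a concrete illustration of the syntax (the original article demonstrates this with screenshots that are not reproduced here, so the field name and values below are hypothetical), a calculated field using CASE might look like:

```
CASE [Ship Mode]
WHEN "First Class" THEN "High priority"
WHEN "Second Class" THEN "Medium priority"
ELSE "Standard priority"
END
```

Dragging such a calculated field onto a shelf then groups the marks by the returned labels.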
Tableau Tableau Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Word Cloud in Tableau IF Function in Tableau Funnel Chart in Tableau Box Plot in Tableau Tableau - Data Terminology Tableau Installation Guide Combined Set in Tableau Start Page in Tableau Image Object on Dashboard in Tableau Quick Table Calculation in Tableau
Returns, Jumps and Labels in Kotlin
10 Nov, 2021 Kotlin is a statically typed, general-purpose programming language developed by JetBrains, the company behind world-class IDEs like IntelliJ IDEA, PhpStorm, AppCode, etc. It was first introduced by JetBrains in 2011 as a new language for the JVM. Kotlin is an object-oriented language, often described as a "better language" than Java, while still being fully interoperable with Java code. As the Kotlin documentation says, Kotlin has three structural jump expressions:

return
break
continue

These can be part of larger expressions. So, let's start with "return". It is a statement generally used in a function declaration to return a value after the function executes. By default it returns from the nearest enclosing function or anonymous function. Let's take an example.

Example:

Kotlin
fun add(a: Int, b: Int): Int {
    // ans holds the final value after execution
    val ans = a + b
    // here we return it
    return ans
}

fun main(args: Array<String>) {
    val first: Int = 10
    val second: Int = 20
    // call the function and
    // collect the returned value in sum
    val sum = add(first, second)
    println("The sum is: $sum")
}

Another use of return:

Kotlin
fun GfG() {
    listOf(1, 2, 3, 4, 5).forEach {
        // non-local return directly
        // to the caller of GfG()
        if (it == 3) return
        print(it)
    }
    println("this point is unreachable")
}

So, that's how the return statement works.

2.1. break
A break statement is used to terminate the flow of a loop, but it only terminates the nearest enclosing loop, i.e. if you have two nested for-loops and the break statement is placed in the inner for-loop, then only the inner for-loop is terminated; another break is needed for the outer for-loop to be terminated as well.
Example:

Kotlin

fun main(args: Array<String>) {
    for (ch in 'A'..'C') {      // Outer loop
        for (n in 1..4) {       // Inner loop
            println("processing...")
            if (n == 2)
                // it will terminate the inner loop
                break
        }
        if (ch == 'B')
            // but this one terminates the outer loop
            break
    }
}

We can also optimize the above code, or reduce the lines of code, using labels.

2.2. labels

Any expression in Kotlin may be marked with a label. Labels have the form of an identifier followed by the @ sign, such as name@ or xyz@. To label an expression, just add a label in front of it.

Example:

Kotlin

fun main(args: Array<String>) {
    Outerloop@ for (ch in 'A'..'C') {  // Outer loop
        for (n in 1..4) {              // Inner loop
            println("processing...")
            if (n == 2)
                // it will terminate Outerloop directly
                break@Outerloop
        }
        // here we don't need it
        /* if (ch == 'B')
               break */
    }
}

2.3. continue

It is similar to the break statement, but with one difference: the break statement terminates the whole iteration of a loop, whereas continue skips only the current iteration. We can use labels here as well.

Example:

Kotlin

fun main(args: Array<String>) {
    // outerloop is the label name
    outerloop@ for (i in 1..5) {
        for (j in 1..4) {
            if (i == 3 || j == 2)
                // here we have used the label
                continue@outerloop
            println("Happy Diwali!")
        }
    }
}

That's all about return, jump, and labels.
[ { "code": null, "e": 54, "s": 26, "text": "\n10 Nov, 2021" }, { "code": null, "e": 497, "s": 54, "text": "Kotlin is a statically typed, general-purpose programming language developed by JetBrains, that has built world-class IDEs like IntelliJ IDEA, PhpStorm, Appcode, etc. It was ...
Puzzle 12 | (Maximize probability of White Ball)
22 Jul, 2021

There are two empty bowls in a room. You have 50 white balls and 50 black balls. After you place the balls in the bowls, a random ball will be picked from a random bowl. Distribute the balls (all of them) into the bowls to maximize the chance of picking a white ball.

Explanation: First, let us assume that we divide the balls between the jars equally, so each jar contains 50 balls. The probability of selecting a white ball is then:

probability of selecting the first jar * probability of a white ball in the first jar + probability of selecting the second jar * probability of a white ball in the second jar
= (1/2)*(25/50) + (1/2)*(25/50) = 0.5

Since we have to maximize the probability, we make the probability of drawing a white ball from one jar equal to 1 by placing a single white ball in it, and put the remaining 49 white balls together with all 50 black balls in the other jar. The probability is now:

(1/2)*(49/99) + (1/2)*(1/1) = 0.747

Therefore, the probability of getting a white ball becomes 1/2*1 + 1/2*49/99, which is approximately 3/4.
[ { "code": null, "e": 54, "s": 26, "text": "\n22 Jul, 2021" }, { "code": null, "e": 323, "s": 54, "text": "There are two empty bowls in a room. You have 50 white balls and 50 black balls. After you place the balls in the bowls, a random ball will be picked from a random bowl. Dist...
set::size() in C++ STL
22 Jan, 2018

Sets are containers that store unique elements following a specific order. Internally, the elements in a set are always sorted. Sets are typically implemented as binary search trees.

set::size()
The size() function is used to return the size of the set container, i.e. the number of elements in the set container.

Syntax:

set_name.size()

Return Value: It returns the number of elements in the set container.

Examples:

Input  : set1{'a', 'b', 'c', 'd'};
         set1.size();
Output : 4

Input  : set2{};
         set2.size();
Output : 0

Errors and Exceptions
1. It has a no-exception throw guarantee.
2. It shows an error when a parameter is passed.

// C++ program to illustrate
// size() function on set
#include <bits/stdc++.h>
using namespace std;

int main()
{
    // Take any two sets
    set<char> set1, set2;

    for (int i = 0; i < 4; i++) {
        set1.insert('a' + i);
    }

    // Printing the size of sets
    cout << "set1 size: " << set1.size();
    cout << endl;
    cout << "set2 size: " << set2.size();
    return 0;
}

Output:

set1 size: 4
set2 size: 0

Time complexity: Constant
[ { "code": null, "e": 53, "s": 25, "text": "\n22 Jan, 2018" }, { "code": null, "e": 236, "s": 53, "text": "Sets are containers that store unique elements following a specific order. Internally, the elements in a set are always sorted. Sets are typically implemented as binary searc...
Rule Of Three in C++
16 Jul, 2021

This rule basically states that if a class defines one (or more) of the following, it should explicitly define all three:

destructor
copy constructor
copy assignment operator

Now let us try to understand why. The default copy constructors and assignment operators do a shallow copy, and we create our own constructor and assignment operators when we need to perform a deep copy (for example, when a class contains pointers pointing to dynamically allocated resources).

First, what does a destructor do? It contains code that runs whenever an object is destroyed. Only affecting the contents of the object would be useless: an object in the process of being destroyed cannot have any changes made to it. Therefore, the destructor affects the program's state as a whole.

Now, suppose our class does not have a copy constructor. Copying an object will copy all of its data members to the target object. In this case, when the objects are destroyed, the destructor runs twice, and it sees the same information for each object being destroyed. In the absence of an appropriately defined copy constructor, the destructor is executed twice when it should only execute once. This duplicate execution is a source of trouble. A coding example follows:

C++

// In the below C++ code, we have created
// a destructor, but no copy constructor
// and no copy assignment operator.
#include <algorithm>

class Array
{
private:
    int size;
    int* vals;

public:
    ~Array();
    Array( int s, int* v );
};

Array::~Array()
{
    delete[] vals;   // vals was allocated with new[], so use delete[]
    vals = NULL;
}

Array::Array( int s, int* v )
{
    size = s;
    vals = new int[ size ];
    std::copy( v, v + size, vals );
}

int main()
{
    int vals[ 4 ] = { 11, 22, 33, 44 };
    Array a1( 4, vals );

    // This line causes problems.
    Array a2( a1 );

    return 0;
}

In the example above, once the program goes out of scope, the class destructor is called not once but twice: first for a2 and then for a1, since objects are destroyed in reverse order of construction. The default copy constructor makes a copy of the pointer vals and does not allocate memory for it.
Thus, when the first of the two objects is destroyed, its destructor frees vals. When the destructor then runs for the remaining object, it tries to free the same memory again, which causes the program to crash, as vals has already been deallocated.

This is similar in the case of the copy assignment operator. If a class does not have an explicitly defined assignment operator, an implicit assignment of all the source's data members to the target's corresponding data members will occur. All in all, it creates a shallow copy, which again is the same problem described previously.

References:
https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming)
http://www.drdobbs.com/c-made-easier-the-rule-of-three/184401400

This article is contributed by Nihar Ranjan Sarkar.
[ { "code": null, "e": 54, "s": 26, "text": "\n16 Jul, 2021" }, { "code": null, "e": 187, "s": 54, "text": "This rule basically states that if a class defines one (or more) of the following, it should explicitly define all three, which are:" }, { "code": null, "e": 198,...
ReactJS Blueprint Slider Component
08 Apr, 2022

BlueprintJS is a React-based UI toolkit for the web. This library is very optimized and popular for building complex, data-dense interfaces for desktop applications. The Slider Component provides a way for users to choose numbers between lower and upper bounds. We can use the following approach in ReactJS to use the ReactJS Blueprint Slider Component.

Slider Props:

className: It is used to denote a space-delimited list of class names to pass along to a child element.
disabled: It is used to indicate whether the slider is non-interactive.
initialValue: It is used to denote the initial value of the slider.
intent: It is used to denote the visual intent color to apply to the element.
labelPrecision: It is used to denote the number of decimal places to use when rendering label value.
labelRenderer: It is a callback function to render a single label.
labelStepSize: It is used for the increment between successive labels.
labelValues: It is used to denote the array of specific values for the label placement.
max: It is used to denote the maximum value of the slider.
min: It is used to denote the minimum value of the slider.
onChange: It is a callback function that is triggered when the value changes.
onRelease: It is a callback function that is triggered when the handle is released.
showTrackFill: It is used to indicate whether a solid bar should be rendered on the track between current and initial values, or between handles for RangeSlider.
stepSize: It is used for the increment between successive values.
value: It is used to denote the value of the slider.
vertical: It is used to indicate whether to show the slider in a vertical orientation.

RangeSlider Props:

className: It is used to denote a space-delimited list of class names to pass along to a child element.
disabled: It is used to indicate whether the slider is non-interactive.
intent: It is used to denote the visual intent color to apply to the element.
labelPrecision: It is used to denote the number of decimal places to use when rendering label value.
labelRenderer: It is a callback function to render a single label.
labelStepSize: It is used for the increment between successive labels.
labelValues: It is used to denote the array of specific values for the label placement.
max: It is used to denote the maximum value of the slider.
min: It is used to denote the minimum value of the slider.
onChange: It is a callback function that is triggered when the value changes.
onRelease: It is a callback function that is triggered when the handle is released.
showTrackFill: It is used to indicate whether a solid bar should be rendered on the track between current and initial values, or between handles for RangeSlider.
stepSize: It is used for the increment between successive values.
value: It is used to denote the value of the slider.
vertical: It is used to indicate whether to show the slider in a vertical orientation.

MultiSlider Props:

className: It is used to denote a space-delimited list of class names to pass along to a child element.
defaultTrackIntent: It is used to denote the default intent of a track segment.
disabled: It is used to indicate whether the slider is non-interactive.
intent: It is used to denote the visual intent color to apply to the element.
labelPrecision: It is used to denote the number of decimal places to use when rendering label value.
labelRenderer: It is a callback function to render a single label.
labelStepSize: It is used for the increment between successive labels.
labelValues: It is used to denote the array of specific values for the label placement.
max: It is used to denote the maximum value of the slider.
min: It is used to denote the minimum value of the slider.
onChange: It is a callback function that is triggered when the value changes.
onRelease: It is a callback function that is triggered when the handle is released.
showTrackFill: It is used to indicate whether a solid bar should be rendered on the track between current and initial values, or between handles for RangeSlider.
stepSize: It is used for the increment between successive values.
vertical: It is used to indicate whether to show the slider in a vertical orientation.

Handle Props:

className: It is used to denote a space-delimited list of class names to pass along to a child element.
intentAfter: It is used to denote the intent for the track segment immediately after this handle.
intentBefore: It is used to denote the intent for the track segment immediately before this handle.
interactionKind: It is used to denote how this handle interacts with other handles.
onChange: It is a callback function that is triggered when the value changes.
onRelease: It is a callback function that is triggered when the handle is released.
trackStyleAfter: It is used to denote the style to use for the track segment immediately after this handle.
trackStyleBefore: It is used to denote the style to use for the track segment immediately before this handle.
type: It is used to denote the handle appearance type.
value: It is used to denote the numeric value of this handle.

Creating React Application And Installing Module:

Step 1: Create a React application using the following command:

npx create-react-app foldername

Step 2: After creating your project folder i.e. foldername, move to it using the following command:

cd foldername

Step 3: After creating the ReactJS application, install the required module using the following command:

npm install @blueprintjs/core

Project Structure: It will look like the following.

Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code.

App.js

import React from 'react'
import '@blueprintjs/core/lib/css/blueprint.css';
import { Slider } from "@blueprintjs/core";

function App() {
    return (
        <div style={{ display: 'block', width: 400, padding: 30 }}>
            <h4>ReactJS Blueprint Slider Component</h4>
            <Slider
                min={0}
                max={100}
                stepSize={10}
                labelStepSize={10}
            />
        </div>
    );
}

export default App;

Step to Run Application: Run the application using the following command from the root directory of the project:

npm start

Output: Now open your browser and go to http://localhost:3000/, you will see the following output:

Reference: https://blueprintjs.com/docs/#core/components/sliders
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Apr, 2022" }, { "code": null, "e": 202, "s": 28, "text": "BlueprintJS is a React-based UI toolkit for the web. This library is very optimized and popular for building interfaces that are complex data-dense for desktop applications." ...
Perl - Subroutines
A Perl subroutine or function is a group of statements that together performs a task. You can divide up your code into separate subroutines. How you divide up your code among different subroutines is up to you, but logically the division usually is so each function performs a specific task. Perl uses the terms subroutine, method and function interchangeably.

The general form of a subroutine definition in the Perl programming language is as follows −

sub subroutine_name {
   body of the subroutine
}

The typical way of calling that Perl subroutine is as follows −

subroutine_name( list of arguments );

In versions of Perl before 5.0, the syntax for calling subroutines was slightly different as shown below. This still works in the newest versions of Perl, but it is not recommended since it bypasses the subroutine prototypes.

&subroutine_name( list of arguments );

Let's have a look into the following example, which defines a simple function and then calls it. Because Perl compiles your program before executing it, it doesn't matter where you declare your subroutine.

#!/usr/bin/perl

# Function definition
sub Hello {
   print "Hello, World!\n";
}

# Function call
Hello();

When the above program is executed, it produces the following result −

Hello, World!

You can pass various arguments to a subroutine like you do in any other programming language and they can be accessed inside the function using the special array @_. Thus the first argument to the function is in $_[0], the second is in $_[1], and so on.

You can pass arrays and hashes as arguments like any scalar, but passing more than one array or hash normally causes them to lose their separate identities. So we will use references (explained in the next chapter) to pass any array or hash.

Let's try the following example, which takes a list of numbers and then prints their average −

#!/usr/bin/perl

# Function definition
sub Average {
   # get total number of arguments passed.
   $n = scalar(@_);

   $sum = 0;
   foreach $item (@_) {
      $sum += $item;
   }
   $average = $sum / $n;
   print "Average for the given numbers : $average\n";
}

# Function call
Average(10, 20, 30);

When the above program is executed, it produces the following result −

Average for the given numbers : 20

Because the @_ variable is an array, it can be used to supply lists to a subroutine. However, because of the way in which Perl accepts and parses lists and arrays, it can be difficult to extract the individual elements from @_. If you have to pass a list along with other scalar arguments, then make the list the last argument, as shown below −

#!/usr/bin/perl

# Function definition
sub PrintList {
   my @list = @_;
   print "Given list is @list\n";
}

$a = 10;
@b = (1, 2, 3, 4);

# Function call with list parameter
PrintList($a, @b);

When the above program is executed, it produces the following result −

Given list is 10 1 2 3 4

When you supply a hash to a subroutine or operator that accepts a list, the hash is automatically translated into a list of key/value pairs. For example −

#!/usr/bin/perl

# Function definition
sub PrintHash {
   my (%hash) = @_;

   foreach my $key ( keys %hash ) {
      my $value = $hash{$key};
      print "$key : $value\n";
   }
}

%hash = ('name' => 'Tom', 'age' => 19);

# Function call with hash parameter
PrintHash(%hash);

When the above program is executed, it produces the following result −

name : Tom
age : 19
Let's try the following example, which takes a list of numbers and then returns their average − #!/usr/bin/perl # Function definition sub Average { # get total number of arguments passed. $n = scalar(@_); $sum = 0; foreach $item (@_) { $sum += $item; } $average = $sum / $n; return $average; } # Function call $num = Average(10, 20, 30); print "Average for the given numbers : $num\n"; When above program is executed, it produces the following result − Average for the given numbers : 20 By default, all variables in Perl are global variables, which means they can be accessed from anywhere in the program. But you can create private variables called lexical variables at any time with the my operator. The my operator confines a variable to a particular region of code in which it can be used and accessed. Outside that region, this variable cannot be used or accessed. This region is called its scope. A lexical scope is usually a block of code with a set of braces around it, such as those defining the body of the subroutine or those marking the code blocks of if, while, for, foreach, and eval statements. Following is an example showing you how to define a single or multiple private variables using my operator − sub somefunc { my $variable; # $variable is invisible outside somefunc() my ($another, @an_array, %a_hash); # declaring many variables at once } Let's check the following example to distinguish between global and private variables − #!/usr/bin/perl # Global variable $string = "Hello, World!"; # Function definition sub PrintHello { # Private variable for PrintHello function my $string; $string = "Hello, Perl!"; print "Inside the function $string\n"; } # Function call PrintHello(); print "Outside the function $string\n"; When above program is executed, it produces the following result − Inside the function Hello, Perl! Outside the function Hello, World! The local is mostly used when the current value of a variable must be visible to called subroutines. 
A local just gives temporary values to global (meaning package) variables. This is known as dynamic scoping. Lexical scoping is done with my, which works more like C's auto declarations. If more than one variable or expression is given to local, they must be placed in parentheses. This operator works by saving the current values of those variables in its argument list on a hidden stack and restoring them upon exiting the block, subroutine, or eval.

Let's check the following example to distinguish between global and local variables −

#!/usr/bin/perl

# Global variable
$string = "Hello, World!";

sub PrintHello {
   # Private variable for PrintHello function
   local $string;
   $string = "Hello, Perl!";

   PrintMe();
   print "Inside the function PrintHello $string\n";
}

sub PrintMe {
   print "Inside the function PrintMe $string\n";
}

# Function call
PrintHello();
print "Outside the function $string\n";

When the above program is executed, it produces the following result −

Inside the function PrintMe Hello, Perl!
Inside the function PrintHello Hello, Perl!
Outside the function Hello, World!

There is another type of lexical variable, which is similar to private variables but maintains its state and does not get reinitialized upon multiple calls of the subroutine. These variables are defined using the state operator and are available starting from Perl 5.9.4.
Let's check the following example to demonstrate the use of state variables − #!/usr/bin/perl use feature 'state'; sub PrintCount { state $count = 0; # initial value print "Value of counter is $count\n"; $count++; } for (1..5) { PrintCount(); } When above program is executed, it produces the following result − Value of counter is 0 Value of counter is 1 Value of counter is 2 Value of counter is 3 Value of counter is 4 Prior to Perl 5.10, you would have to write it like this − #!/usr/bin/perl { my $count = 0; # initial value sub PrintCount { print "Value of counter is $count\n"; $count++; } } for (1..5) { PrintCount(); } The context of a subroutine or statement is defined as the type of return value that is expected. This allows you to use a single function that returns different values based on what the user is expecting to receive. For example, the following localtime() returns a string when it is called in scalar context, but it returns a list when it is called in list context. my $datestring = localtime( time ); In this example, the value of $timestr is now a string made up of the current date and time, for example, Thu Nov 30 15:21:33 2000. Conversely − ($sec,$min,$hour,$mday,$mon, $year,$wday,$yday,$isdst) = localtime(time); Now the individual variables contain the corresponding values returned by localtime() subroutine.
[ { "code": null, "e": 2646, "s": 2354, "text": "A Perl subroutine or function is a group of statements that together performs a task. You can divide up your code into separate subroutines. How you divide up your code among different subroutines is up to you, but logically the division usually is so e...
How to change the text color of an element in HTML?
Use the color attribute of the <font> tag in HTML to set the color of the text.

Note − This attribute is not supported in HTML5; use the CSS color property instead.

You can try to run the following code to learn how to implement the color attribute in HTML −

<!DOCTYPE html>
<html>
   <head>
      <title>HTML Text Color</title>
   </head>
   <body>
      <table width = "100%">
         <tr>
            <td>
               <p><font color="blue">This is demo text.</font></p>
            </td>
         </tr>
      </table>
   </body>
</html>
[ { "code": null, "e": 1253, "s": 1187, "text": "Use the color attribute in HTML to display the color of the text." }, { "code": null, "e": 1302, "s": 1253, "text": "Note − This attribute is not supported in HTML5." }, { "code": null, "e": 1392, "s": 1302, "text...
Different methods to copy in C++ STL | std::copy(), copy_n(), copy_if(), copy_backward()
18 Feb, 2021

Various varieties of copy() exist in C++ STL that allow performing the copy operations in different manners, all of them having their own use. These are all defined in the header <algorithm>. This article introduces everyone to these functions for usage in day-to-day programming.

1. copy(strt_iter1, end_iter1, strt_iter2): The generic copy function, used to copy a range of elements from one container to another. It takes 3 arguments:

strt_iter1: The pointer to the beginning of the source container, from where elements have to be started copying.
end_iter1: The pointer to the end of the source container, till where elements have to be copied.
strt_iter2: The pointer to the beginning of the destination container, to where elements have to be started copying.

2. copy_n(strt_iter1, num, strt_iter2): This version of copy gives the freedom to choose how many elements have to be copied to the destination container. It also takes 3 arguments:

strt_iter1: The pointer to the beginning of the source container, from where elements have to be started copying.
num: Integer specifying how many numbers will be copied to the destination container starting from strt_iter1. If a negative number is entered, no operation is performed.
strt_iter2: The pointer to the beginning of the destination container, to where elements have to be started copying.
CPP // C++ code to demonstrate the working of copy()// and copy_n() #include<iostream>#include<algorithm> // for copy() and copy_n()#include<vector>using namespace std; int main(){ // initializing source vector vector<int> v1 = { 1, 5, 7, 3, 8, 3 }; // declaring destination vectors vector<int> v2(6); vector<int> v3(6); // using copy() to copy 1st 3 elements copy(v1.begin(), v1.begin()+3, v2.begin()); // printing new vector cout << "The new vector elements entered using copy() : "; for(int i=0; i<v2.size(); i++) cout << v2[i] << " "; cout << endl; // using copy_n() to copy 1st 4 elements copy_n(v1.begin(), 4, v3.begin()); // printing new vector cout << "The new vector elements entered using copy_n() : "; for(int i=0; i<v3.size(); i++) cout << v3[i] << " "; } Output: The new vector elements entered using copy() : 1 5 7 0 0 0 The new vector elements entered using copy_n() : 1 5 7 3 0 0 3. copy_if(): As the name suggests, this function copies according to the result of a “condition“.This is provided with the help of a 4th argument, a function returning a boolean value. This function takes 4 arguments, 3 of them similar to copy() and an additional function, which when returns true, a number is copied, else number is not copied.4. copy_backward(): This function starts copying elements into the destination container from backward and keeps on copying till all numbers are not copied. The copying starts from the “strt_iter2” but in the backward direction. It also takes similar arguments as copy(). 
CPP // C++ code to demonstrate the working of copy_if()// and copy_backward() #include<iostream>#include<algorithm> // for copy_if() and copy_backward()#include<vector>using namespace std; int main(){ // initializing source vector vector<int> v1 = { 1, 5, 6, 3, 8, 3 }; // declaring destination vectors vector<int> v2(6); vector<int> v3(6); // using copy_if() to copy odd elements copy_if(v1.begin(), v1.end(), v2.begin(), [](int i){return i%2!=0;}); // printing new vector cout << "The new vector elements entered using copy_if() : "; for(int i=0; i<v2.size(); i++) cout << v2[i] << " "; cout << endl; // using copy_backward() to copy 1st 4 elements // ending at second last position copy_backward(v1.begin(), v1.begin() + 4, v3.begin()+ 5); // printing new vector cout << "The new vector elements entered using copy_backward() : "; for(int i=0; i<v3.size(); i++) cout << v3[i] << " "; } Output: The new vector elements entered using copy_if() : 1 5 3 3 0 0 The new vector elements entered using copy_backward() : 0 1 5 6 3 0 5. Copy using inserter(): Before copy() operation let us understand the syntax of inserter(). inserter() is used as a destination that where we want to copy the elements of the container. inserter() takes two parameters. The first is a container of arbitrary type and the second is an iterator into the container. It returns an instance of insert_iterator working on a container of arbitrary type. This wrapper function helps in creating insert_iterator instances. Typing the name of the %iterator requires knowing the precise full type of the container, which can be tedious and impedes generic programming. Using this function lets you take advantage of automatic template parameter deduction, making the compiler match the correct types for you. The syntax for inserter(): std::inserter(Container& x, typename Container::iterator it); x: Destination container where the new elements will be inserted. it: Iterator pointing to the insertion point. 
Returns: An insert_iterator that inserts elements into x at the position indicated by it.

The syntax for copy using inserter():

copy(strt_iter1, end_iter1, inserter(Container& x, typename Container::iterator it));

C++

// C++ code to demonstrate the working of copy() using inserter()
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v1 = { 1, 5, 7, 3, 8, 3 };
    vector<int> v2;

    // iterator marking the insertion point
    // (initialized to avoid using an indeterminate iterator)
    vector<int>::iterator itr = v2.begin();

    // using inserter()
    copy(v1.begin(), v1.end(), inserter(v2, itr));

    cout << "\nThe new vector elements entered using inserter: ";
    for (int i = 0; i < v2.size(); i++)
        cout << v2[i] << " ";
}

Output:

The new vector elements entered using inserter: 1 5 7 3 8 3

This article is contributed by Manjeet Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.

Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
cmp(list) method in Python
13 Oct, 2020

cmp(list) is a method specified in Number in Python 2. The comparison of integral numbers using cmp() has been discussed earlier. But many a time there is a need to compare an entire list, which can be composed of similar or different data types. In this case, different scenarios occur, and having knowledge of them can prove to be quite handy. This function takes 2 lists as input and checks whether the first argument list is greater than, equal to or smaller than the second argument list.

Syntax : cmp(list1, list2)

Parameters :
list1 : The first argument list to be compared.
list2 : The second argument list to be compared.

Returns : This function returns 1 if the first list is "greater" than the second list, -1 if the first list is smaller than the second list, and 0 if both lists are equal.

There are certain case scenarios when we need to decide whether one list is smaller than, greater than or equal to the other list.

Case 1 : When the list contains just integers. This is the case when all the elements in the list are of type integer; the comparison is made number by number from left to right, and if we get a larger number at any particular index, that list is termed greater and further comparisons stop. If all the compared elements are equal and one list is larger (in size) than the other, the larger list is considered greater.

Code #1 : Demonstrating cmp() using only integers.

# Python code to demonstrate
# the working of cmp()
# only integer case.

# initializing argument lists
list1 = [ 1, 2, 4, 3]
list2 = [ 1, 2, 5, 8]
list3 = [ 1, 2, 5, 8, 10]
list4 = [ 1, 2, 4, 3]

# Comparing lists
print "Comparison of list2 with list1 : ",
print cmp(list2, list1)

# prints -1, because list3 has larger size than list2
print "Comparison of list2 with list3(larger size) : ",
print cmp(list2, list3)

# prints 0 as list1 and list4 are equal
print "Comparison of list4 with list1(equal) : ",
print cmp(list4, list1)

Output:

Comparison of list2 with list1 :  1
Comparison of list2 with list3(larger size) :  -1
Comparison of list4 with list1(equal) :  0

Case 2 : When the list contains multiple datatypes. When more than one datatype, e.g. a string, is contained in the list, a string is considered greater than an integer; in this way, all datatypes are ordered alphabetically by type in comparisons. The size rule remains intact in this case.

Code #2 : Demonstrating cmp() using multiple data types.

# Python code to demonstrate
# the working of cmp()
# multiple data types

# initializing argument lists
list1 = [ 1, 2, 4, 10]
list2 = [ 1, 2, 4, 'a']
list3 = [ 'a', 'b', 'c']
list4 = [ 'a', 'c', 'b']

# Comparing lists

# prints 1 because string
# at end compared to number
# string is greater
print "Comparison of list2 with list1 : ",
print cmp(list2, list1)

# prints -1, because list3
# has an alphabet at beginning
# even though size of list2
# is greater, Comparison
# is terminated at 1st
# element itself.
print "Comparison of list2 with list3(larger size) : ",
print cmp(list2, list3)

# prints -1 as list4 is greater than
# list3
print "Comparison of list3 with list4 : ",
print cmp(list3, list4)

Output:

Comparison of list2 with list1 :  1
Comparison of list2 with list3(larger size) :  -1
Comparison of list3 with list4 :  -1
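Note that cmp() exists only in Python 2; it was removed in Python 3. For lists of mutually comparable elements (Case 1 above), an equivalent one-line helper can be written with the rich comparison operators. This is an illustrative sketch, not a standard-library function, and unlike Python 2, comparing mixed types such as int and str raises a TypeError in Python 3:

```python
# Python 3 sketch of cmp() for lists: list comparison still
# follows the same element-by-element rules, so the 1/-1/0
# result can be rebuilt from the < and > operators.
def cmp_lists(list1, list2):
    # True/False subtract as 1/0, giving 1, -1 or 0
    return (list1 > list2) - (list1 < list2)

print(cmp_lists([1, 2, 5, 8], [1, 2, 4, 3]))      # 1
print(cmp_lists([1, 2, 5, 8], [1, 2, 5, 8, 10]))  # -1
print(cmp_lists([1, 2, 4, 3], [1, 2, 4, 3]))      # 0
```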
How to Plot Mean and Standard Deviation in Pandas?
23 Jul, 2021

An errorbar is a plotted chart that shows the errors contained in a data frame, conveying the confidence and precision of a set of measurements or calculated values. Error bars help in showing the actual and exact missing parts as well as visually displaying the errors in different areas of the data frame. Error bars describe the variance in the data and help in making proper changes so that the data becomes more insightful and impactful for users.

Here we discuss how to plot an errorbar with mean and standard deviation after grouping the data frame with certain applied conditions, so that the errors become more truthful and the visualizations more useful.

Modules Needed:

pip install numpy
pip install pandas
pip install matplotlib

Here is the DataFrame from which we illustrate the errorbars with mean and std:

Python3

# Import the necessary libraries to read
# dataset and work on that
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Make the dataframe for evaluation on Errorbars
df = pd.DataFrame({
    'insert': [0.0, 0.1, 0.3, 0.5, 1.0],
    'mean': [0.009905, 0.45019, 0.376818, 0.801856, 0.643859],
    'quality': ['good', 'good', 'poor', 'good', 'poor'],
    'std': [0.003662, 0.281895, 0.306806, 0.243288, 0.322378]})

print(df)

Output:

Sample DataFrame

groupby the subplots with mean and std to get error bars:

Python3

# Subplots as having two types of quality
fig, ax = plt.subplots()

for key, group in df.groupby('quality'):
    group.plot('insert', 'mean', yerr='std',
               label=key, ax=ax)

plt.show()

Output:

Example 1: ErrorBar with group-plot

Now we see error bars using NumPy keywords of mean and std:

Python3

# Groupby the quality column using aggregate
# value of mean and std
qual = df.groupby("quality").agg([np.mean, np.std])
qual = qual['insert']
qual.plot(kind = "barh", y = "mean", legend = False,
          xerr = "std", title = "Quality", color='green')

Output:

Example 2: ErrorBar with Bar Plot

From the above example, we can see that the errors for poor quality are higher than for good, even though there are more good values in the data frame. Now, we move to another example with the data frame below:

Dataset – Toast

From the above data frame, we have to manipulate the data to get the errorbars, using the 'type' column which has different prices of the bags. To manipulate and perform calculations, we use the df.groupby function, which groups on a field and applies the given functions to evaluate the result. We are using two inbuilt functions, mean and std:

df.groupby("col_to_group_by").agg([func_1, func_2, func_3, .....])

Python3

# reading the dataset
df = pd.read_csv('Toast.csv')
df_prices = df.groupby("type").agg([np.mean, np.std])

As we have to evaluate the average price, apply this groupby on 'AveragePrice'. Also, check the result of prices and display the errorbars with the visualization.

Python3

prices = df_prices['AveragePrice']

# checking for results
prices.head()

Output:

Result: the aggregate value of groupby()

Errorbar using Mean:

Python3

prices.plot(kind = "barh", y = "mean", legend = False,
            title = "Average Prices")

Output:

Example 3: Errorbar with Mean

From the above visualization, it is clear that organic has a higher mean price than conventional.

Errorbar using Standard Deviation (std):

Python3

prices.plot(kind = "barh", y = "mean", legend = False,
            title = "Average Prices", xerr = "std")

Output:

Example 4: Errorbar with Std

Error bars are easy to add and, with good estimates of variance, make the interpretation of a data frame considerably more insightful.
Enumerated Types or Enums in C++
16 Mar, 2022

An enumerated type (enumeration) is a user-defined data type which can be assigned some limited set of values. These values are defined by the programmer at the time of declaring the enumerated type. If we assign a float value to a character variable, the compiler generates an error; in the same way, if we try to assign any other value to an enumerated data type, the compiler generates an error. The values of an enumerated type are also known as enumerators. By default they are numbered starting from zero, the same as an array, and an enumerated type can also be used with switch statements.

For example: if a gender variable is created with value male or female, then assigning any value other than male or female is not appropriate. In this situation, one can declare an enumerated type in which only male and female values are assigned.

Syntax:

enum enumerated-type-name{value1, value2, value3.....valueN};

The enum keyword is used to declare enumerated types; after that the enumerated type name is written, and then the possible values are defined under curly brackets. After defining the enumerated type, variables are created. They can be created in two ways:

1. They can be declared while declaring the enumerated type: just add the name of the variable before the semicolon; or,
2. They can be created in the same way as normal variables:

enumerated-type-name variable-name = value;

By default, the starting code value of the first element of an enum is 0 (as in the case of an array), but it can be changed explicitly. For example:

enum enumerated-type-name{value1=1, value2, value3};

And the consecutive values of the enum will take the next code value(s).
For example:

//first_enum is the enumerated-type-name
enum first_enum{value1=1, value2=10, value3};

In this case,

first_enum e;
e = value3;
cout << e;

Output:

11

Example 1:

CPP

#include <bits/stdc++.h>
using namespace std;

int main()
{
    // Defining enum Gender
    enum Gender { Male, Female };

    // Creating Gender type variable
    Gender gender = Male;

    switch (gender) {
    case Male:
        cout << "Gender is Male";
        break;
    case Female:
        cout << "Gender is Female";
        break;
    default:
        cout << "Value can be Male or Female";
    }
    return 0;
}

Output:

Gender is Male

Example 2:

CPP

#include <bits/stdc++.h>
using namespace std;

// Defining enum Year
enum year {
    Jan, Feb, Mar, Apr, May, Jun,
    Jul, Aug, Sep, Oct, Nov, Dec
};

// Driver Code
int main()
{
    int i;

    // Traversing the year enum
    for (i = Jan; i <= Dec; i++)
        cout << i << " ";

    return 0;
}

Output:

0 1 2 3 4 5 6 7 8 9 10 11
Maximize sum after K negations | Practice | GeeksforGeeks
Given an array of integers of size N and a number K, you must modify the array arr[] exactly K number of times. Here modifying the array means that in each operation you can replace any array element, either arr[i] by -arr[i] or -arr[i] by arr[i]. You need to perform these operations in such a way that after K operations, the sum of the array is maximum.

Example 1:

Input: N = 5, K = 1
arr[] = {1, 2, -3, 4, 5}
Output: 15
Explanation: We have k=1, so we can change -3 to 3 and sum all the elements to produce 15 as output.

Example 2:

Input: N = 10, K = 5
arr[] = {5, -2, 5, -4, 5, -12, 5, 5, 5, 20}
Output: 68
Explanation: Here we have k=5, so we turn -2, -4, -12 to 2, 4, and 12 respectively. Since we have performed 3 operations, k is now 2. To get the maximum sum of the array we can turn a positive element negative and then positive again, so k becomes 0. Now the sum is 5+5+4+5+12+5+5+5+20+2 = 68.

Your Task:
You don't have to print anything; printing is done by the driver code itself. You have to complete the function maximizeSum() which takes the array A[], its size N, and an integer K as inputs and returns the maximum possible sum.
Expected Time Complexity: O(N*logN)
Expected Auxiliary Space: O(1)

Constraints:
1 ≤ N, K ≤ 10^5
-10^9 ≤ Ai ≤ 10^9

kushwahdeepu5055 (15 minutes ago):
For N = 5, K = 5 and arr = 1 2 3 4 5, will the correct answer be 13 or 14? My answer comes out as 14, which is being marked wrong. Please help.

skumarsahu5545 (8 hours ago):
C++ solution

class Solution{
public:
    long long int maximizeSum(long long int a[], int n, int k)
    {
        vector<int> arr;
        for (int i = 0; i < n; i++) {
            arr.push_back(a[i]);
        }
        sort(arr.begin(), arr.end());
        int i = 0;
        while (k > 0 && i < n) {
            if (arr[i] < 0) {
                arr[i] = arr[i] * (-1);
                k--;
            }
            i++;
        }
        if (k % 2 != 0) {
            int min = *min_element(arr.begin(), arr.end());
            auto it = find(arr.begin(), arr.end(), min);
            int index = it - arr.begin();
            arr[index] *= (-1);
        }
        long long ans = 0;
        for (int k = 0; k < n; k++) {
            ans += arr[k];
        }
        return ans;
    }
};

akshaykumarmaurya (4 days ago):

// C++ code in easy way
long long int maximizeSum(long long int a[], int n, int k)
{
    sort(a, a + n);
    int i;
    for (i = 0; i < k; i++) {
        if (0 > a[i]) {
            a[i] = -a[i];
        }
        else
            break;
    }
    long long int sum = 0;
    if ((k - i) % 2 == 0) {
        for (int j = 0; j < n; j++) {
            sum += a[j];
        }
        return sum;
    }
    sort(a, a + n);
    a[0] = -a[0];
    for (int i = 0; i < n; i++) {
        sum += a[i];
    }
    return sum;
}

nitishmishra937 (1 week ago):
Java | Heap

public static long maximizeSum(long a[], int n, int k) {
    PriorityQueue<Long> heap = new PriorityQueue<>();
    for (long e : a) {
        heap.add(e);
    }
    if (heap.peek() > 0) {
        if (k == 1 || k % 2 != 0) {
            heap.add(heap.remove() * -1);
        }
    } else {
        for (int i = 0; i < k; ++i) {
            heap.add(heap.remove() * -1);
        }
    }
    int sum = 0;
    while (!heap.isEmpty()) {
        sum += heap.remove();
    }
    return sum;
}
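Putting the comment solutions above together, the expected O(N*logN) greedy can be sketched in Python as follows. This is an illustrative sketch (the function name maximize_sum is my own), not the official editorial:

```python
# Greedy sketch: sort, spend flips on negatives from most
# negative upward; leftover flips cancel in pairs, and a single
# leftover flip is best spent on the smallest element.
def maximize_sum(arr, k):
    arr = sorted(arr)
    for i in range(len(arr)):
        if k > 0 and arr[i] < 0:
            arr[i] = -arr[i]
            k -= 1
    if k % 2 == 1:
        # an odd number of remaining flips forces exactly one
        # negation; apply it to the current minimum element
        arr[arr.index(min(arr))] *= -1
    return sum(arr)

print(maximize_sum([1, 2, -3, 4, 5], 1))                     # 15
print(maximize_sum([5, -2, 5, -4, 5, -12, 5, 5, 5, 20], 5))  # 68
```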
Python Program for Gnome Sort
22 Jun, 2022

Algorithm Steps:

1. If you are at the start of the array then go to the right element (from arr[0] to arr[1]).
2. If the current array element is larger than or equal to the previous array element then go one step right:

   if (arr[i] >= arr[i-1])
       i++;

3. If the current array element is smaller than the previous array element then swap these two elements and go one step backwards:

   if (arr[i] < arr[i-1])
   {
       swap(arr[i], arr[i-1]);
       i--;
   }

4. Repeat steps 2) and 3) till 'i' reaches the end of the array (i.e. 'n-1').
5. If the end of the array is reached then stop; the array is sorted.

Python

# Python program to implement Gnome Sort

# A function to sort the given list using Gnome sort
def gnomeSort(arr, n):
    index = 0
    while index < n:
        if index == 0:
            index = index + 1
        if arr[index] >= arr[index - 1]:
            index = index + 1
        else:
            arr[index], arr[index - 1] = arr[index - 1], arr[index]
            index = index - 1
    return arr

# Driver Code
arr = [34, 2, 10, -9]
n = len(arr)

arr = gnomeSort(arr, n)
print "Sorted sequence after applying Gnome Sort :",
for i in arr:
    print i,

# Contributed By Harshit Agrawal

Output:

Sorted sequence after applying Gnome Sort : -9 2 10 34

Time Complexity: O(n^2)
Auxiliary Space: O(1)

Please refer to the complete article on Gnome Sort for more details!
DNSRecon – A powerful DNS enumeration script
14 Sep, 2021

DNSRecon is a free and open-source tool, or script, that is available on GitHub. Dnsrecon is one of the popular scripts in the security community used for reconnaissance on domains. The script is written in the Python language, so you must have Python installed on your Kali Linux operating system in order to use it. This script checks all the DNS records for AXFR, which can be useful for a security researcher for DNS enumeration on all types of records such as SOA, NS, TXT, SRV, SPF, etc. This script also uses Google dorks for fetching subdomains indexed by Googlebot.

Step 1: Open your Kali Linux operating system and use the following command to install the tool.

git clone https://github.com/darkoperator/dnsrecon.git

Step 2: Now use the following command to move into the directory of the tool.

cd dnsrecon

Step 3: Now use the following command to install the dependencies of the tool.

pip3 install -r requirements.txt --no-warn-script-location

Step 4: Now use the following command to run the tool.

python3 dnsrecon.py -h

The tool has been installed and is running successfully. Now we will see examples of how to use the tool.

Example 1: Use dnsrecon for base domain enumeration.

python3 dnsrecon.py -d <domain>

Example 2: Use dnsrecon for zone walking.

python3 dnsrecon.py -d <domain> -t zonewalk
Sum of all pair shortest paths in a Tree
22 Jun, 2021 Given a weighted undirected graph T consisting of nodes valued [0, N – 1] and an array Edges[][3] of type {u, v, w} that denotes an edge between vertices u and v having weight w. The task is to find the sum of all pair shortest paths in the given tree. Examples: Input: N = 3, Edges[][] = {{0, 2, 15}, {1, 0, 90}}Output: 210Explanation: Sum of weights of path between nodes 0 and 1 = 90Sum of weights of path between nodes 0 and 2 = 15Sum of weights of path between nodes 1 and 2 = 105Hence, sum = 90 + 15 + 105 Input: N = 4, Edges[][] = {{0, 1, 1}, {1, 2, 2}, {2, 3, 3}}Output: 20Explanation:Sum of weights of path between nodes 0 and 1 = 1Sum of weights of path between nodes 0 and 2 = 3Sum of weights of path between nodes 0 and 3 = 6Sum of weights of path between nodes 1 and 2 = 2Sum of weights of path between nodes 1 and 3 = 5Sum of weights of path between nodes 2 and 3 = 3Hence, sum = 1 + 3 + 6 + 2 + 5 + 3 = 20. Naive Approach: The simplest approach is to find the shortest path between every pair of vertices using the Floyd Warshall Algorithm. After precomputing the cost of the shortest path between every pair of nodes, print the sum of all the shortest paths. 
Below is the implementation of the above approach: C++ Java Python3 C# Javascript // C++ program for the above approach #include <iostream>using namespace std;#define INF 99999 // Function that performs the Floyd// Warshall to find all shortest pathsint floyd_warshall(int* graph, int V){ int dist[V][V], i, j, k; // Initialize the distance matrix for (i = 0; i < V; i++) { for (j = 0; j < V; j++) { dist[i][j] = *((graph + i * V) + j); } } for (k = 0; k < V; k++) { // Pick all vertices as // source one by one for (i = 0; i < V; i++) { // Pick all vertices as // destination for the // above picked source for (j = 0; j < V; j++) { // If vertex k is on the // shortest path from i to // j then update dist[i][j] if (dist[i][k] + dist[k][j] < dist[i][j]) { dist[i][j] = dist[i][k] + dist[k][j]; } } } } // Sum the upper diagonal of the // shortest distance matrix int sum = 0; // Traverse the given dist[][] for (i = 0; i < V; i++) { for (j = i + 1; j < V; j++) { // Add the distance sum += dist[i][j]; } } // Return the final sum return sum;} // Function to generate the treeint sumOfshortestPath(int N, int E, int edges[][3]){ int g[N][N]; for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { g[i][j] = INF; } } // Add edges for (int i = 0; i < E; i++) { // Get source and destination // with weight int u = edges[i][0]; int v = edges[i][1]; int w = edges[i][2]; // Add the edges g[u][v] = w; g[v][u] = w; } // Perform Floyd Warshal Algorithm return floyd_warshall((int*)g, N);} // Driver codeint main(){ // Number of Vertices int N = 4; // Number of Edges int E = 3; // Given Edges with weight int Edges[][3] = { { 0, 1, 1 }, { 1, 2, 2 }, { 2, 3, 3 } }; // Function Call cout << sumOfshortestPath(N, E, Edges); return 0;} // Java program for// the above approachclass GFG{ static final int INF = 99999; // Function that performs the Floyd// Warshall to find all shortest pathsstatic int floyd_warshall(int[][] graph, int V){ int [][]dist = new int[V][V]; int i, j, k; // Initialize the 
distance matrix for (i = 0; i < V; i++) { for (j = 0; j < V; j++) { dist[i][j] = graph[i][j]; } } for (k = 0; k < V; k++) { // Pick all vertices as // source one by one for (i = 0; i < V; i++) { // Pick all vertices as // destination for the // above picked source for (j = 0; j < V; j++) { // If vertex k is on the // shortest path from i to // j then update dist[i][j] if (dist[i][k] + dist[k][j] < dist[i][j]) { dist[i][j] = dist[i][k] + dist[k][j]; } } } } // Sum the upper diagonal of the // shortest distance matrix int sum = 0; // Traverse the given dist[][] for (i = 0; i < V; i++) { for (j = i + 1; j < V; j++) { // Add the distance sum += dist[i][j]; } } // Return the final sum return sum;} // Function to generate the treestatic int sumOfshortestPath(int N, int E, int edges[][]){ int [][]g = new int[N][N]; for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { g[i][j] = INF; } } // Add edges for (int i = 0; i < E; i++) { // Get source and destination // with weight int u = edges[i][0]; int v = edges[i][1]; int w = edges[i][2]; // Add the edges g[u][v] = w; g[v][u] = w; } // Perform Floyd Warshal Algorithm return floyd_warshall(g, N);} // Driver codepublic static void main(String[] args){ // Number of Vertices int N = 4; // Number of Edges int E = 3; // Given Edges with weight int Edges[][] = {{0, 1, 1}, {1, 2, 2}, {2, 3, 3}}; // Function Call System.out.print( sumOfshortestPath(N, E, Edges));}} // This code is contributed by 29AjayKumar # Python3 program for the above approachINF = 99999 # Function that performs the Floyd# Warshall to find all shortest pathsdef floyd_warshall(graph, V): dist = [[0 for i in range(V)] for i in range(V)] # Initialize the distance matrix for i in range(V): for j in range(V): dist[i][j] = graph[i][j] for k in range(V): # Pick all vertices as # source one by one for i in range(V): # Pick all vertices as # destination for the # above picked source for j in range(V): # If vertex k is on the # shortest path from i to # j then update 
dist[i][j] if (dist[i][k] + dist[k][j] < dist[i][j]): dist[i][j] = dist[i][k] + dist[k][j] # Sum the upper diagonal of the # shortest distance matrix sum = 0 # Traverse the given dist[][] for i in range(V): for j in range(i + 1, V): # Add the distance sum += dist[i][j] # Return the final sum return sum # Function to generate the treedef sumOfshortestPath(N, E,edges): g = [[INF for i in range(N)] for i in range(N)] # Add edges for i in range(E): # Get source and destination # with weight u = edges[i][0] v = edges[i][1] w = edges[i][2] # Add the edges g[u][v] = w g[v][u] = w # Perform Floyd Warshal Algorithm return floyd_warshall(g, N) # Driver codeif __name__ == '__main__': # Number of Vertices N = 4 # Number of Edges E = 3 # Given Edges with weight Edges = [ [ 0, 1, 1 ], [ 1, 2, 2 ], [ 2, 3, 3 ] ] # Function Call print(sumOfshortestPath(N, E, Edges)) # This code is contributed by mohit kumar 29 // C# program for// the above approachusing System;class GFG{ static readonly int INF = 99999; // Function that performs the Floyd// Warshall to find all shortest pathsstatic int floyd_warshall(int[,] graph, int V){ int [,]dist = new int[V, V]; int i, j, k; // Initialize the distance matrix for (i = 0; i < V; i++) { for (j = 0; j < V; j++) { dist[i, j] = graph[i, j]; } } for (k = 0; k < V; k++) { // Pick all vertices as // source one by one for (i = 0; i < V; i++) { // Pick all vertices as // destination for the // above picked source for (j = 0; j < V; j++) { // If vertex k is on the // shortest path from i to // j then update dist[i,j] if (dist[i, k] + dist[k, j] < dist[i, j]) { dist[i, j] = dist[i, k] + dist[k, j]; } } } } // Sum the upper diagonal of the // shortest distance matrix int sum = 0; // Traverse the given dist[,] for (i = 0; i < V; i++) { for (j = i + 1; j < V; j++) { // Add the distance sum += dist[i, j]; } } // Return the readonly sum return sum;} // Function to generate the treestatic int sumOfshortestPath(int N, int E, int [,]edges){ int [,]g = new int[N, 
N]; for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { g[i, j] = INF; } } // Add edges for (int i = 0; i < E; i++) { // Get source and destination // with weight int u = edges[i, 0]; int v = edges[i, 1]; int w = edges[i, 2]; // Add the edges g[u, v] = w; g[v, u] = w; } // Perform Floyd Warshal Algorithm return floyd_warshall(g, N);} // Driver codepublic static void Main(String[] args){ // Number of Vertices int N = 4; // Number of Edges int E = 3; // Given Edges with weight int [,]Edges = {{0, 1, 1}, {1, 2, 2}, {2, 3, 3}}; // Function Call Console.Write(sumOfshortestPath(N, E, Edges));}} // This code is contributed by 29AjayKumar <script>// Javascript program for// the above approachlet INF = 99999; // Function that performs the Floyd// Warshall to find all shortest pathsfunction floyd_warshall(graph,V){ let dist = new Array(V); for(let i = 0; i < V; i++) { dist[i] = new Array(V); } let i, j, k; // Initialize the distance matrix for (i = 0; i < V; i++) { for (j = 0; j < V; j++) { dist[i][j] = graph[i][j]; } } for (k = 0; k < V; k++) { // Pick all vertices as // source one by one for (i = 0; i < V; i++) { // Pick all vertices as // destination for the // above picked source for (j = 0; j < V; j++) { // If vertex k is on the // shortest path from i to // j then update dist[i][j] if (dist[i][k] + dist[k][j] < dist[i][j]) { dist[i][j] = dist[i][k] + dist[k][j]; } } } } // Sum the upper diagonal of the // shortest distance matrix let sum = 0; // Traverse the given dist[][] for (i = 0; i < V; i++) { for (j = i + 1; j < V; j++) { // Add the distance sum += dist[i][j]; } } // Return the final sum return sum;} // Function to generate the treefunction sumOfshortestPath(N,E,edges){ let g = new Array(N); for (let i = 0; i < N; i++) { g[i] = new Array(N); for (let j = 0; j < N; j++) { g[i][j] = INF; } } // Add edges for (let i = 0; i < E; i++) { // Get source and destination // with weight let u = edges[i][0]; let v = edges[i][1]; let w = edges[i][2]; // Add the edges 
g[u][v] = w; g[v][u] = w; } // Perform Floyd Warshal Algorithm return floyd_warshall(g, N);} // Driver code// Number of Verticeslet N = 4; // Number of Edgeslet E = 3; // Given Edges with weightlet Edges = [[0, 1, 1], [1, 2, 2],[2, 3, 3]]; // Function Calldocument.write(sumOfshortestPath(N, E, Edges)); // This code is contributed by patel2127</script> 20 Time Complexity:O(N3), where N is the number of vertices.Auxiliary Space: O(N) Efficient Approach: The idea is to use the DFS algorithm, using the DFS, for each vertex, the cost to visit every other vertex from this vertex can be found in linear time. Follow the below steps to solve the problem: Traverse the nodes 0 to N – 1.For each node i, find the sum of the cost to visit every other vertex using DFS where the source will be node i, and let’s denote this sum by Si.Now, calculate S = S0 + S1 + ... + SN-1. and divide S by 2 because every path is calculated twice.After completing the above steps, print the value of sum S obtained. Traverse the nodes 0 to N – 1. For each node i, find the sum of the cost to visit every other vertex using DFS where the source will be node i, and let’s denote this sum by Si. Now, calculate S = S0 + S1 + ... + SN-1. and divide S by 2 because every path is calculated twice. After completing the above steps, print the value of sum S obtained. 
Below is the implementation of the above approach: C++ Java Python3 C# Javascript // C++ program for the above approach#include<bits/stdc++.h>using namespace std; // Function that performs the DFS// traversal to find cost to reach// from vertex v to other vertexesvoid dfs(int v, int p, vector<pair<int, int>> t[], int h, int ans[]){ // Traverse the Adjacency list // of u for(pair<int, int> u : t[v]) { if (u.first == p) continue; // Recursive Call dfs(u.first, v, t, h + u.second, ans); } // Update ans[v] ans[v] = h;} // Function to find the sum of// weights of all pathsint solve(int n, int edges[][3]){ // Stores the Adjacency List vector<pair<int, int>> t[n]; // Store the edges for(int i = 0; i < n - 1; i++) { t[edges[i][0]].push_back({edges[i][1], edges[i][2]}); t[edges[i][1]].push_back({edges[i][0], edges[i][2]}); } // sum is the answer int sum = 0; // Calculate sum for each vertex for(int i = 0; i < n; i++) { int ans[n]; // Perform the DFS Traversal dfs(i, -1, t, 0, ans); // Sum of distance for(int j = 0; j < n; j++) sum += ans[j]; } // Return the final sum return sum / 2;} // Driver Codeint main(){ // No of vertices int N = 4; // Given Edges int edges[][3] = { { 0, 1, 1 }, { 1, 2, 2 }, { 2, 3, 3 } }; // Function Call cout << solve(N, edges) << endl; return 0;} // This code is contributed by pratham76 // Java program for the above approach import java.io.*;import java.awt.*;import java.io.*;import java.util.*; @SuppressWarnings("unchecked")class GFG { // Function that performs the DFS // traversal to find cost to reach // from vertex v to other vertexes static void dfs(int v, int p, ArrayList<Point> t[], int h, int ans[]) { // Traverse the Adjacency list // of u for (Point u : t[v]) { if (u.x == p) continue; // Recursive Call dfs(u.x, v, t, h + u.y, ans); } // Update ans[v] ans[v] = h; } // Function to find the sum of // weights of all paths static int solve(int n, int edges[][]) { // Stores the Adjacency List ArrayList<Point> t[] = new ArrayList[n]; for (int i = 
0; i < n; i++) t[i] = new ArrayList<>(); // Store the edges for (int i = 0; i < n - 1; i++) { t[edges[i][0]].add( new Point(edges[i][1], edges[i][2])); t[edges[i][1]].add( new Point(edges[i][0], edges[i][2])); } // sum is the answer int sum = 0; // Calculate sum for each vertex for (int i = 0; i < n; i++) { int ans[] = new int[n]; // Perform the DFS Traversal dfs(i, -1, t, 0, ans); // Sum of distance for (int j = 0; j < n; j++) sum += ans[j]; } // Return the final sum return sum / 2; } // Driver Code public static void main(String[] args) { // No of vertices int N = 4; // Given Edges int edges[][] = new int[][] { { 0, 1, 1 }, { 1, 2, 2 }, { 2, 3, 3 } }; // Function Call System.out.println(solve(N, edges)); }} # Python3 program for the above approach # Function that performs the DFS# traversal to find cost to reach# from vertex v to other vertexesdef dfs(v, p, t, h, ans): # Traverse the Adjacency list # of u for u in t[v]: if (u[0] == p): continue # Recursive Call dfs(u[0], v, t, h + u[1], ans) # Update ans[v] ans[v] = h # Function to find the sum of# weights of all pathsdef solve(n, edges): # Stores the Adjacency List t = [[] for i in range(n)] # Store the edges for i in range(n - 1): t[edges[i][0]].append([edges[i][1], edges[i][2]]) t[edges[i][1]].append([edges[i][0], edges[i][2]]) # sum is the answer sum = 0 # Calculate sum for each vertex for i in range(n): ans = [0 for i in range(n)] # Perform the DFS Traversal dfs(i, -1, t, 0, ans) # Sum of distance for j in range(n): sum += ans[j] # Return the final sum return sum // 2 # Driver Codeif __name__ == "__main__": # No of vertices N = 4 # Given Edges edges = [ [ 0, 1, 1 ], [ 1, 2, 2 ], [ 2, 3, 3 ] ] # Function Call print(solve(N, edges)) # This code is contributed by rutvik_56 // C# program for the above approachusing System;using System.Collections.Generic; class GFG{ // Function that performs the DFS// traversal to find cost to reach// from vertex v to other vertexesstatic void dfs(int v, int p, List<Tuple<int, 
int>> []t, int h, int []ans){ // Traverse the Adjacency list // of u foreach(Tuple<int, int> u in t[v]) { if (u.Item1 == p) continue; // Recursive call dfs(u.Item1, v, t, h + u.Item2, ans); } // Update ans[v] ans[v] = h;} // Function to find the sum of// weights of all pathsstatic int solve(int n, int [,]edges){ // Stores the Adjacency List List<Tuple<int, int>> []t = new List<Tuple<int, int>>[n]; for(int i = 0; i < n; i++) t[i] = new List<Tuple<int, int>>(); // Store the edges for(int i = 0; i < n - 1; i++) { t[edges[i, 0]].Add( new Tuple<int, int>(edges[i, 1], edges[i, 2])); t[edges[i, 1]].Add( new Tuple<int, int>(edges[i, 0], edges[i, 2])); } // sum is the answer int sum = 0; // Calculate sum for each vertex for(int i = 0; i < n; i++) { int []ans = new int[n]; // Perform the DFS Traversal dfs(i, -1, t, 0, ans); // Sum of distance for(int j = 0; j < n; j++) sum += ans[j]; } // Return the readonly sum return sum / 2;} // Driver Codepublic static void Main(String[] args){ // No of vertices int N = 4; // Given Edges int [,]edges = new int[,] { { 0, 1, 1 }, { 1, 2, 2 }, { 2, 3, 3 } }; // Function call Console.WriteLine(solve(N, edges));}} // This code is contributed by Amit Katiyar <script> // Javascript program for the above approach // Function that performs the DFS// traversal to find cost to reach// from vertex v to other vertexesfunction dfs(v, p, t, h, ans){ // Traverse the Adjacency list // of u for(let u = 0; u < t[v].length; u++) { if (t[v][u][0] == p) continue; // Recursive Call dfs(t[v][u][0], v, t, h + t[v][u][1], ans); } // Update ans[v] ans[v] = h;} // Function to find the sum of// weights of all pathsfunction solve(n, edges){ // Stores the Adjacency List let t = new Array(n); for(let i = 0; i < n; i++) t[i] = []; // Store the edges for(let i = 0; i < n - 1; i++) { t[edges[i][0]].push([edges[i][1], edges[i][2]]); t[edges[i][1]].push([edges[i][0], edges[i][2]]); } // Sum is the answer let sum = 0; // Calculate sum for each vertex for(let i = 0; i < n; 
i++) { let ans = new Array(n); // Perform the DFS Traversal dfs(i, -1, t, 0, ans); // Sum of distance for(let j = 0; j < n; j++) sum += ans[j]; } // Return the final sum return sum / 2;} // Driver Codelet N = 4;let edges = [ [ 0, 1, 1 ], [ 1, 2, 2 ], [ 2, 3, 3 ] ]; document.write(solve(N, edges)); // This code is contributed by unknown2108 </script> 20 Time Complexity: O(N2), where N is the number of vertices.Auxiliary Space: O(N) amit143katiyar 29AjayKumar mohit kumar 29 rutvik_56 pratham76 patel2127 unknown2108 DFS Google Graph Traversals Shortest Path Graph Recursion Searching Google Searching Recursion DFS Graph Shortest Path Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
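The sample output of 20 above can be sanity-checked independently. The following minimal Python sketch (separate from the implementations above, and not part of the original article) brute-forces the distance between every unordered pair of vertices in the given weighted tree and sums them:

```python
from itertools import combinations

def sum_pairwise_distances(n, edges):
    # Build an adjacency list for the weighted tree
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dists_from(src):
        # Iterative DFS: distance from src to every vertex
        dist = [None] * n
        dist[src] = 0
        stack = [src]
        while stack:
            u = stack.pop()
            for v, w in adj[u]:
                if dist[v] is None:
                    dist[v] = dist[u] + w
                    stack.append(v)
        return dist

    # Sum the distance over all unordered vertex pairs
    return sum(dists_from(u)[v] for u, v in combinations(range(n), 2))

edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
print(sum_pairwise_distances(4, edges))  # 20
```

For the path 0-1-2-3 the six pair distances are 1, 3, 6, 2, 5 and 3, which indeed total 20.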
Difference between textContent and innerHTML
17 Dec, 2020 The textContent and innerHTML are properties of JavaScript. However, there are differences in the way the specified text is handled in JavaScript. Let us take a look at the syntax of both these properties. Syntax: Let elem be a JavaScript variable that holds an element that is selected from the page. let elem = document.getElementById('test-btn'); The textContent and innerHTML properties can be used as follows: The textContent property: This property is used to get or set the text content of the specified node and its descendants.elem.textContent elem.textContent The innerHTML property: This property is used to get or set the HTML content of an element.elem.innerHTML elem.innerHTML HTML <!DOCTYPE html><html> <body style="text-align:center;"> <h1 style="color:#006600"> GeeksforGeeks </h1> <div id="test-btn"> The following element contains some <bold>bold</bold> and some <italic>italic text</italic>. </div> <p></p> <button onClick="innerHTMLfn()"> innerHTML </button> <button onClick="textContentfn()"> textContent </button> <p id="demo-para"></p> <script> function textContentfn() { var elem = document.getElementById('test-btn'); alert(elem.textContent); } function innerHTMLfn() { var elem = document.getElementById('test-btn'); alert(elem.innerHTML); } </script></body> </html> Output: Before any button is clicked: After the innerHTML button is clicked: After the textContent button is clicked: Differences: As we can see from the example above, the innerHTML property gets or sets HTML contents of the element. The textContent does not automatically encode and decode text and hence allows us to work with only the content part of the element. HTML-Misc JavaScript-Misc HTML JavaScript Web Technologies HTML Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Python – Triple quote String concatenation
22 Jun, 2020

Sometimes, while working with Python strings, we can have a problem in which we need to perform concatenation of strings which are constructed with triple quotes. This happens in cases where we have multiline strings. This can have applications in many domains. Let's discuss a certain way in which this task can be performed.

Input :
test_str1 = """mango
is"""
test_str2 = """good
for health """
Output :
 mango good
 is for health

Input :
test_str1 = """Gold
is"""
test_str2 = """important
for economy """
Output :
 Gold important
 is for economy

Method : Using splitlines() + strip() + join()
The combination of the above functions can be used to perform this task. In this, we perform the task of line splitting using splitlines(). The task of concatenation is done using strip() and join(), while zip() pairs up the corresponding lines of the two strings.

# Python3 code to demonstrate working of
# Triple quote String concatenation
# Using splitlines() + join() + strip()

# initializing strings
test_str1 = """gfg
is"""
test_str2 = """best
for geeks"""

# printing original strings
print("The original string 1 is : " + test_str1)
print("The original string 2 is : " + test_str2)

# Triple quote String concatenation
# Using splitlines() + join() + strip()
test_str1 = test_str1.splitlines()
test_str2 = test_str2.splitlines()
res = []

for i, j in zip(test_str1, test_str2):
    res.append(" " + i.strip() + " " + j.strip())
res = '\n'.join(res)

# printing result
print("String after concatenation : " + str(res))

Output:
The original string 1 is : gfg
is
The original string 2 is : best
for geeks
String after concatenation :  gfg best
 is for geeks

Python string-programs Python Python Programs
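One limitation of the zip() pairing used in the method above: if the two strings have different numbers of lines, the extra lines of the longer one are silently dropped. A possible extension (not part of the original method) uses itertools.zip_longest to keep them:

```python
from itertools import zip_longest

def concat_triple_quoted(s1, s2):
    # Pair up lines of both strings; pad the shorter one with ''
    lines1 = s1.splitlines()
    lines2 = s2.splitlines()
    res = [" " + a.strip() + " " + b.strip()
           for a, b in zip_longest(lines1, lines2, fillvalue="")]
    return "\n".join(res)

s1 = """gfg
is
really"""
s2 = """best
for geeks"""
print(concat_triple_quoted(s1, s2))
```

Here the third line of the first string ("really") survives in the result instead of being discarded.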
Emotion classification using NRC Lexicon in Python
03 Sep, 2021

Many a time, for real-world projects, emotion recognition is just the start of the project, and writing the whole classifier from scratch not only increases development time but also hurts efficiency. NRCLexicon is an MIT-approved PyPI project by Mark M. Bailey which predicts the sentiment and emotion of a given text. The package contains approximately 27,000 words and is based on the National Research Council Canada (NRC) affect lexicon and the NLTK library's WordNet synonym sets.

To install this module, type the below command in the terminal.

pip install NRCLex

Even after the installation of this module, a MissingCorpusError may occur while running programs. So it is advised to also install textblob.download_corpora by using the below command on the command prompt.

python -m textblob.download_corpora

Approach:

Import the module:

Python3

# Import required modules
from nrclex import NRCLex

Assign input text:

Python3

# Assigning list of words
text = ['hate', 'lovely', 'person', 'worst']

Create an NRCLex object for each input text:

Python3

for i in range(len(text)):

    # creating objects
    emotion = NRCLex(text[i])

Apply methods to classify emotions. The emotional affects measured include the following: fear, anger, anticipation, trust, surprise, positive, negative, sadness, disgust, joy.

Below is the implementation.

Example 1: Based on the above approach, the below example classifies various emotions using top_emotions.

Python3

# Import module
from nrclex import NRCLex

# Assign list of strings
text = ['hate', 'lovely', 'person', 'worst']

# Iterate through list
for i in range(len(text)):

    # Create object
    emotion = NRCLex(text[i])

    # Classify emotion
    print('\n\n', text[i], ': ', emotion.top_emotions)

Output:

Example 2: Here a single emotion, love, is classified using all the methods of the NRCLex module.
Python3

# Import module
from nrclex import NRCLex

# Assign emotion
text = 'love'

# Create object
emotion = NRCLex(text)

# Using methods to classify emotion
print('\n', emotion.words)
print('\n', emotion.sentences)
print('\n', emotion.affect_list)
print('\n', emotion.affect_dict)
print('\n', emotion.raw_emotion_scores)
print('\n', emotion.top_emotions)
print('\n', emotion.affect_frequencies)

Output:

Natural-language-processing Python-nltk Python
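The NRCLex calls above require the installed package, but the underlying idea — look each word up in an emotion lexicon and tally the emotions it is tagged with — can be illustrated with a self-contained toy sketch. The mini-lexicon below is invented for illustration only; it is not the real NRC data, which covers roughly 27,000 words:

```python
from collections import Counter

# Hypothetical mini emotion lexicon for illustration only --
# not the actual NRC affect lexicon
MINI_LEXICON = {
    "hate":   ["anger", "negative"],
    "lovely": ["joy", "positive"],
    "worst":  ["negative", "sadness"],
}

def classify(text):
    # Count every emotion tag attached to any word of the input
    counts = Counter()
    for word in text.lower().split():
        counts.update(MINI_LEXICON.get(word, []))
    return counts

print(classify("hate the worst lovely person"))
```

This mirrors what raw_emotion_scores reports in NRCLex: a count per emotion, from which frequencies and top emotions can be derived.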
Moment.js moment().format() Function
29 Jul, 2020

The moment().format() function is used to format the date according to the user's need. The format can be provided in string form, which is passed as a parameter to this function.

Syntax: moment().format(String);

Parameters: This function accepts a single parameter of string type, which defines the format.

Return Value: This function returns the formatted date.

Installation of moment module:

You can visit the link to Install moment module. You can install this package by using this command.
npm install moment
After installing the moment module, you can check your moment version in command prompt using the command.
npm version moment
After that, you can just create a folder and add a file, for example, index.js as shown below.
Example 1: Filename: index.js

// Requiring the module
const moment = require('moment');

// The format() function to format the date
var formatedDate = moment().format(
    "dddd, MMMM Do YYYY, h:mm:ss a");
console.log(formatedDate);

Steps to run the program:

Run the index.js file using the below command:
node index.js

Output:
Friday, July 17th 2020, 4:28:30 pm

Example 2: Filename: index.js

// Requiring the module
const moment = require('moment');

function format_Date(date){
    return moment().format("dddd, MMMM Do YYYY");
}

var result = format_Date(moment);
console.log("Result:", result);

Steps to run the program:

Run the index.js file using the below command:
node index.js

Output:
Result: Friday, July 17th 2020

Reference: https://momentjs.com/docs/#/displaying/format/

Moment.js Node.js Web Technologies
How to Delete a Row from a Table using AngularJS?
14 Oct, 2020 Given a HTML table and the task is to remove/delete the row from the table with the help of AngularJS. Approach: The approach is to delete the row from the array where it stored and served to the table data. When the user clicks on the button near to the table row, it passes the index of that table and that index is used to remove the entry from the array with the help of splice() method. Example 1: This example contains a single column, each row can be removed by the click next to it. <!DOCTYPE HTML><html> <head> <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.13/angular.min.js"> </script> <script> var myApp = angular.module("app", []); myApp.controller("controller", function ($scope) { $scope.rows = ['row-1', 'row-2', 'row-3', 'row-4', 'row-5', 'row-6']; $scope.remThis = function (index, content) { if (index != -1) { $scope.rows.splice(index, 1); } }; }); </script></head> <body style="text-align:center;"> <h1 style="color:green;"> GeeksForGeeks </h1> <p> How to remove a row from the table in AngularJS </p> <div ng-app="app"> <div ng-controller="controller"> <table style="border: 1px solid black; margin: 0 auto;"> <tr> <th>Col-1</th> </tr> <tr ng-repeat="val in rows"> <td>{{val}}</td> <td><a href="#" ng-click= "remThis($index, content)"> click here </a> </td> </tr> </table><br> </div> </div></body> </html> Output: Example 2: This example contains three columns, each row can be removed by the click next to it. 
<!DOCTYPE HTML><html> <head> <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.13/angular.min.js"> </script> <script> var myApp = angular.module("app", []); myApp.controller("controller", function ($scope) { $scope.rows = [{ 'ff': '11', 'fs': '12', 'ft': '13' }, { 'ff': '21', 'fs': '22', 'ft': '23' }, { 'ff': '31', 'fs': '32', 'ft': '33' }, { 'ff': '41', 'fs': '42', 'ft': '43' }]; $scope.c = 2; $scope.remThis = function (index, content) { if (index != -1) { $scope.rows.splice(index, 1); } }; }); </script></head> <body style="text-align:center;"> <h1 style="color:green;"> GeeksForGeeks </h1> <p> How to remove a row from the table in AngularJS </p> <div ng-app="app"> <div ng-controller="controller"> <table style= "border: 1px solid black; margin: 0 auto;"> <tr> <th>Col-1</th> <th>Col-2</th> <th>Col-3</th> </tr> <tr ng-repeat="val in rows"> <td>{{val.ff}}</td> <td>{{val.fs}}</td> <td>{{val.ft}}</td> <td><a href="#" ng-click= "remThis($index, content)"> click here</a> </td> </tr> </table><br> </div> </div></body> </html> Output: AngularJS-Misc HTML-Misc AngularJS HTML Web Technologies Web technologies Questions HTML Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.