Importing CSV Files in Neo4j. How to get started working with your... | by CJ Sullivan | Towards Data Science

Graph-enabled data science and machine learning have become very hot topics in a variety of fields lately, ranging from fraud detection to knowledge graph generation, social network analytics, and so much more. Neo4j is one of the most popular and widely-used graph databases in the world and offers tremendous benefits to the data science community. And while Neo4j comes with some training graphs baked into the system, at some point the data scientist will want to populate it with their own data.
The easiest format for Neo4j to ingest data from is a CSV. A web search for how to populate the database reveals several potential methods, but in this post I will focus on two of the most common and most powerful ones, discuss when you might want to consider each, and walk through some examples of how to use them.
The methods we are going to go through are
LOAD CSV: a simple method when the graphs are small
Neo4j administration tool: a fast method for when the graphs get large
I will demonstrate both of these in this post and talk about when you might want to use each.
In order to get started, we will need to have Neo4j on our host computer. You can walk through the data loading examples below using the Neo4j Desktop, which provides a nice UI and is a great place to learn how to work with the database.
However, for the sake of this tutorial I have elected to use a simple Docker container for a few reasons.
First, containers are cool. I always mess stuff up and this is a very safe way to not ruin everything.
Second, so much data science happens in Docker containers these days that it just makes sense to think of Neo4j as being in a container as well.
Lastly, reproducibility is extremely important in data science, so using a container will allow for this.
All of that being said, you will need the following to run though the examples below:
Docker (installation instructions can be found here)
A Neo4j Docker image (I will be using neo4j:latest , which at the time of writing this is version 4.2.2)
A data set in CSV format
For the data set, I am going to demonstrate data loading using the popular Game of Thrones graph that is available from this repository maintained by Andrew Beveridge.
One reason for using this graph as a walk through is that the data is formatted nicely and is reasonably clean — attributes that you will discover are quite helpful when loading in your data! Even still, we will wind up having to do some data cleaning and reformatting as we go along here, but none of it is too major.
Speaking of cleaning that data set, note that there is a typo or naming convention inconsistency in one of the file names. You will see that the season 5 node file is named got-s5-node.csv rather than the pattern we would expect of got-s5-nodes.csv.
Lastly, I assume some familiarity with Cypher on the part of the reader. If this is not presently a skill you possess, I highly recommend the online Cypher tutorial at the Neo4j website (esp. the section on creating data). In particular, if you are just learning Cypher, I might recommend you check out the docs for LOAD CSV, MERGE, MATCH, SET, and PERIODIC COMMIT, which we will be using below.
Prior to firing up the Docker container, we need to do a bit of housekeeping to get our data files in the right places.
First, we want to make sure the CSV files are in the right place. Of course, you can tell Docker to look wherever you want to put them. In my case, I have created a directory ~/graph_data/gameofthrones/ and I put all of my .csv’s there.
Once all of this is in place, run the following command from the CLI:
docker run -p 7474:7474 -p 7687:7687 \
    --volume=$HOME/graph_data/data:/data \
    --volume=$HOME/graph_data/gameofthrones/data:/var/lib/neo4j/import \
    --env NEO4JLABS_PLUGINS='["apoc", "graph-data-science"]' \
    --env apoc.import.file.enabled=true \
    --env NEO4J_AUTH=neo4j/1234 \
    neo4j:latest
So let’s break this down. We have some port forwarding going on, which will allow you to connect to the Neo4j Browser UI in your web browser at localhost:7474. The BOLT protocol on port 7687 is how you would connect to the database from programs written in Python or other programming languages.
Next, we have a series of folders that are forwarded into the container for read/write between your local machine and the container.
After that, we bring in some environment variables. These are pretty much all optional, but I include them above in case you want to use libraries like APOC or GDS. The first of these tells the container to load the latest versions of APOC and the GDS library as plugins. We also pass a config setting as an environment variable that tells Neo4j that it is alright to allow APOC to read files.
And finally, we set a password (the wonderfully complicated 1234) for the default neo4j user. (You can choose to leave that bit out, but if you do, you will have to reset the password for the user every time you fire up the container.)
The final note on this is that the container will automatically change the ownership and permissions of your files, and they will only be accessible by the root user. So you might consider keeping a backup of them somewhere the container doesn’t touch if you plan on needing to view or edit them outside of sudo.
Assuming all goes well, you should be able to point your web browser to localhost:7474 and see a running UI. So now we can go onto the next step!
The LOAD CSV command is one of the easiest ways to get your data into the database. It is a Cypher command that can usually run through the Neo4j UI. However, it can also be passed in via the Python connector (or the connector of your language of choice). We will save interfacing with the database via Python for a different blog post.
This approach is great if you have a “small” graph. But what constitutes small? A good rule of thumb is that if you have fewer than about 100,000 nodes and edges (which is certainly the case for the Game of Thrones graph), then this is a great option. However, it is not the fastest approach (unlike a bulk loader), so you might want to consider switching over to one of the other loading methods if your graph is a bit larger.
Looking at our node files, we can see that we have one file per season. The files themselves follow a very simple format of Id, Label where the ID is just the capitalization of the name and the label is the actual character name.
It is generally good practice to create some uniqueness constraints on the nodes to ensure that there are no duplicates. One advantage of doing so is that this will create an index for the given label and property. In addition to speeding up query searches for that data, it will ensure that MERGE statements on nodes (such as the one used in our load statement) are significantly faster. To do so, we use:
CREATE CONSTRAINT UniqueCharacterId ON (c:Character) ASSERT c.id IS UNIQUE
Please note that labels, relationship types, and property keys are case sensitive in Neo4j. So id is not the same as Id. Common mistake!
Now we can load our node data into the database using:
WITH "file:///got-s1-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (c:Character {id: row.Id})
SET c.name = row.Label
Yes, you do need to use 3 slashes, but the good news is that if you linked your data to /var/lib/neo4j/import, which is the default directory in the container for reading files, you will not need to specify a lengthy directory structure, which is fraught with peril!
We can see from the above that we are loading the characters in one row at a time, creating nodes with the label Character and a property called id, while setting a new property called name equal to the CSV column Label. (That column name should not be confused with the node label Character that every node receives.)
(Note that uri can actually be replaced with the web location of a CSV file as well, so you do not have to be limited by having the actual file on your local computer.)
Notice that we are using the MERGE command here. We could have also used the CREATE command, but there is an important difference in what they do. MERGE first checks whether a matching node already exists and, if so, does not create it. It acts as either a MATCH or a CREATE. A new node is only created if one is not already found within the database. In this way, MERGE commands are idempotent.
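To see this idempotence in action, you can run the same MERGE statement twice in the Browser; the second run matches the existing node instead of creating a duplicate. Here is an illustrative snippet using one of the dataset’s ids:

```cypher
// Run this statement any number of times; only one node ever exists
MERGE (c:Character {id: "ARYA"})
SET c.name = "Arya"
RETURN c;

// Confirm that repeated MERGEs did not create duplicates
MATCH (c:Character {id: "ARYA"})
RETURN count(c);
```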
Next it is time to bring in the edge files. These are formatted in a similar fashion with Source, Target, Weight, Season .
To load these in, we will use the following command in the browser:
WITH "file:///got-s1-edges.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (source:Character {id: row.Source})
MATCH (target:Character {id: row.Target})
MERGE (source)-[:SEASON1 {weight: toInteger(row.Weight)}]-(target)
Again, the above command reads the files in row-by-row and sets up the edges with a source Character and target Character. In this case, these reference the values of Id in the node files.
I also assign an edge type of :SEASON1 (and change this for subsequent seasons) with weighted edges based on the number of interactions between the source and target character in that season.
I should also briefly mention that this graph is being loaded in as an undirected graph (as is specified in the data repository). You can tell from the last line by the fact that there is no arrow showing the direction from source to target. If we wanted this to be a directed graph, we would indicate this through the use of an arrow, which would change the format to (source)-[...]->(target).
Note that Neo4j treats each value from a CSV as a string, so I have converted the weights to integers via toInteger, which will be necessary for some of the calculations with algorithms should you wish to use them. And again, if you want to bring in the other seasons you just rinse and repeat for each edge file.
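Since the per-season statements differ only in the file name and the relationship type, the rinse-and-repeat can be scripted instead of typed out eight times. Here is a minimal Python sketch (pure string templating, not from the original article; the generated statements can be pasted into the Browser or sent through any Neo4j driver):

```python
def edge_load_query(season: int) -> str:
    """Build the Cypher LOAD CSV statement for one season's edge file."""
    return (
        f'WITH "file:///got-s{season}-edges.csv" AS uri\n'
        "LOAD CSV WITH HEADERS FROM uri AS row\n"
        "MATCH (source:Character {id: row.Source})\n"
        "MATCH (target:Character {id: row.Target})\n"
        f"MERGE (source)-[:SEASON{season} {{weight: toInteger(row.Weight)}}]-(target)"
    )

# One statement per season, 1 through 8
queries = [edge_load_query(s) for s in range(1, 9)]
print(queries[0])
```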
One note regarding importing larger graphs this way: Neo4j, being transactional, gets memory intensive for huge imports in a single transaction.
You might need to reduce the memory overhead of your importing by periodically having the data written to the database. To do so, you add the following before the query (the :auto prefix works only in Neo4j Browser):
:auto USING PERIODIC COMMIT 500
This tells Neo4j to write to the database every 500 lines. It is good practice to do this, particularly if you are memory constrained.
And now we have a populated database for our future graph analytics! It should roughly look like this (although I am not showing every node and have tinkered with the hairball to draw out certain characters):
There are many visualization options out there and the interested reader is encouraged to consult this list for options.
Now that we have seen the loading of data via simple CSV files, we are going to make it slightly more complicated but significantly faster.
Let’s suppose you have a truly “large” graph. In this case, I am talking about graphs with greater than 10 million nodes and edges. The above method is going to take a very long time, largely due to the fact that it is transactional versus an offline load.
You may still need to use LOAD CSV if you are updating your database in real time. But even in that case, the updates will usually happen in batches that are small relative to the overall database size.
Prior to populating the graph, we have to format our data in a very particular way. We also need to make sure that the data are exceptionally clean. For the node list, we are going to change the format to be (with the first few rows shown):
Id:ID,name,:LABEL
ADDAM_MARBRAND,Addam,Character
AEGON,Aegon,Character
AERYS,Aerys,Character
ALLISER_THORNE,Allister,Character
ARYA,Arya,Character
This might look rather simplistic for this data set, but we should not underestimate its power. This is because it allows for the easy importing of multiple node types at one time.
So, for example, maybe you have another node type that is a location in the stories. All you would have to do is change the value of :LABEL to achieve that. Also, you can add node properties by adding a column like propertyName and then giving the value as another cell entry in each row.
In a similar fashion, we restructure the edge files to look like:
:START_ID,:END_ID,weight:int,:TYPE
NED,ROBERT,192,SEASON1
DAENERYS,JORAH,154,SEASON1
JON,SAM,121,SEASON1
LITTLEFINGER,NED,107,SEASON1
NED,VARYS,96,SEASON1
As you would expect, we need one row for every edge in the graph, even if the edge types change (example: relationships between two characters in both :SEASON1 and :SEASON2).
It is very important that the naming conventions be maintained here! For example, your node files must always have a column labeled :ID and can have an optional column called :LABEL for the node labels. Additionally, any number of node properties can be specified here as well (although none are present in this data set). Your edge files must always have a :START_ID, :END_ID, and optionally :TYPE. The names of these marker suffixes cannot be changed.
(Note that in this case I have created new files and file names to reflect the change in format.)
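The reformatting itself is easy to script. Here is a minimal sketch (my own, assuming the per-season node files keep their Id,Label header; the helper name is made up) that merges them into a single admin-import node file and de-duplicates characters appearing in multiple seasons:

```python
import csv

def to_batch_nodes(in_paths, out_path, label="Character"):
    """Merge per-season node files (Id,Label) into the
    neo4j-admin import format (Id:ID,name,:LABEL)."""
    seen = set()
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["Id:ID", "name", ":LABEL"])
        for path in in_paths:
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    # Characters recur across seasons; keep one node each
                    if row["Id"] not in seen:
                        seen.add(row["Id"])
                        writer.writerow([row["Id"], row["Label"], label])

# Example usage with the repository's file-naming pattern:
# to_batch_nodes([f"got-s{s}-nodes.csv" for s in range(1, 9)],
#                "got-nodes-batch.csv")
```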
IMPORTANT NOTE!!! There was a typo in the edge list of season 1 with regard to Vardis Egen (don’t worry...I had to look up who that was too). The node list spells his Id VARDIS_EGEN, but the edge list has a few places, although not all, where it is spelled VARDIS_EGAN. This has very recently been fixed, but if you have an older version of the repository, you might want to pull the update. Otherwise, the easiest fix, assuming you don’t care about this particular character, would be either to add him as another node within the node list with the incorrect spelling or to fix the spelling in the edge list (which is what I have done). This did not cause a problem with the previous method, but the import tool is much more sensitive to these types of problems.
There are a lot of options that can be used with this format...too many to cover in this post. The interested reader is encouraged to read the documentation on this format, which can be found here.
In the case of large data ingesting, Neo4j provides a command line tool for ingesting large amounts of data: neo4j-admin import, which can be found inside the container at /var/lib/neo4j/bin/neo4j-admin.
The catch with this tool is that you cannot actually use it to create your graph while the database (at least in Neo4j Community Edition) is running. The database must be shut down first, which poses a bit of a problem for our Docker container. In this case, we are going to start with a fresh container where the database is not yet running.
We will then issue the following command at our local machine’s command line:
docker run \
    --volume=$HOME/graph_data/data:/data \
    --volume=$HOME/graph_data/gameofthrones/data:/var/lib/neo4j/import \
    neo4j:latest bin/neo4j-admin import \
        --nodes import/got-nodes-batch.csv \
        --relationships import/got-edges-batch.csv
This starts up a container that immediately runs the import tool. We specify the directories within the container where the data files live (being sure to use the more complicated CSV-formatted files) relative to /var/lib/neo4j. In our case, our data on our local machine will connect into import/.
Once this command is run, a database is then created that you can access locally at $HOME/graph_data/data. From here we can start up the container using the command at the top of this post. (However, take note that if you ever want to start over with a new database and container, this whole directory must be deleted by root.)
Now that the database is populated and the container is started, we can go into the UI via localhost:7474 and interact with it as per normal.
Once everything is loaded (including all 8 seasons), regardless of your method, you should wind up with a schema that looks like this:
You will find that you have 407 nodes and 4110 relationships.
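Regardless of which method you used, you can sanity-check those totals from the Browser with a couple of count queries:

```cypher
// Should return 407
MATCH (c:Character)
RETURN count(c) AS characters;

// Should return 4110 (edges across all eight season relationship types)
MATCH ()-[r]->()
RETURN count(r) AS relationships;
```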
I have presented two common methods of importing data from CSV files into Neo4j databases. However, like anything with software there are practically an infinite number of ways that you can achieve the same result.
I hope this post provides a clear way to understand the main ones and gives you a solid start on your data science and machine learning journey!
Should you be seeking the next step on how to actually do a few things related to data science on graphs, please check out my post on How to get started with the Graph Data Science Library of Neo4j.
Special thanks to Mark Needham for help with some query tuning!
PS: Another note is that it is usually a good idea to tell Neo4j how much memory should be allocated to the database. Many of the Neo4j algorithms that a data scientist wants to run are memory-intensive. The exact configuration will, of course, depend on the machine that you are running on. Database configuration is something that is unique to your needs and beyond the scope of this post. The interested reader can consult the documentation here for fine tuning. For now, I am just going with the (albeit limited) default memory settings.
HashSet clone() Method in Java - GeeksforGeeks | 26 Nov, 2018
The Java.util.HashSet.clone() method is used to return a shallow copy of the mentioned hash set. It just creates a copy of the set.
Syntax:
Hash_Set.clone()
Parameters: The method does not take any parameters.
Return Value: The method just returns a copy of the HashSet.
Below program illustrate the Java.util.HashSet.clone() method:
// Java code to illustrate clone()
import java.io.*;
import java.util.HashSet;

public class Hash_Set_Demo {
    public static void main(String args[])
    {
        // Creating an empty HashSet
        HashSet<String> set = new HashSet<String>();

        // Use add() method to add elements into the Set
        set.add("Welcome");
        set.add("To");
        set.add("Geeks");
        set.add("4");
        set.add("Geeks");

        // Displaying the HashSet
        System.out.println("HashSet: " + set);

        // Cloning the set using clone() method
        HashSet cloned_set = (HashSet)set.clone();

        // Displaying the new Set after Cloning
        System.out.println("The new set: " + cloned_set);
    }
}
HashSet: [4, Geeks, Welcome, To]
The new set: [Geeks, Welcome, To, 4]
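A note on “shallow”: with a HashSet of Strings the distinction rarely matters, but when the set holds mutable objects, clone() copies the set structure while the element references are shared. A small illustrative sketch (the class name and values here are made up for the demonstration):

```java
// Demonstrating that HashSet.clone() is a shallow copy
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class Shallow_Clone_Demo {
    public static void main(String args[])
    {
        List<Integer> inner = new ArrayList<>();
        inner.add(1);

        HashSet<List<Integer>> original = new HashSet<>();
        original.add(inner);

        // Cloning gives a distinct set object...
        HashSet<List<Integer>> copy
            = (HashSet<List<Integer>>) original.clone();

        // ...so adding to one set does not change the other
        original.add(new ArrayList<>());
        System.out.println(original.size()); // 2
        System.out.println(copy.size()); // 1

        // ...but both sets still reference the same element object
        inner.add(2);
        System.out.println(copy.iterator().next()); // [1, 2]
    }
}
```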
Java Program to Solve Travelling Salesman Problem Using Incremental Insertion Method - GeeksforGeeks | 30 Sep, 2021
Incremental is actually used in software development where the model is designed, implemented, and tested incrementally (a little more is added each time) until the product is finished. It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements.
Let us briefly discuss the traveling salesman problem. Here we are supposed to find the shortest route that visits all the cities the salesman has to travel, i.e., finding the shortest and optimal route between the nodes of the graph. It is also known as TSP and is one of the best-known optimization problems in computer science.
Algorithm:
Start with a sub-graph consisting of node ‘i’ only.
Find node r such that cir is minimal and form sub-tour i-r-i.
(Selection step) Given a sub-tour, find node r not in the sub-tour closest to any node j in the sub-tour; i.e. with minimal crj
(Insertion step) Find the arc (i, j) in the sub-tour which minimizes cir + crj – cij Insert ‘r’ between ‘i’ and ‘j’.
If all the nodes are added to the tour, stop. Else go to step 3
Implementation:
Java
// Java Program to Solve Travelling Salesman Problem
// Using Incremental Insertion Method

// Importing input output classes
import java.io.*;
// Importing Scanner class to take input from the user
import java.util.Scanner;

// Main class
public class GFG {

    // Method 1
    // Travelling Salesman Incremental Insertion Method
    static int tspdp(int c[][], int tour[], int start, int n)
    {
        int mintour[] = new int[10], temp[] = new int[10],
            mincost = 999, ccost, i, j, k;

        if (start == n - 1) {
            return (c[tour[n - 1]][tour[n]] + c[tour[n]][1]);
        }

        // Logic for implementing the minimal cost
        for (i = start + 1; i <= n; i++) {
            for (j = 1; j <= n; j++)
                temp[j] = tour[j];

            temp[start + 1] = tour[i];
            temp[i] = tour[start + 1];

            if ((c[tour[start]][tour[i]]
                 + (ccost = tspdp(c, temp, start + 1, n)))
                < mincost) {
                mincost = c[tour[start]][tour[i]] + ccost;
                for (k = 1; k <= n; k++)
                    mintour[k] = temp[k];
            }
        }

        // Now, iterating over the path (mintour) to
        // compute its cost
        for (i = 1; i <= n; i++)
            tour[i] = mintour[i];

        // Returning the cost of min path
        return mincost;
    }

    // Method 2
    // Main driver method
    public static void main(String[] args)
    {
        // Creating an object of Scanner class to take user input
        // 1. Number of cities
        // 2. Cost matrix
        Scanner in = new Scanner(System.in);

        // Creating matrices in the main body
        int c[][] = new int[10][10], tour[] = new int[10];

        // Declaring variables
        int i, j, cost;

        // Step 1: To read number of cities
        // Display message for asking user to
        // enter number of cities
        System.out.print("Enter No. of Cities: ");

        // Reading and storing using nextInt() of Scanner
        int n = in.nextInt();

        // Base case
        // If there is only 1 city then
        // a tour is not possible
        if (n == 1) {
            // Display on the console
            System.out.println("Path is not possible!");

            // terminate
            System.exit(0);
        }

        // Case 2
        // Many cities
        // Again, reading the cost matrix
        // Display message
        System.out.println("Enter the Cost Matrix:");

        // Travelling across cities using nested loops
        for (i = 1; i <= n; i++)
            for (j = 1; j <= n; j++)
                c[i][j] = in.nextInt();

        for (i = 1; i <= n; i++)
            tour[i] = i;

        // Calling the above Method 1
        cost = tspdp(c, tour, 1, n);

        // Now, coming to logic to print the optimal tour
        // Display message for better readability
        System.out.print("The Optimal Tour is: ");
        for (i = 1; i <= n; i++)
            // Printing across which cities the salesman
            // should travel
            System.out.print(tour[i] + "->");

        // Starting off with city 1
        System.out.println("1");

        // Print and display the (minimum) cost of the path
        // traversed
        System.out.println("Minimum Cost: " + cost);
    }
}
Salary Breakdown of the Top Data Science Jobs | Towards Data Science
Introduction
Machine Learning Engineer
Natural Language Processing Engineer
Data Engineer
Data Scientist
Summary
References
When looking at data scientist salaries and data science roles, it became obvious that there are different, more specific facets within data science. These facets relate to unique job positions, specifically, machine learning operations, NLP, data engineering, and data science itself. Of course, there are even more specific positions than these, but these can give you a general summary of what to expect if you land a job in one of these positions. I wanted to pick these four roles, too, because they can be separated well, almost as if a clustering algorithm had found the jobs that were most different from one another while still belonging to the same population. Below, I will be discussing the average base pay with a low and high range, as well as respective seniority levels, the number of estimates used to determine these numbers, and expected skills and experiences for each role.
A machine learning engineer tends to apply the already researched and built data science models into a production environment, usually comprising of both software engineering and, of course, machine learning algorithm knowledge. With that being said, you can imagine quite a good salary. This particular estimate comes from glassdoor [3].
Based on around 1,900 submitted salaries, the base pay spans the following range:
low — ~ $86,000
average — ~$128,000
high — ~$190,000
As you can see, there is a range, as with any position, and it is no surprise that the more experience you have, the more the salary is. In addition to years of experience, the state you work in, the skills you employ, and the company also work to create the final salary amount — and the same can be said for all of these positions. To get some more granularity, we can look at the various seniority levels in order to gain a sense of how an increase in level relates to salary amount:
Machine Learning Engineer L2 ~$128,000
Senior Machine Learning Engineer L3 ~ $153,000
Leader of Machine Learning L4 ~ $166,000
Here are some skills from personal experiences that you can expect to employ in a machine learning position:
SQL/Python/Java (sometimes)
Algorithm knowledge — the difference between unsupervised and supervised classification, time series, regression, and the popular libraries that package them
Deployment platform and tools — AWS, Google Cloud, Azure, Docker, Flask, MLFlow, and Airflow — deploying a model and working with a data scientist to compose an automated process
Often called an NLP Engineer, this role usually focuses on applying data science models or machine learning algorithms to text data. Some examples of NLP work would be topic modeling large amounts of texts, semantic analysis, and chatbot agents. With that being said, you can imagine quite a good salary as well — however, this salary breakdown will be lower than a machine learning engineer, most likely because this role is less inclusive and more focused on a particular topic within data science. This particular estimate comes from glassdoor [5] as well.
Based on around 20 submitted salaries, the base pay spans the following range:
It is important to note that the amount of reported salaries is quite low, so take this range with a grain of salt, but nevertheless, there is still very high confidence in this salary.
low — ~ $80,000
average — ~$115,000
high — ~$166,000
All of these amounts are lower than machine learning, however, they are still considerably high compared to most other roles.
Here are some skills from personal experiences that you can expect to employ in a natural language processing engineer position:
NLTK — natural language toolkit library
TextBlob
spaCy
Text cleaning and processing (removing punctuation, removing stop words, isolating the root of the word, stemming, and lemmatization)
Semantic Analysis — an example would be analyzing positive and negative reviews from customers
Topic Modeling — an example would be discovering common topics in large groups of text, such as the same customer reviews, but instead of just a good or bad rating, surfacing the themes of the review that can be analyzed for improvement in the product: “bad quality” is associated with 90% of negative reviews.
Classification — using an algorithm like Random Forest to combine both traditional numerical features as well as text features like descriptions to create a model that groups data together — like customer segmentation
Perhaps a more common role, and one related to data science rather than strictly under it, is data engineering. However, this role is still critical to data science work, and sometimes a data scientist is expected to know much of what a data engineer knows, which is why I include it in this analysis. Some examples of data engineering would be creating an ETL job that stores data ultimately used for a data science model, storing the model results automatically, and performing query optimization. This particular estimate comes from glassdoor [7] as well.
Based on around 6,800 submitted salaries, the base pay spans the following range:
low — ~ $76,000
average — ~$111,000
high — ~$164,000
This range is most akin to that of the natural language processing engineer role, even though in everyday work data engineering is probably the furthest from that role. It is also important to note that this position has quite a few more estimates involved.
Here are some skills from personal experiences that you can expect to employ in a data engineer position:
ETL — extract, transform, and load
ELT — extract, load, and transform
Obtaining data that will be stored in a database or data lake, which could be queried for data analysis, queried for machine learning algorithm training, and used to store data science model results
Optimization of SQL queries — saving a company time and money
Last, but not least, is the role of a data scientist. While this role seems like the most general, it can actually be quite specific as well, usually centering on the model-building process, sometimes with added requirements of data engineering and machine learning operations, and, somewhat less likely but still possible, a specialization in natural language processing (usually if the focus is NLP, a data scientist will have that in their title, but not always). This role can have more variability still, so we can expect a wide range as well. This particular estimate comes from glassdoor [9] as well.
Based on around 16,200 submitted salaries, the base pay spans the following range:
low — ~ $81,000
average — ~$115,000
high — ~$164,000
Surprisingly lower than expected, this role lands around most of the others in this analysis. With that being said, it might be the truest and most robust to outliers, because it has by far the largest number of salaries submitted to compose these amounts.
Here are some skills, from personal experience, that you can expect to employ in a data science position:
SQL, Python, R
Jupyter Notebooks
Visualization — Tableau, libraries and packages, Google Data Studio, Looker, and more...
Defining a problem statement, obtaining the dataset, feature engineering, model comparison, model deployment, and discussion of results
Example project — creating a classifier to group company products based on several features, and obtaining that data from various sources, using SQL, and Python, as well as deploying the model and interpreting the results and their impact on the company
While these roles have several similarities and differences, the same can be said of their salary ranges. Three of the four salaries were similar, with one standing out: machine learning engineer. Why is that? My understanding is that this role requires knowledge of most data science concepts, especially their outputs, as well as the software engineering involved in deployment. That is a lot to know and apply, so it makes sense that a role combining software engineering and data science pays so well. Alongside the salary breakdown of each role, I listed the skills you can expect to employ, so that you have a better idea of each role and how that relates to its salary.
To summarize, here are the four positions we analyzed, along with their skills you can expect to employ:
* Machine Learning Engineer
* Natural Language Processing Engineer
* Data Engineer
* Data Scientist
I hope you found my article both interesting and useful. Please feel free to comment down below if you agree with these numbers and ranges — why or why not? Do you think one role, in particular, is so far off from reality? What other data science roles can you think of that would have a different salary breakdown? What other factors about a role can impact salary?
These salaries are reported in the United States, so they are in United States dollars. I am not affiliated with any of these companies.
Please feel free to check out my profile and other articles, as well as reach out to me on LinkedIn.
[1] Photo by Thought Catalog on Unsplash, (2018)
[2] Photo by Possessed Photography on Unsplash, (2018)
[3] Glassdoor, Inc., Machine Learning Engineer Salaries, (2008–2021)
[4] Photo by Patrick Tomasso on Unsplash, (2016)
[5] Glassdoor, Inc., Natural Language Processing Engineer Salaries, (2008–2021)
[6] Photo by Caspar Camille Rubin on Unsplash, (2017)
[7] Glassdoor, Inc., Data Engineer Salaries, (2008–2021)
[8] Photo by Daria Nepriakhina on Unsplash, (2017)
[9] Glassdoor, Inc., Data Scientist Salaries, (2008–2021)
Crystal Reports - Apply Boolean Formulas | There are different Boolean operators that can be used in formulas in Crystal Reports. They are −
AND
OR
NOT
Eqv
Imp
XOR
All these operators are used to pass multiple conditions in formulas −
The AND operator is used when you want both conditions in the formula to be true. OR is true when either condition is true, NOT negates a condition, Eqv is true when both operands have the same truth value, Imp is the logical implication (false only when the first operand is true and the second is false), and XOR is true when exactly one operand is true.
Using Boolean Operators ‘AND’ −
If {CUSTOMER.CUSTOMER_NAME} [1 to 2] = "AN" and
ToText({CUSTOMER.CUSTOMER ID}) [2] = "4" then
"TRUE"
Else
"FALSE"
Using Boolean Operators ‘AND’ and ‘OR’ −
If ({CUSTOMER.CUSTOMER_NAME} [1 to 2] = "AN" and
ToText({CUSTOMER.CUSTOMER ID}) [1] = "4") or
({CUSTOMER.CUSTOMER_NAME} [1 to 2] = "Ja" and
ToText({CUSTOMER.CUSTOMER ID}) [1] = "2") then
"Five star rating CUSTOMER"
Else
"1 star rating CUSTOMER"
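The examples above use AND and OR. The truth rules of the less common operators (Eqv, Imp, XOR) can be sketched as follows; this is Python rather than Crystal syntax, with the function names below as stand-ins for the Crystal operators:

```python
def eqv(a, b):
    # Eqv: true when both operands have the same truth value
    return a == b

def imp(a, b):
    # Imp: logical implication, false only when a is true and b is false
    return (not a) or b

def xor(a, b):
    # XOR: true when exactly one operand is true
    return a != b

# Print a small truth table for all operand combinations
for a in (True, False):
    for b in (True, False):
        print(a, b, "->", eqv(a, b), imp(a, b), xor(a, b))
```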
Java program to calculate the average of numbers in Java | An average of a set of numbers is their sum divided by their quantity. It can be defined as −
average = sum of all values / number of values
Here we shall learn how to calculate the average programmatically.
1. Collect integer values in an array A of size N.
2. Add all values of A.
3. Divide the output of Step 2 with N.
4. Display the output of Step 3 as average.
public class AverageOfNNumbers {
public static void main(String args[]){
int i,total;
int a[] = {0,6,9,2,7};
int n = 5;
total = 0;
for(i=0; i<n; i++) {
total += a[i];
}
System.out.println("Average ::"+ total/(float)n);
}
}
Average ::4.8
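For comparison, here is the same computation in Python; this is an illustrative translation of the Java program above, using the standard library's statistics module rather than anything from the original article:

```python
from statistics import mean

# Same data as the Java example: average = sum of all values / number of values
a = [0, 6, 9, 2, 7]
print("Average ::" + str(mean(a)))  # Average ::4.8
```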
How to create a unique temporary file name using Python? | You can use the tempfile module to create a unique temporary file in the most secure manner possible. There are no race conditions in the file’s creation. The file is readable and writable only by the creating user ID. Note that the user of mkstemp() is responsible for deleting the temporary file when done with it. To create a new temporary file, use it as follows −
import os
import tempfile

fd, temp_file_path = tempfile.mkstemp()
print("File path: " + temp_file_path)
os.close(fd)  # mkstemp returns an open file descriptor; close it when unused
Note that you need to manually delete this file after you're done with it.
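As a sketch of that cleanup responsibility (standard library only; the file contents here are made up), you can close the descriptor returned by mkstemp() and remove the path yourself, or use NamedTemporaryFile, which deletes the file automatically on close:

```python
import os
import tempfile

# mkstemp() hands back an open file descriptor and a path; you own both.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "w") as f:  # closing this handle closes the descriptor
        f.write("scratch data")
finally:
    os.remove(path)  # the manual cleanup mentioned above

# NamedTemporaryFile removes the file for you when the block exits.
with tempfile.NamedTemporaryFile(mode="w") as tmp:
    tmp.write("scratch data")
    auto_path = tmp.name  # the unique path, valid only inside the block
```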
Building a real-time prediction pipeline using Spark Structured Streaming and Microservices | by Bogdan Cojocar | Towards Data Science | We will build a real-time pipeline for machine learning prediction. The main frameworks that we will use are:
Spark Structured Streaming: a mature and easy to use stream processing engine
Kafka: we will use the confluent version for kafka as our streaming platform
Flask: open source python package used to build RESTful microservices
Docker: used to start a kafka cluster locally
Jupyter lab: our environment to run the code
NLTK: NLP library for python with pre-trained models.
TL;DR: The code is on GitHub.
In a real-time ML pipeline we can embed a model in two ways: by using the model directly inside the framework that does the processing, or by decoupling the model into a separate microservice. Building a wrapper for the ML model requires extra effort, so why bother? There are two major advantages. First, when we want to deploy a new model we don't need to redeploy the whole pipeline; we just expose a new microservice version. Second, it gives us more flexibility in testing different versions of the model. For example, we can use canary deployments and route 80% of the data stream to version1 of the model and 20% to version2. Once we are happy with the quality of version2, we shift more and more traffic towards it.
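That canary split can be sketched with weighted random routing. This is a toy illustration, not code from the article; the endpoint URLs and version names are made up:

```python
import random

# Hypothetical endpoints for two deployed versions of the model microservice.
ENDPOINTS = {
    "v1": "http://model-v1:9000/predict",
    "v2": "http://model-v2:9000/predict",
}

def pick_endpoint(rng=random):
    # Route roughly 80% of requests to v1 and 20% to the canary v2.
    version = rng.choices(["v1", "v2"], weights=[80, 20])[0]
    return ENDPOINTS[version]

# Simulate 1,000 routed requests with a fixed seed for reproducibility.
rng = random.Random(42)
sample = [pick_endpoint(rng) for _ in range(1000)]
share_v2 = sample.count(ENDPOINTS["v2"]) / len(sample)
print(f"v2 traffic share: {share_v2:.0%}")  # roughly 20%
```

Shifting traffic to version2 then amounts to adjusting the weights.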
Now let’s deep dive into the development of the application.
To build the cluster we will use a docker-compose file that will start all the docker containers needed: zookeeper and a broker.
Now very briefly: Kafka is a distributed streaming platform capable of handling a large number of messages, which are organized or grouped together into topics. In order to process a topic in parallel, it has to be split into partitions, and the data from these partitions is stored on separate machines called brokers. Finally, ZooKeeper is used to manage the resources of the brokers in the cluster. To read from or write into a Kafka cluster we need a broker address and a topic.
The docker-compose file will start ZooKeeper on port 2181 and a Kafka broker on port 9092. Besides that, we use another Docker container, kafka-create-topic, for the sole purpose of creating a topic (called test) in the Kafka broker.
To start the kafka cluster, we have to run the following command line instruction in the same folder where we have defined the docker compose file:
docker-compose up
This will start all the docker containers with logs. We should see something like this in the console:
We are using the REST protocol for our web service. We will do sentiment analysis using NLTK’s Vader algorithm. This is a pre-trained model, so we can only focus on the prediction part:
@app.route('/predict', methods=['POST'])
def predict():
    result = sid.polarity_scores(request.get_json()['data'])
    return jsonify(result)
We are creating a POST request that received a JSON message in the form {"data": "some text"} , where the field data contains a sentence. We will apply the algorithm and send the response back as another JSON .
To run the app simply run:
python app.py
The REST service will be available at http://127.0.0.1:9000/predict .
After we start the Jupyter lab notebook we need to make sure that we have the kafka jar as a dependency for spark to be able to run the code. Add the following in the first cell of the notebook:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages=org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.4 pyspark-shell"
Following that we can start pySpark using the findspark package:
import findspark
findspark.init()
To be able to consume data in realtime we first must write some messages into kafka. We will use the confluent_kafka library in python to write a producer:
We will send the same JSON messages {"data": value} as previously, where value is a sentence from a predefined list. For each message we write into the queue we also need to assign a key. We will assign a random one based on a UUID to achieve a good distribution across the cluster. At the end, we also run a flush command to ensure that all the messages are sent.
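The producer itself is embedded as a gist in the original article. Here is a standard-library-only sketch of the message construction it describes (UUID key, JSON value); the confluent_kafka calls appear only as comments, since that library is assumed rather than included here:

```python
import json
import uuid

sentences = ["the weather is nice today", "this tutorial is great"]  # example data

def build_message(sentence):
    # Random UUID key for an even spread across partitions, and a JSON
    # value in the {"data": <sentence>} shape the REST service expects.
    key = str(uuid.uuid4())
    value = json.dumps({"data": sentence})
    return key, value

messages = [build_message(s) for s in sentences]

# With confluent_kafka, each pair would then be sent along these lines:
#   producer.produce(topic, key=key, value=value)  # queue the message
#   producer.flush()                               # ensure delivery
for key, value in messages:
    print(key, value)
```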
Once we run the confluent_kafka_producer we should receive a log telling us that the data has been sent correctly:
we’ve sent 6 messages to 127.0.0.1:9092
As stated previously we will use Spark Structured Streaming to process the data in real-time. This is an easy to use API that treats micro batches of data as data frames. We first need to read the input data into a data frame:
df_raw = spark \
    .readStream \
    .format('kafka') \
    .option('kafka.bootstrap.servers', bootstrap_servers) \
    .option("startingOffsets", "earliest") \
    .option('subscribe', topic) \
    .load()
The startingOffsets option is set to earliest, indicating that each time we run the code we will read all the data present in the queue.
This input will contain different columns that represent different metrics from kafka like keys, values, offsets, etc. We are only interested in the values, the actual data and we can run a transformation to reflect that:
df_json = df_raw.selectExpr('CAST(value AS STRING) as json')
In Structured Streaming we can use user defined functions, that can be applied to each row in the data frame.
def apply_sentiment_analysis(data):
    import requests
    import json
    result = requests.post('http://localhost:9000/predict', json=json.loads(data))
    return json.dumps(result.json())
We need to make our imports in the function as this is a piece of code that can be distributed on multiple machines. We post a request to our endpoint and return the response.
vader_udf = udf(lambda data: apply_sentiment_analysis(data), StringType())
We will call our udf as vader_udf and it will return a new string column.
In this final step, we get to see our results. The input data is a JSON string, and we can parse it into structured columns; for that, we will use the helper function from_json. We can do the same for the output column from the sentiment analysis algorithm, which is also in JSON format:
We can display our results in the console. Because we are using a notebook, you will only be able to visualise the output from the terminal where you started Jupyter. The command trigger(once=True) will run the stream processing only for a short period and show the output.
That was it folks, I hope you enjoyed this tutorial and found it useful. We saw how, by using the Structured Streaming API together with a microservice serving the ML model, we can construct a powerful pattern that can be the backbone of our next real-time application.
Discover what Singaporeans generally talk on SMS | by Jim Meng Kok | Towards Data Science | Recently, I found a Kaggle dataset that contains a corpus of English SMS messages collected for research at the Department of Computer Science at the National University of Singapore. This dataset consists of 55,835 English SMS messages taken from the corpus on March 9, 2015.
By leveraging on this dataset, I am curious to find out what Singaporeans are generally talking about in their SMS conversations. Hence, this article will go through the processes of how this exercise is conducted.
In this exercise, Python programming language is used to conduct Topic Modelling, which helps to provide a quick summary of a corpus in the form of a set of topics.
# Import necessary libraries
import numpy as np
import pandas as pd
import json
import collections, re
import datetime

# Import Gensim (LDA)
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel

# Import NLTK
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# For Lemmatization
from textblob import Word

# For Visualisation
import matplotlib.pyplot as plt

# For pyLDAVis
import pyLDAvis
import pyLDAvis.gensim
The above libraries would be used for data pre-processing as well as conducting topic modelling using Latent Dirichlet Allocation (LDA) from Gensim package.
LDA assumes that documents are produced from a mixture of topics. These topics generate words based on their probability distribution. Based on a given dataset, LDA backtracks and tries to find out what topics would be created for those documents in the first place (Bansal, 2016).
The dataset is in JSON format and this is imported using json.load.
# load the dataset
with open("smsCorpus_en_2015.03.09_all.json") as f:
    data = json.load(f)

# get the messages details only of the dataset (smsCorpus)
listofMsg = data['smsCorpus']['message']

# set the dictionary of messages to dataframe
MsgData = pd.DataFrame(listofMsg)

# getting the id and the text of the messages details
smsMsg = MsgData[['@id','text']]

# create a dataframe of the id and the text of the messages details
smsMsg = pd.DataFrame(smsMsg)

# get the number of SMS messages, which is 55,835
len(smsMsg.index)
After getting the relevant data from the dataset, the first five rows of the relevant in a dataframe (smsMsg) are as shown :
The following functions would be used for pre-processing the data:
Lowercase conversion
Punctuations removal
Stop-words removal
Lemmatization
Tokenisation
smsMsg['text'] = smsMsg['text'].apply(lambda x: " ".join(str(x).lower() for x in str(x).split()))
Converting all the texts to lowercase is necessary as it is beneficial for vectorisation, the later stage of Natural Language Processing (NLP).
smsMsg['text'] = smsMsg['text'].str.replace('[^\w\s]','')
def remove_stopwords(words):
    stopwords_list = stopwords.words('english')
    customize_stop_words = [
        'lol', 'hi', 'hai', 'haha', 'hahaha', 'yeah', 'yay', 'ya', 'yup',
        'yupz', 'yea', 'yep', 'yeap', 'yes', 'nah', 'nope', 'yo', 'ok',
        'okay', 'xp', 'xd', 'le', 'na', 'lmao', 'im', 'u', 'youre', 'lo',
        'loh', 'lor', 'la', 'lah', 'oso', 'wat', 'leh', 'mah', 'meh',
        'siao', 'wah liao', 'wah lau', 'wah piang', 'kenna', 'kena', 'btw',
        'kk', 'ah', 'oic', 'ahh', 'oh', 'haiz', 'omg', 'omigod', 'omygawd',
        'hey', 'r', 'g', 'o', 'k', 'sibei', 'sibeh', 'jialat', 'arh', 'eh',
        'xx', 'hmm', 'de', '2', 'n', 'liao', 'cant', 'thru', 'dont', 'dun',
        'v', '4', 'ur', 'cos', 'coz', 'ive', 'ha', 'tt', 'h', 'b', 'th',
        'brb', 'decimal', 'duno', 'huh', 'hiya', 'hm', 'ill', 'dunno',
        'den', 'aa', 'wtf', 'ill', 'fucking', 'fuck', 'fk', 'shit']
    tokens = word_tokenize(words)
    stopwords_list.extend(customize_stop_words)
    return [w for w in tokens if w not in stopwords_list]

smsMsg['text'] = smsMsg['text'].apply(remove_stopwords)
smsMsg['text'] = smsMsg['text'].apply(" ".join)
Punctuation is removed using regular expressions (regex), and stop-words are removed by the function built above (remove_stopwords(words)). Stop-words usually contain determiners, prepositions, and pronouns. Psst, these are closed classes when it comes to part of speech (POS).
The stop-words removal function uses the NLTK’s stop-words list which contains the following:
There is no universal list of stop-words. Hence, in the scope of this exercise, customised stop-words were included in the stop-words removal function as well. The customised stop-words are derived from Singaporean texting slangs (E.g.: lah), vulgarities (E.g.: wtf), as well as SMS texting style of the NLTK’s stop-words (E.g.: then → den).
Removing punctuations and stop-words are necessary for the data pre-processing stage as both do not bring in value to the data but, instead, adding noise to the data.
Stemming or Lemmatization is the process of normalising the words. Instead of using stemming, lemmatization is applied in this exercise as it helps to preserve the meaning of the text messages for vectorisation, the initial stage of conducting topic modelling.
The textblob library is used for lemmatization.
smsMsg['text_after_lemmatisation'] = smsMsg['text'].apply(lambda x: " ".join([Word(word).lemmatize() for word in str(x).split()]))
Tokenization splits the lemmatized text into words.
words_in_docs = [nltk.word_tokenize(row) for row in smsMsg['text_after_lemmatisation']]
The initial stage of topic modelling is to create both the dictionary and the corpus (vecs). These are the two main inputs to the LDA topic model.
dictionary = gensim.corpora.Dictionary(words_in_docs)
vecs = [dictionary.doc2bow(doc) for doc in words_in_docs]
In vecs, Gensim has created a unique id for each word and its word frequency in the document.
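To see what that id/count mapping looks like, here is a standard-library imitation of doc2bow on two made-up token lists (not data from the SMS corpus):

```python
from collections import Counter

docs = [["meet", "for", "lunch"], ["reached", "home", "home"]]

# Assign each distinct word a stable integer id, like gensim's Dictionary.
token2id = {}
for doc in docs:
    for word in doc:
        token2id.setdefault(word, len(token2id))

# Each document becomes a list of (word id, count) pairs, like doc2bow.
bow_vecs = [sorted((token2id[w], c) for w, c in Counter(doc).items())
            for doc in docs]
print(token2id)
print(bow_vecs)  # second doc: [(3, 1), (4, 2)] -> "reached" once, "home" twice
```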
Since we have everything we need to kick-start the process of building the topic model, we need to provide the number of topics (K) as well.
In this exercise, I have set the minimum of K at 8 and the maximum of K at 20. (In case you didn’t know: In Python, the minimum value set in the range is inclusive while the maximum value set in the range is exclusive)
model_list = []
coherence_values = []
model_topics = []
for num_topics in range(8, 21, 2):
    lda_x = gensim.models.ldamulticore.LdaMulticore(corpus=vecs, id2word=dictionary, num_topics=num_topics, workers=2)
    coherencemodel = CoherenceModel(model=lda_x, texts=words_in_docs, dictionary=dictionary, coherence='c_v')
    model_topics.append(num_topics)
    model_list.append(lda_x)
    coherence_values.append(coherencemodel.get_coherence())
    print("# Topics: " + str(num_topics) + " Score: " + str(coherencemodel.get_coherence()))
The variable model_list now stores the models learned with 8 to 20 topics, with model_topics collecting each num_topics.
However, we have to check the coherence value of each K from 8 to 20 inclusive, which has already been stored in coherence_values, by looking at the coherence scores chart.
limit=21; start=8; step=2;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Number of Topics")
plt.ylabel("Coherence score")
plt.legend(["coherence_values"], loc='best')
plt.show()
Based on the visualisation of the coherence scores from K=8 to K=20, we can infer that the optimal K is 16 due to the peak.
Evaluation of topic modelling is based on intrinsic evaluation measures — Topic Coherence and Perplexity.
Topic Coherence is applied to the top N words from the topics and it is a measure of how consistent or coherent the topics are. Whereas, Perplexity measures how well the model can predict an unseen document. A good topic model contains low perplexity and high topic coherence. The topic model built in this exercise fulfils the criteria, its result is shown below.
# Compute Perplexity
print('\nPerplexity: ', lda.log_perplexity(vecs))

# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda, texts=words_in_docs, dictionary=dictionary, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
To inspect these learned topics, show_topics is used. In this function, we would place 16 into num_topics and place 10 into num_words to get the top 10 words of each 16 topics.
topics = lda.show_topics(num_topics=16, formatted=False, num_words=10)
for i in range(0, 16):
    print(lda.print_topic(i))
Topic 0 is represented as 0.042*"time" + 0.031*"think" + 0.026*"bus" + 0.018*"still" + 0.017*"know" + 0.017*"tell" + 0.016*"end" + 0.015*"got" + 0.014*"fine" + 0.013*"wait".
It means the top 10 keywords that contribute to this topic are: time, think, bus, still, know, tell, end, got, fine, and wait. Furthermore, the weight of time on topic 0 is 0.042. The weights reflect how important a keyword is to that topic.
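If you need those weights programmatically, the topic string can be parsed back into (word, weight) pairs with a small regular expression; the snippet below hard-codes the first five terms of Topic 0 for illustration:

```python
import re

topic = ('0.042*"time" + 0.031*"think" + 0.026*"bus" + 0.018*"still" '
         '+ 0.017*"know"')

# Pull out every weight*"word" term from gensim's topic string.
pairs = [(word, float(weight))
         for weight, word in re.findall(r'([\d.]+)\*"(\w+)"', topic)]
print(pairs[:3])  # [('time', 0.042), ('think', 0.031), ('bus', 0.026)]
```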
Based on the above analysis and my own interpretation, the following table shows the respective topic number and its top 10 keywords as well as my inference of the themes that each topic belongs to.
From the visualisation, the first, second, and third topics are of the same size and are the largest, which means these topics carry the highest importance in the entire corpus. The rest of the topics are of the same size too.
Furthermore, the top 3 most relevant terms across the whole corpus are "thanks", "love", and "home", as the corpus is about SMS messages that frequently involve checking whether one's romantic partner has reached home.
However, the sixteenth topic is further away than the rest of the topics.
Due to the distance and closeness:
First topic (Topic 6), second topic (Topic 9), third topic (Topic 8), and eleventh topic (Topic 11) have similarities
Both third topic (Topic 8) and thirteenth topic (Topic 3) have similarities.
The reason why Topic 6, Topic 8, Topic 9, and Topic 11 are close to one another is that they all mention the relevance of sleep and well-wishes. As for the closeness between Topic 3 and Topic 8, both mention the relevance of love and feelings.
Based on the output of the themes I have come out with, we can sense that most Singaporeans are generally talking about these through SMS:
Hanging out with friends (i.e. lunch, dinner, movie, games)
Ensuring their respective romantic partner has reached home
Requesting for help
Having a serious conversation to talk freely about their feelings or personal problems
Expressing good wishes
Informing people that they are busy, tired, or going to sleep
Notifying people that they have reached their respective place of residence
In this exercise, most parts of the process seem to be fairly subjective. These include the choice of K to start building the topic model, and human interpretation is involved when labelling the theme for each topic.
Furthermore, the constraint of building the customised stop-words is that I would have to think of the texting style of how Singaporeans text. For example, “then” is included in the NLTK’s stop-words. However, some Singaporeans typed “den” instead of “then” while texting.
Additionally, lots of fine-tuning was done in this exercise to get good results from building the topic model using LDA.
One shortcoming is that even though the data is in English, a few messages were written in Hanyu Pinyin or Bahasa Melayu, which the LDA treated as English when deriving the topics. Another shortcoming is that the actual meaning of an English word can differ from how Singaporeans use it. For example, in the English dictionary, the word "got" means received. However, to some Singaporeans, "got" means have. Hence, a more comprehensive translation is needed before such text can be incorporated.
For this exercise, more could be done by recording the dominant topic of each message together with that topic's percentage contribution to the message. Thereafter, clustering could be conducted using the models produced by LDA. Once the clustering is done, model evaluation computing purity and entropy could be conducted to retrieve the best clustering model (high purity and low entropy). By leveraging that clustering model, profiling could be done to identify the main theme of each cluster.
This exercise has taught us that the data pre-processing methods are very influential in building a topic model using LDA. LDA does come in handy when conducting topic modelling. Hopefully, this article has given you a rough idea of how topic modelling, one of the text mining techniques, is conducted.
Bansal, S. (2016). Beginners Guide to Topic Modeling in Python. Analytics Vidhya. August 24. Retrieved from https://www.analyticsvidhya.com/blog/2016/08/beginners-guide-to-topic-modeling-in-python/
GeeksforGeeks. (2020). Removing stop words with NLTK in Python. November 24. Retrieved from https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
Tao Chen and Min-Yen Kan (2013). Creating a Live, Public Short Message Service Corpus: The NUS SMS Corpus. Language Resources and Evaluation, 47(2)(2013), pages 299–355. URL: https://link.springer.com/article/10.1007%2Fs10579-012-9197-9
Python program to split the string and convert it to dictionary - GeeksforGeeks | 11 Jan, 2022
Given a delimiter-separated string (the delimiter is denoted as delim in the code), arrange the splits into a dictionary keyed by position.
Examples :
Input : test_str = ‘gfg*is*best*for*geeks’, delim = “*” Output : {0: ‘gfg’, 1: ‘is’, 2: ‘best’, 3: ‘for’, 4: ‘geeks’}
Input : test_str = ‘gfg*is*best’, delim = “*” Output : {0: ‘gfg’, 1: ‘is’, 2: ‘best’}
Method 1 : Using split() + loop
This is a brute way in which this task can be performed. In this, the split sections are stored in a temporary list using split() and then the new dictionary is created from the temporary list.
Python3
# Using split() + loop

# initializing string
test_str = 'gfg_is_best_for_geeks'

# printing original string
print("The original string is : " + str(test_str))

# initializing delim
delim = "_"

# splitting using split()
temp = test_str.split(delim)

res = dict()

# using loop to reform dictionary with splits
for idx, ele in enumerate(temp):
    res[idx] = ele

# printing result
print("Dictionary after splits ordering : " + str(res))
Output:
The original string is : gfg_is_best_for_geeks
Dictionary after splits ordering : {0: 'gfg', 1: 'is', 2: 'best', 3: 'for', 4: 'geeks'}
Method 2 : Using dictionary comprehension + split() + enumerate()
This is a shorthand method with which this task can be performed. In this, we perform the dictionary reconstruction using a one-liner (dictionary comprehension), and enumerate() is used to help with ordering.
Python3
# Using dictionary comprehension + split() + enumerate()

# initializing string
test_str = 'gfg_is_best_for_geeks'

# printing original string
print("The original string is : " + str(test_str))

# initializing delim
delim = "_"

# using one liner to rearrange dictionary
res = {idx: ele for idx, ele in enumerate(test_str.split(delim))}

# printing result
print("Dictionary after splits ordering : " + str(res))
Output:
The original string is : gfg_is_best_for_geeks
Dictionary after splits ordering : {0: 'gfg', 1: 'is', 2: 'best', 3: 'for', 4: 'geeks'}
Minimize the Heights I | Practice | GeeksforGeeks | Given an array arr[] denoting heights of N towers and a positive integer K, you have to modify the height of each tower either by increasing or decreasing them by K only once.
Find out what could be the possible minimum difference of the height of shortest and longest towers after you have modified each tower.
Note: Assume that height of the tower can be negative.
A slight modification of the problem can be found here.
Example 1:
Input:
K = 2, N = 4
Arr[] = {1, 5, 8, 10}
Output:
5
Explanation:
The array can be modified as
{3, 3, 6, 8}. The difference between
the largest and the smallest is 8-3 = 5.
Example 2:
Input:
K = 3, N = 5
Arr[] = {3, 9, 12, 16, 20}
Output:
11
Explanation:
The array can be modified as
{6, 12, 9, 13, 17}. The difference between
the largest and the smallest is 17-6 = 11.
Your Task:
You don't need to read input or print anything. Your task is to complete the function getMinDiff() which takes the arr[], n and k as input parameters and returns an integer denoting the minimum difference.
Expected Time Complexity: O(N*logN)
Expected Auxiliary Space: O(N)
Constraints
1 ≤ K ≤ 10^4
1 ≤ N ≤ 10^5
1 ≤ Arr[i] ≤ 10^5
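A Python sketch of the standard O(N log N) approach: sort, then try every split point where the towers to the left are raised by K and the towers to the right are lowered by K (negative heights are allowed, per the note above). The function name is my own, not the judge's signature:

```python
def get_min_diff(arr, k):
    # Sort, then for each split point i assume arr[0..i-1] are raised
    # by k and arr[i..n-1] are lowered by k.
    arr = sorted(arr)
    n = len(arr)
    best = arr[-1] - arr[0]  # baseline: shift every tower the same way
    for i in range(1, n):
        highest = max(arr[i - 1] + k, arr[-1] - k)
        lowest = min(arr[0] + k, arr[i] - k)
        best = min(best, highest - lowest)
    return best

print(get_min_diff([1, 5, 8, 10], 2))       # 5  (example 1)
print(get_min_diff([3, 9, 12, 16, 20], 3))  # 11 (example 2)
```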
viktor1 day ago
All testcase passed : JAVA Solution
int getMinDiff(int[] arr, int n, int k) {
    // code here
    if (n == 1) {
        return 0;
    }
    Arrays.sort(arr);
    int diff = arr[n-1] - arr[0];
    int minimum, maximum;
    for (int i = 1; i < n; i++) {
        maximum = Math.max(arr[i-1] + k, arr[n-1] - k);
        minimum = Math.min(arr[0] + k, arr[i] - k);
        diff = Math.min(diff, maximum - minimum);
    }
    return diff;
}
prasantpoudel332 weeks ago
arr.sort()
for i in range(n):
    if arr[i] > k:
        arr[i] = arr[i] - k
    else:
        arr[i] = arr[i] + k
arr.sort()
return arr[n-1] - arr[0]
for one test case
arr=[2,6,3,4,7,2,10,3,2,1]
k=5
n=10
the output should be 8, but while submitting the code this test case shows the desired output is 7
but according to soln it should be 8
swapniltayal4223 weeks ago
class Solution{
public:
    int getMinDiff(int arr[], int n, int k) {
        // code here
        if (n == 0){
            return -1;
        }
        sort(arr, arr+n);
        int diff = (arr[n-1] - arr[0]);
        int maxi, mini;
        for (int i=1; i<n; i++){
            maxi = max(arr[i-1]+k, arr[n-1]-k);
            mini = min(arr[0] + k, arr[i]-k);
            diff = min(maxi - mini, diff);
        }
        return diff;
    }
};
georgemonier19821 month ago
it was a hard one I have to say :)
====================
from queue import PriorityQueue

class Solution:
    def getMinDiff(self, arr, n, k):
        # code here
        if n == 1:
            return 0

        q = PriorityQueue()
        remove_dub = set()
        for x in arr:
            remove_dub.add(x)
        for h in remove_dub:
            q.put(h)

        sorted_h = []
        for _ in range(len(remove_dub)):
            sorted_h.append(q.get())

        Tallest = None
        Shortest = None
        Diff = None

        if sorted_h[-1] - k >= sorted_h[-2] + k:
            Diff = (sorted_h[-1] - k) - (sorted_h[0] + k)
            return Diff

        if sorted_h[0] + k <= sorted_h[1] - k:
            Diff = (sorted_h[-1] - k) - (sorted_h[0] + k)
            return Diff

        for i in range(2, len(sorted_h) + 1):
            if i == len(sorted_h):
                if Diff:
                    Diff = min((sorted_h[-1] - k - sorted_h[0] + k), Diff)
                else:
                    Diff = (sorted_h[-1] - k - sorted_h[0] + k)
            if sorted_h[-i] + k >= sorted_h[-i + 1] - k:
                current_plus_k = sorted_h[-i] + k
                tallest = max(current_plus_k, sorted_h[-1] - k)
                previ_minus_k = sorted_h[-i + 1] - k
                smallest_plus_k = sorted_h[0] + k
                shortest = min(smallest_plus_k, previ_minus_k)
                if not Diff:
                    Diff = tallest - shortest
                else:
                    Diff = min(tallest - shortest, Diff)
                # print(tallest, shortest, Diff)
            else:
                Diff = (sorted_h[-1] - k) - (sorted_h[0] + k)
                return Diff

        return Diff
0
ghoshghoshbishal1 month ago
class Solution {
    int getMinDiff(int[] arr, int n, int k) {
        Arrays.sort(arr);
        int diff = arr[n-1] - arr[0];
        int maxNow = arr[n-1] - k;
        int minNow = arr[0] + k;
        int max = maxNow;
        int min = minNow;
        for (int i = 0; i < n - 1; i++) {
            max = Math.max(maxNow, arr[i] + k);
            min = Math.min(minNow, arr[i+1] - k);
            diff = Math.min(diff, max - min);
        }
        return diff;
    }
}
0
subhashishde081 month ago
class Solution{
public:
    int getMinDiff(int arr[], int n, int k) {
        // code here
        if (n == 1) {
            return 0;
        }
        sort(arr, arr + n);
        int diff = arr[n-1] - arr[0];
        int mini, maxi;
        for (int i = 1; i < n; i++) {
            maxi = max(arr[i-1] + k, arr[n-1] - k);
            mini = min(arr[0] + k, arr[i] - k);
            diff = min(diff, maxi - mini);
        }
        return diff;
    }
};
+5
parthtagalpallewar1232 months ago
Can Anyone Help me by answering this question?
K = 5
int[] arr = {2, 6, 3, 4, 7, 2, 10, 3, 2, 1}
Why the answer to this question is 7?
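One way to see why 7 is reachable for this input: raise every tower except the tallest by K and lower only the tallest. A quick check in Python:

```python
arr = sorted([2, 6, 3, 4, 7, 2, 10, 3, 2, 1])
k = 5

# Raise all towers except the tallest; lower only the tallest (10 - 5 = 5).
heights = [h + k for h in arr[:-1]] + [arr[-1] - k]

print(max(heights) - min(heights))  # 12 - 5 = 7
```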
+3
0231216rahulpatel2 months ago
sort(arr, arr+n);
int ans = arr[n-1] - arr[0];
int mi, ma;
for (int i = 1; i < n; i++) {
    mi = min(arr[0]+k, arr[i]-k);
    ma = max(arr[i-1]+k, arr[n-1]-k);
    ans = min(ans, ma-mi);
}
return ans;
0
sunghunet3 months ago
int getMinDiff(int arr[], int n, int k) {
    sort(arr, arr+n);
    int minimum_range = (*(arr+(n-1)) - *(arr)), short_range = 0, long_range = 0;
    // Loop to n-1 so that arr[i+1] stays in bounds
    for (int i = 0; i < n - 1; i++) {
        short_range = ((arr[0]+k) > (arr[i+1]-k)) ? arr[i+1]-k : arr[0]+k;
        long_range = ((arr[i]+k > arr[n-1]-k)) ? arr[i]+k : arr[n-1]-k;
        minimum_range = (minimum_range > (long_range-short_range)) ? long_range-short_range : minimum_range;
    }
    return minimum_range;
}
Best way to test if a row exists in a MySQL table

To test whether a row exists in a MySQL table or not, use the EXISTS condition. The EXISTS condition
can be used with a subquery. It returns true when the row exists in the table; otherwise, false is
returned. True is represented as 1 and false is represented as 0.
For better understanding, firstly we will create a table with the help of CREATE command.
The following is the query to create a table −
mysql> CREATE table ExistsRowDemo
-> (
-> ExistId int,
-> Name varchar(100)
-> );
Query OK, 0 rows affected (0.53 sec)
After creating the table successfully, we will insert some records with the help of INSERT
command. The query to insert records into the table −
mysql> INSERT into ExistsRowDemo values(100,'John');
Query OK, 1 row affected (0.16 sec)
mysql> INSERT into ExistsRowDemo values(101,'Bob');
Query OK, 1 row affected (0.17 sec)
mysql> INSERT into ExistsRowDemo values(103,'Carol');
Query OK, 1 row affected (0.20 sec)
mysql> INSERT into ExistsRowDemo values(104,'David');
Query OK, 1 row affected (0.13 sec)
After inserting all the records, we can display them with the help of SELECT command, which is
as follows −
mysql> SELECT * from ExistsRowDemo;
The following is the output −
+---------+-------+
| ExistId | Name |
+---------+-------+
| 100 | John |
| 101 | Bob |
| 103 | Carol |
| 104 | David |
+---------+-------+
4 rows in set (0.00 sec)
We added some records into the table. The syntax to check whether a row exists in a table or
not with the help of EXISTS condition is as follows −
SELECT EXISTS(SELECT * FROM yourTableName WHERE yourCondition);
I am applying the above query to get the result −
Note: Firstly, I am considering the condition when row exists in the table. After that, the
condition will be mentioned when a row does not exist.
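The same EXISTS pattern can be demonstrated end-to-end in Python with the standard-library sqlite3 module. SQLite is used here only so the sketch runs without a MySQL server; the query shape is the same idea:

```python
import sqlite3

# In-memory stand-in for the MySQL table from the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ExistsRowDemo (ExistId INTEGER, Name TEXT)")
conn.executemany(
    "INSERT INTO ExistsRowDemo VALUES (?, ?)",
    [(100, "John"), (101, "Bob"), (103, "Carol"), (104, "David")],
)

def row_exists(exist_id):
    # SELECT EXISTS(...) yields 1 when the subquery finds a row, else 0.
    cur = conn.execute(
        "SELECT EXISTS(SELECT * FROM ExistsRowDemo WHERE ExistId = ?)",
        (exist_id,),
    )
    return cur.fetchone()[0]

print(row_exists(104))  # 1 -> the row exists
print(row_exists(105))  # 0 -> no such row
```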
In this case, I am giving a condition when the row exists. Let us apply the above syntax to test
whether the row exists or not.
mysql> SELECT EXISTS(SELECT * from ExistsRowDemo WHERE ExistId=104);
The following is the output −
+------------------------------------------------------+
| EXISTS(SELECT * from ExistsRowDemo WHERE ExistId=104)|
+------------------------------------------------------+
| 1 |
+------------------------------------------------------+
1 row in set (0.00 sec)
From the above sample output, it is clear that row exists, since the value we got is 1. This
means TRUE!
In this case, I am explaining the condition when row does not exist.
Applying the above query.
mysql> SELECT EXISTS(SELECT * from ExistsRowDemo WHERE ExistId=105);
The following is the output −
+------------------------------------------------------+
| EXISTS(SELECT * from ExistsRowDemo WHERE ExistId=105)|
+------------------------------------------------------+
| 0 |
+------------------------------------------------------+
1 row in set (0.00 sec)
From the above output, we can see the output is 0 i.e. false (row does not exist).
Database INSERT Operation in Perl

The Perl INSERT operation is required when you want to create records in a table. Here we are using the table TEST_TABLE to create our records. So once our database connection is established, we are ready to create records in TEST_TABLE. Following is the procedure to create a single record in TEST_TABLE. You can create as many records as you like using the same concept.
Record creation takes the following steps −
Preparing the SQL INSERT statement. This will be done using the prepare() API.
Executing the SQL statement to insert the record into the database. This will be done using the execute() API.
Releasing the statement handle. This will be done using the finish() API.
If everything goes fine then commit this operation otherwise you can rollback complete transaction. Commit and Rollback are explained in next sections.
my $sth = $dbh->prepare("INSERT INTO TEST_TABLE
(FIRST_NAME, LAST_NAME, SEX, AGE, INCOME )
values
('john', 'poul', 'M', 30, 13000)");
$sth->execute() or die $DBI::errstr;
$sth->finish();
$dbh->commit or die $DBI::errstr;
There may be a case when values to be entered is not given in advance. So you can use bind variables which will take the required values at run time. Perl DBI modules make use of a question mark in place of actual value and then actual values are passed through execute() API at the run time. Following is the example −
my $first_name = "john";
my $last_name = "poul";
my $sex = "M";
my $income = 13000;
my $age = 30;
my $sth = $dbh->prepare("INSERT INTO TEST_TABLE
(FIRST_NAME, LAST_NAME, SEX, AGE, INCOME )
values
(?,?,?,?)");
$sth->execute($first_name,$last_name,$sex, $age, $income)
or die $DBI::errstr;
$sth->finish();
$dbh->commit or die $DBI::errstr;
Count substrings with different first and last characters - GeeksforGeeks | 21 May, 2021
Given a string S, the task is to print the count of substrings from a given string whose first and last characters are different.
Examples:
Input: S = “abcab”Output: 8Explanation: There are 8 substrings having first and last characters different {ab, abc, abcab, bc, bca, ca, cab, ab}.
Input: S = “aba”Output: 2Explanation: There are 2 substrings having first and last characters different {ab, ba}.
Naive Approach: The idea is to generate all possible substrings of a given string and for each substring, check if the first and the last characters are different or not. If found to be true, then increment the count by 1 and check for the next substring. Print the count after traversal of all the substring.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to count the substrings
// having different first and last
// characters
void countSubstring(string s, int n)
{
    // Store the final count
    int ans = 0;

    // Loop to traverse the string
    for (int i = 0; i < n; i++) {

        // Counter for each iteration
        int cnt = 0;

        // Iterate over substrings
        for (int j = i + 1; j < n; j++) {

            // Compare the characters
            if (s[j] != s[i])

                // Increase count
                cnt++;
        }

        // Adding count of substrings
        // to the final count
        ans += cnt;
    }

    // Print the final count
    cout << ans;
}

// Driver Code
int main()
{
    // Given string
    string S = "abcab";

    // Length of the string
    int N = 5;

    // Function Call
    countSubstring(S, N);

    return 0;
}
// Java program for the above approach
import java.util.*;
import java.lang.*;
import java.io.*;

class GFG{

// Function to count the substrings
// having different first and last
// characters
static void countSubstring(String s, int n)
{
    // Store the final count
    int ans = 0;

    // Loop to traverse the string
    for(int i = 0; i < n; i++)
    {
        // Counter for each iteration
        int cnt = 0;

        // Iterate over substrings
        for(int j = i + 1; j < n; j++)
        {
            // Compare the characters
            if (s.charAt(j) != s.charAt(i))

                // Increase count
                cnt++;
        }

        // Adding count of substrings
        // to the final count
        ans += cnt;
    }

    // Print the final count
    System.out.print(ans);
}

// Driver Code
public static void main(String[] args)
{
    // Given string
    String S = "abcab";

    // Length of the string
    int N = 5;

    // Function call
    countSubstring(S, N);
}
}

// This code is contributed by code_hunt
# Python3 program for the above approach

# Function to count the substrings
# having different first and last
# characters
def countSubstring(s, n):

    # Store the final count
    ans = 0

    # Loop to traverse the string
    for i in range(n):

        # Counter for each iteration
        cnt = 0

        # Iterate over substrings
        for j in range(i + 1, n):

            # Compare the characters
            if (s[j] != s[i]):

                # Increase count
                cnt += 1

        # Adding count of substrings
        # to the final count
        ans += cnt

    # Print the final count
    print(ans)

# Driver Code

# Given string
S = "abcab"

# Length of the string
N = 5

# Function call
countSubstring(S, N)

# This code is contributed by code_hunt
// C# program for the above approach
using System;
using System.Collections;
using System.Collections.Generic;
using System.Text;

class GFG{

// Function to count the substrings
// having different first and last
// characters
static void countSubstring(string s, int n)
{
    // Store the final count
    int ans = 0;

    // Loop to traverse the string
    for(int i = 0; i < n; i++)
    {
        // Counter for each iteration
        int cnt = 0;

        // Iterate over substrings
        for(int j = i + 1; j < n; j++)
        {
            // Compare the characters
            if (s[j] != s[i])

                // Increase count
                cnt++;
        }

        // Adding count of substrings
        // to the final count
        ans += cnt;
    }

    // Print the final count
    Console.Write(ans);
}

// Driver Code
public static void Main(string[] args)
{
    // Given string
    string S = "abcab";

    // Length of the string
    int N = 5;

    // Function call
    countSubstring(S, N);
}
}

// This code is contributed by rutvik_56
<script>
// JavaScript program for
// the above approach

// Function to count the substrings
// having different first and last
// characters
function countSubstring(s, n)
{
    // Store the final count
    let ans = 0;

    // Loop to traverse the string
    for(let i = 0; i < n; i++)
    {
        // Counter for each iteration
        let cnt = 0;

        // Iterate over substrings
        for(let j = i + 1; j < n; j++)
        {
            // Compare the characters
            if (s[j] != s[i])

                // Increase count
                cnt++;
        }

        // Adding count of substrings
        // to the final count
        ans += cnt;
    }

    // Print the final count
    document.write(ans);
}

// Driver code

// Given string
let S = "abcab";

// Length of the string
let N = 5;

// Function call
countSubstring(S, N);

// This code is contributed by splevel62.
</script>
8
Time Complexity: O(N^2)
Auxiliary Space: O(1)
Efficient Approach: The above approach can be optimized by using a Map to store the frequency of the characters of the string. Follow the steps below to solve the problem:
Initialize two variables, one for counting different characters for every iteration (say cur) and one to store the final count of substrings (say ans).
Initialize a map M to store the frequency of all characters in it.
Traverse the given string and for each character, follow the steps below:
Iterate the map M.
If the first element i.e., the key of the map is not the same as the current character, then proceed.
Otherwise, add the value corresponding to the current character.
After traversal of the Map, add cur to the final result i.e., ans += cur.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to count the substrings
// having different first & last character
void countSubstring(string s, int n)
{
    // Stores frequency of each char
    map<char, int> m;

    // Loop to store frequency of
    // the characters in a Map
    for (int i = 0; i < n; i++)
        m[s[i]]++;

    // To store final result
    int ans = 0;

    // Traversal of string
    for (int i = 0; i < n; i++) {

        // Store answer for every
        // iteration
        int cnt = 0;
        m[s[i]]--;

        // Map traversal
        for (auto value : m) {

            // Compare current char
            if (value.first == s[i]) {
                continue;
            }
            else {
                cnt += value.second;
            }
        }
        ans += cnt;
    }

    // Print the final count
    cout << ans;
}

// Driver Code
int main()
{
    // Given string
    string S = "abcab";

    // Length of the string
    int N = 5;

    // Function Call
    countSubstring(S, N);

    return 0;
}
// Java program for the above approach
import java.util.*;

class GFG{

// Function to count the subStrings
// having different first & last character
static void countSubString(char[] s, int n)
{
    // Stores frequency of each char
    HashMap<Character, Integer> mp
        = new HashMap<Character, Integer>();

    // Loop to store frequency of
    // the characters in a Map
    for(int i = 0; i < n; i++)
        if (mp.containsKey(s[i]))
        {
            mp.put(s[i], mp.get(s[i]) + 1);
        }
        else
        {
            mp.put(s[i], 1);
        }

    // To store final result
    int ans = 0;

    // Traversal of String
    for(int i = 0; i < n; i++)
    {
        // Store answer for every
        // iteration
        int cnt = 0;

        if (mp.containsKey(s[i]))
        {
            mp.put(s[i], mp.get(s[i]) - 1);

            // Map traversal
            for(Map.Entry<Character, Integer> value :
                mp.entrySet())
            {
                // Compare current char
                if (value.getKey() == s[i])
                {
                    continue;
                }
                else
                {
                    cnt += value.getValue();
                }
            }
            ans += cnt;
        }
    }

    // Print the final count
    System.out.print(ans);
}

// Driver Code
public static void main(String[] args)
{
    // Given String
    String S = "abcab";

    // Length of the String
    int N = 5;

    // Function call
    countSubString(S.toCharArray(), N);
}
}

// This code is contributed by Amit Katiyar
# Python3 program for the above approach

# Function to count the substrings
# having different first & last character
def countSubstring(s, n):

    # Stores frequency of each char
    m = {}

    # Loop to store frequency of
    # the characters in a Map
    for i in range(n):
        if s[i] in m:
            m[s[i]] += 1
        else:
            m[s[i]] = 1

    # To store final result
    ans = 0

    # Traversal of string
    for i in range(n):

        # Store answer for every
        # iteration
        cnt = 0
        if s[i] in m:
            m[s[i]] -= 1
        else:
            m[s[i]] = -1

        # Map traversal
        for value in m:

            # Compare current char
            if (value == s[i]):
                continue
            else:
                cnt += m[value]

        ans += cnt

    # Print the final count
    print(ans)

# Driver code

# Given string
S = "abcab"

# Length of the string
N = 5

# Function Call
countSubstring(S, N)

# This code is contributed by divyeshrabadiya07
// C# program for the above approach
using System;
using System.Collections.Generic;

class GFG{

// Function to count the subStrings
// having different first & last character
static void countSubString(char[] s, int n)
{
    // Stores frequency of each char
    Dictionary<char, int> mp = new Dictionary<char, int>();

    // Loop to store frequency of
    // the characters in a Map
    for(int i = 0; i < n; i++)
        if (mp.ContainsKey(s[i]))
        {
            mp[s[i]] = mp[s[i]] + 1;
        }
        else
        {
            mp.Add(s[i], 1);
        }

    // To store readonly result
    int ans = 0;

    // Traversal of String
    for(int i = 0; i < n; i++)
    {
        // Store answer for every
        // iteration
        int cnt = 0;

        if (mp.ContainsKey(s[i]))
        {
            mp[s[i]] = mp[s[i]] - 1;

            // Map traversal
            foreach(KeyValuePair<char, int> value in mp)
            {
                // Compare current char
                if (value.Key == s[i])
                {
                    continue;
                }
                else
                {
                    cnt += value.Value;
                }
            }
            ans += cnt;
        }
    }

    // Print the readonly count
    Console.Write(ans);
}

// Driver Code
public static void Main(String[] args)
{
    // Given String
    String S = "abcab";

    // Length of the String
    int N = 5;

    // Function call
    countSubString(S.ToCharArray(), N);
}
}

// This code is contributed by Amit Katiyar
<script>
// Javascript program for the above approach

// Function to count the substrings
// having different first & last character
function countSubstring(s, n)
{
    // Stores frequency of each char
    var m = new Map();

    // Loop to store frequency of
    // the characters in a Map
    for (var i = 0; i < n; i++)
    {
        if (m.has(s[i]))
            m.set(s[i], m.get(s[i]) + 1)
        else
            m.set(s[i], 1)
    }

    // To store final result
    var ans = 0;

    // Traversal of string
    for (var i = 0; i < n; i++)
    {
        // Store answer for every
        // iteration
        var cnt = 0;
        if (m.has(s[i]))
            m.set(s[i], m.get(s[i]) - 1)

        // Map traversal
        m.forEach((value, key) => {

            // Compare current char
            if (key != s[i])
            {
                cnt += value;
            }
        });
        ans += cnt;
    }

    // Print the final count
    document.write(ans);
}

// Driver Code

// Given string
var S = "abcab";

// Length of the string
var N = 5;

// Function Call
countSubstring(S, N);

// This code is contributed by itsok.
</script>
8
Time Complexity: O(N*26)
Auxiliary Space: O(N)
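As an aside, the same count also follows directly from character frequencies, because every index pair i < j yields exactly one substring s[i..j]: count all pairs, then subtract the pairs whose endpoint characters match. A hedged Python sketch (not from the article, but consistent with its examples):

```python
from collections import Counter

def count_substrings(s):
    """Count substrings (length >= 2) whose first and last characters differ."""
    n = len(s)
    # Every pair of positions i < j defines one substring.
    total_pairs = n * (n - 1) // 2
    # Pairs whose two endpoints are the same character must be excluded.
    same_pairs = sum(f * (f - 1) // 2 for f in Counter(s).values())
    return total_pairs - same_pairs
```

For "abcab" this gives 10 total pairs minus 2 same-character pairs, matching the article's answer of 8, in O(N) time.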
As HTTP is stateless, how do we maintain a session between the web browser and the web server?

HTTP is a "stateless" protocol, which means each time a client retrieves a web page, the client opens a separate connection to the web server and the server does not automatically keep any record of previous client requests.
Let us now discuss a few options to maintain the session between the Web Client and the Web Server −
A webserver can assign a unique session ID as a cookie to each web client and for subsequent requests from the client they can be recognized using the received cookie.
This may not be an effective way as the browser at times does not support a cookie. It is not recommended to use this procedure to maintain the sessions.
A web server can send a hidden HTML form field along with a unique session ID as follows −
<input type = "hidden" name = "sessionid" value = "12345">
This entry means that, when the form is submitted, the specified name and value are automatically included in the GET or the POST data. Each time the web browser sends the request back, the session_id value can be used to keep the track of different web browsers.
This can be an effective way of keeping track of the session but clicking on a regular (<A HREF...>) hypertext link does not result in a form submission, so hidden form fields also cannot support general session tracking.
You can append some extra data at the end of each URL. This data identifies the session; the server can associate that session identifier with the data it has stored about that session.
For example, with http://tutorialspoint.com/file.htm;sessionid=12345, the session identifier is attached as sessionid = 12345 which can be accessed at the web server to identify the client.
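Extracting such a path-appended session identifier on the server side can be sketched in Python. This is an illustrative helper, not part of any particular framework:

```python
from urllib.parse import urlsplit

def extract_session_id(url):
    """Pull the sessionid out of a rewritten URL like
    http://example.com/file.htm;sessionid=12345 (illustrative only)."""
    # urlsplit keeps ';' path parameters inside .path (urlparse would
    # move them into a separate .params attribute).
    path = urlsplit(url).path
    for param in path.split(";")[1:]:
        key, _, value = param.partition("=")
        if key == "sessionid":
            return value
    return None
```

For the article's example URL, this returns "12345"; for a plain URL with no path parameters, it returns None.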
URL rewriting is a better way to maintain sessions and works for browsers when they don't support cookies. The drawback here is that you will have to generate every URL dynamically to assign a session ID, even when the page is a simple static HTML page.
Loop through a Dictionary in Javascript

Here we'll implement a forEach function in our class and accept a callback that we can call on every key-value pair. Let's see how we can implement such a function −
forEach(callback) {
for (let prop in this.container) {
// Call the callback as: callback(key, value)
callback(prop, this.container[prop]);
}
}
You can test this using −
const myMap = new MyMap();
myMap.put("key1", "value1");
myMap.put("key2", "value2");
myMap.forEach((k, v) => console.log(`Key is ${k} and value is ${v}`));
This will give the output −
Key is key1 and value is value1
Key is key2 and value is value2
ES6 Maps also have a prototype method forEach, but note that its callback receives the value first and the key second, i.e. (value, key, map). For example,
const myMap = new Map([
   ["key1", "value1"],
   ["key2", "value2"]
]);
myMap.forEach((v, k) => console.log(`Key is ${k} and value is ${v}`));
This will give the output −
Key is key1 and value is value1
Key is key2 and value is value2
Spotify’s “This Is” playlists: the ultimate song analysis for 50 mainstream artists | by James Le | Towards Data Science | Each artist has their own unique musical styles. From Ed Sheeran who devotes his life to the acoustic guitar, to Drake who masters the art of rapping. From Adele who can belt some crazy high notes on her pop ballads, to Kygo who creates EDM magic on his DJ set. Music is about creativity, originality, inspiration, and feelings, and it is the perfect gateway to connect people across differences.
Spotify is the largest music streaming service available. With more than 35 million songs and 170 million monthly active users, it is the ideal platform for musicians to reach their audience. On the app, music can be browsed or searched via various parameters — such as artists, album, genre, playlist, or record label. Users can create, edit, and share playlists, share tracks on social media, and make playlists with other users.
Additionally, Spotify launched a variety of interesting playlists tailor-made for its users, of which I most admire these three:
Discover Weekly: a weekly generated playlist (updated on Mondays) that brings users two hours of custom-made music recommendations, mixing a user’s personal taste with songs enjoyed by similar listeners.
Release Radar: a personalized playlist that allows users to stay up-to-date on new music released by artists they listen to the most.
Daily Mix: a series of playlists that have “near endless playback” and mixes the user’s favorite tracks with new, recommended songs.
I recently discovered the ‘This Is” playlist series. One of Spotify’s best original features, “This Is” delivers on a major promise of the streaming revolution — the canonization and preservation of great artists’ repertoires for future generations to discover and appreciate.
Each one is dedicated to a different legendary artist, chronicling the high points of iconic discographies. “This is: Kanye West”. “This is: Maroon 5”. “This is: Elton John”. Spotify has provided a shortcut, giving us curated lists of the greatest songs from the greatest artists.
The purpose of this project is to analyze the music that different artists on Spotify produce. The focus will be placed on disentangling the musical taste of 50 different artists from a wide range of genres. Throughout the process, I also identify different clusters of artists that share a similar musical style.
For the study, I will access the Spotify Web API, which provides data from the Spotify music catalog. This can be accessed via standard HTTPS requests to an API endpoint.
The Spotify API provides, among other things, track information for each song, including audio statistics such as danceability, instrumentalness, or tempo. Each feature measures an aspect of a song. Detailed information on how each feature is calculated can be found in the Spotify API Website. The code snippets in this article might be a bit tricky to understand, especially for data beginners, so bear with me.
Here’s a quick summary of my approach:
Get the data from Spotify API.
Process the data to extract audio features for each artist.
Visualize the data using D3.js.
Apply k-means clustering to separate the artists into different groups.
Analyze each feature for all the artists.
Let’s now retrieve the audio feature information from “This Is” Playlists of 50 different artists on Spotify.
The first step was registering my application in the API Website and getting the keys (Client ID and Client Secret) for future requests.
The Spotify Web API has different URIs (Uniform Resource Identifiers) to access playlists, artists, or tracks information. Consequently, the process of getting data must be divided into two key steps:
get the “This Is” Playlist Series for multiple musicians.
get the audio features for each artist’s Playlist tracks.
First, I created two variables for the Client ID and the Client Secretcredentials.
spotifyKey <- "YOUR CLIENT ID"
spotifySecret <- "YOUR CLIENT SECRET"
After that, I requested an access token in order to authorize my app to retrieve and manage Spotify data.
library(Rspotify)
library(httr)
library(jsonlite)
spotifyEndpoint <- oauth_endpoint(NULL,
    "https://accounts.spotify.com/authorize",
    "https://accounts.spotify.com/api/token")
spotifyToken <- spotifyOAuth("Spotify Analysis", spotifyKey, spotifySecret)
The first step in pulling the artists’ “This Is” series was to get the URIs for each one. Here are the 50 musicians I have chosen, using their popularity, modernity, and diversity as the main criteria:
Pop: Taylor Swift, Ariana Grande, Shawn Mendes, Maroon 5, Adele, Justin Bieber, Ed Sheeran, Justin Timberlake, Charlie Puth, John Mayer, Lorde, Fifth Harmony, Lana Del Rey, James Arthur, Zara Larsson, Pentatonix.
Hip-Hop / Rap: Kendrick Lamar, Post Malone, Drake, Kanye West, Eminem, Future, 50 Cent, Lil Wayne, Wiz Khalifa, Snoop Dogg, Macklemore, Jay-Z.
R & B: Bruno Mars, Beyonce, Enrique Iglesias, Stevie Wonder, John Legend, Alicia Keys, Usher, Rihanna.
EDM / House: Kygo, The Chainsmokers, Avicii, Marshmello, Calvin Harris, Martin Garrix.
Rock: Coldplay, Elton John, One Republic, The Script, Jason Mraz.
Jazz: Frank Sinatra, Michael Buble, Norah Jones.
I basically went to each musician’s individual playlist, copied the URIs, stored each URI in a .csv file, and imported the .csv files into R.
library(readr)
playlistURI <- read.csv("this-is-playlist-URI.csv", header = T, sep = ";")
With each Playlist URI, I applied the getPlaylistSongs from the “RSpotify” package, and stored the Playlist information in an empty data.frame.
# Empty dataframe
PlaylistSongs <- data.frame(PlaylistID = character(),
                            Musician = character(),
                            tracks = character(),
                            id = character(),
                            popularity = integer(),
                            artist = character(),
                            artistId = character(),
                            album = character(),
                            albumId = character(),
                            stringsAsFactors = FALSE)

# Getting each playlist
for (i in 1:nrow(playlistURI)) {
  i <- cbind(PlaylistID = as.factor(playlistURI[i, 2]),
             Musician = as.factor(playlistURI[i, 1]),
             getPlaylistSongs("spotify",
                              playlistid = as.factor(playlistURI[i, 2]),
                              token = spotifyToken))
  PlaylistSongs <- rbind(PlaylistSongs, i)
}
First, I wrote a formula (getFeatures) that extracts the audio features for any specific ID stored as a vector.
getFeatures <- function(vector_id, token) {
  link <- httr::GET(paste0("https://api.spotify.com/v1/audio-features/?ids=",
                           vector_id),
                    httr::config(token = token))
  list <- httr::content(link)
  return(list)
}
Next, I included getFeatures in another formula (get_features). The latter formula extracts the audio features for the track ID’s vector, and returns them in a data.frame.
get_features <- function(x) {
  getFeatures2 <- getFeatures(vector_id = x, token = spotifyToken)
  features_output <- do.call(rbind,
                             lapply(getFeatures2$audio_features,
                                    data.frame, stringsAsFactors = FALSE))
}
Using the formula created above, I was able to extract the audio features for each track. In order to do so, I needed a vector containing each track ID. The rate limit for the Spotify API is 100 tracks, so I decided to create a vector with track IDs for each musician.
Next, I applied the get_features formula to each vector, obtaining the audio features for each musician.
After that, I merged each musician’s audio features data.frame into a new one, all_features. It contains the audio features for all the tracks in every musician’s “This Is” Playlist.
library(gdata)
all_features <- combine(TaylorSwift, ArianaGrande, KendrickLamar, ShawnMendes,
    Maroon5, PostMalone, Kygo, TheChainsmokers, Adele, Drake, JustinBieber,
    Coldplay, KanyeWest, BrunoMars, EdSheeran, Eminem, Beyonce, Avicii,
    Marshmello, CalvinHarris, JustinTimberlake, FrankSinatra, CharliePuth,
    MichaelBuble, MartinGarrix, EnriqueIglesias, JohnMayer, Future, EltonJohn,
    FiftyCent, Lorde, LilWayne, WizKhalifa, FifthHarmony, LanaDelRay,
    NorahJones, JamesArthur, OneRepublic, TheScript, StevieWonder, JasonMraz,
    JohnLegend, Pentatonix, AliciaKeys, Usher, SnoopDogg, Macklemore,
    ZaraLarsson, JayZ, Rihanna)
Finally, I computed the mean of each musician’s audio features using the aggregate function. The resulting data.frame contains the audio features for each musician, expressed as the mean of the tracks in their respective playlists.
mean_features <- aggregate(all_features[, c(1:11, 17)],
                           list(all_features$source), mean)
names(mean_features) <- c("Musician", "danceability", "energy", "key", "loudness",
                          "mode", "speechiness", "acousticness", "instrumentalness",
                          "liveness", "valence", "tempo", "duration_ms")
The image below shows a subset of the mean_features data.frame, for your reference.
The description of each feature from the Spotify Web API Guidance can be found below:
Danceability: describes the suitability of a track for dancing. This is based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
Energy: a measure from 0.0 to 1.0, and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
Key: the key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on.
Loudness: the overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.
Mode: indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
Speechiness: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent instrumental music and other non-speech-like tracks.
Acousticness: a confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence that the track is acoustic.
Instrumentalness: predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly “vocal”. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
Liveness: detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
Valence: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (for example happy, cheerful, euphoric), while tracks with low valence sound more negative (for example sad, depressed, angry).
Tempo: the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece, and derives directly from the average beat duration.
Duration_ms: the duration of the track in milliseconds.
A radar chart is useful to compare the musical vibes of these musicians in a more visual way. The first visualization is an R implementation of the radar chart from the chart.js JavaScript library, and evaluates the audio features for ten selected musicians.
In order to plot them, I normalized the key, loudness, tempo, and duration_ms values to the range 0 to 1, which makes the chart clearer and more readable.
mean_features_norm <- cbind(mean_features[1],
                            apply(mean_features[-1], 2,
                                  function(x) { (x - min(x)) / diff(range(x)) }))
Okay, let’s plot these interactive radar charts in batches of ten musicians. Each chart displays data set labels when you hover over each radial line, showing the value for the selected feature. The code below details the process of making the radar chart for the first batch of ten musicians. The code for the other four batches has been omitted, but the radar charts are displayed.
Batch 1: Taylor Swift, Ariana Grande, Kendrick Lamar, Shawn Mendes, Maroon 5, Post Malone, Kygo, The Chainsmokers, Adele, Drake
Batch 2: Justin Bieber, Coldplay, Kanye West, Bruno Mars, Ed Sheeran, Eminem, Beyonce, Avicii, Marshmello, Calvin Harris
Batch 3: Justin Timberlake, Frank Sinatra, Charlie Puth, Michael Buble, Martin Garrix, Enrique Iglesias, John Mayer, Future, Elton John, 50 Cent
Batch 4: Lorde, Lil Wayne, Wiz Khalifa, Fifth Harmony, Lana Del Rey, Norah Jones, James Arthur, One Republic, The Script, Stevie Wonder
Batch 5: Jason Mraz, John Legend, Pentatonix, Alicia Keys, Usher, Snoop Dogg, Macklemore, Zara Larsson, Jay-Z, Rihanna
Another way to find out the differences between these musicians in their musical repertoire is by grouping them in clusters. The general idea of a clustering algorithm is to divide a given dataset into multiple groups on the basis of similarity in the data.
In this case, musicians will be grouped in different clusters according to their music preferences. Rather than defining groups before looking at the data, clustering allows me to find and analyze the musician groupings that have formed organically.
Prior to clustering the data, it is important to rescale the numeric variables of the dataset. Since the audio features are numeric variables measured on different scales, running the scale function (i.e. z-standardizing them) is good practice so that each feature carries equal weight. After that, I kept the musicians as the row names to be able to show them as labels in the plot.
scaled.features <- scale(mean_features[-1])
rownames(scaled.features) <- mean_features$Musician
I applied K-Means clustering, one of the most popular unsupervised learning techniques, which is used for unlabeled data. The algorithm finds groups in the data, with the number of groups represented by the variable K; it works iteratively to assign each data point to one of the K groups based on the variables provided, so that points are clustered by similarity.
In this instance, I chose K = 6, since the clusters can be formed based on the six different genres I used when choosing the artists (Pop, Hip-Hop, R&B, EDM, Rock, and Jazz).
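The K-Means call itself is not shown above; a minimal sketch in base R might look like this (the seed and nstart values are my own choices, not taken from the original analysis):

```r
set.seed(123)  # assumption: a seed chosen for reproducibility
k_means <- kmeans(scaled.features, centers = 6, nstart = 25)

# Number of musicians assigned to each of the six clusters
table(k_means$cluster)
```

Using nstart > 1 runs the algorithm from several random initializations and keeps the best result, which makes the clustering less sensitive to the starting centroids.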
After applying the K-Means algorithm, I can plot a two-dimensional view of the data. In the first plot, the x-axis and y-axis correspond to the first and second principal components, respectively. The eigenvectors (represented by red arrows) indicate the directional influence each variable has on the principal components.
Let’s have a look at the clusters that result from applying the K-Means algorithm to my dataset.
As you can see in the graph above, the x-axis is PC1 (30.24%) and the y-axis is PC2 (16.54%). These are the first two principal components. The PCA graph shows that PC1 separates artists by loudness/energy vs acoustic/mellowness, while PC2 appears to separate artists on a scale of valence vs key, tempo and instrumentalness.
Because my data is multivariate, it is tedious to inspect every bivariate scatterplot; a single summarizing scatterplot is more convenient. The graph therefore shows the scatterplot of the first two principal components derived from the data. The percentages give the share of the overall variability explained by each component: the first component captures 30.24% and the second 16.54% of the information in the multivariate data.
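For reference, the principal components behind such a plot can be computed with base R's prcomp (a sketch only; the styling of the original biplot is not reproduced here, and scaled.features is the matrix built earlier):

```r
pca <- prcomp(scaled.features)

# Proportion of variance explained by each component
summary(pca)

# Scatterplot of PC1 vs PC2 with feature loadings drawn as arrows
biplot(pca)
```

Since the features were already z-standardized with scale(), no further scaling argument is needed in the prcomp call.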
If you’re interested to learn more about the math behind this algorithm, I recommend that you brush up on Principal Component Analysis.
Let’s see which artists belong to which clusters:
k_means$cluster
I have also plotted another radar chart containing the features for each cluster. It is useful to compare the attributes of the songs that each cluster creates.
Cluster 1 contains four artists: Coldplay, Avicii, Marshmello, and Martin Garrix. Their music is mostly performed live and instrumental, usually loud and full of energy with high tempo. This is not too surprising, as three of the four artists perform EDM / House music, and Coldplay is known for their live concerts.
Cluster 2 contains two artists: Frank Sinatra and Norah Jones (any jazz fans out there?). Their music scores high on acousticness and the major-scale mode, but low on all the remaining attributes. Typical jazz tunes.
Cluster 3 contains ten artists: Post Malone, Kygo, The Chainsmokers, Adele, Lorde, Lana Del Rey, James Arthur, One Republic, John Legend, and Alicia Keys. This cluster scores average in mostly all the attributes. This suggests that this group of artists is well-balanced and versatile with style and creation, hence the diversity of genres presented in this cluster (EDM, Pop, R&B).
Cluster 4 contains 15 artists: Ariana Grande, Maroon 5, Drake, Justin Bieber, Bruno Mars, Calvin Harris, Charlie Puth, Enrique Iglesias, Future, Wiz Khalifa, Fifth Harmony, Usher, Macklemore, Zara Larsson, and Rihanna. Their music is danceable, loud, high-tempo, and energetic. This group has the presence of many young mainstream artists in the Pop and Hip-Hop genres.
Cluster 5 contains 10 artists: Taylor Swift, Shawn Mendes, Ed Sheeran, Michael Buble, John Mayer, Elton John, The Script, Stevie Wonder, Jason Mraz, and Pentatonix. This is my favorite group! Taylor Swift? Ed Sheeran? John Mayer? Jason Mraz? Elton John? I guess I listen to a lot of singer-songwriter artists. Their music is mostly in the Major scale, while achieving perfect balance (average score) in all other attributes.
Cluster 6 contains nine artists: Kendrick Lamar, Kanye West, Eminem, Beyonce, Justin Timberlake, 50 Cent, Lil Wayne, Snoop Dogg, and Jay-Z. You can already see the trend here: seven of them are rappers, and even Beyonce and JT regularly collaborate with rappers. Their songs have a high number of spoken words and speech-like sections, are long in duration, and are often performed live. Any better description of rap music?
The following charts show the values for each feature for every musician. The code below details the process for making the danceability diverging bar plot. The code for the other features has been omitted, but each feature’s plot is displayed subsequently.
Danceability
If you want to bust some moves and impress your crush, try listening to more of Future, Drake, Wiz Khalifa, Snoop Dogg, and Eminem. On the other hand, don’t even attempt to dance to Frank Sinatra’s or Lana Del Rey’s tunes.
Energy
You’re a fairly energetic person if you listen to lots of Marshmello, Calvin Harris, Enrique Iglesias, Martin Garrix, Eminem, and Jay-Z. The opposite is true if you’re a fan of Frank Sinatra and Norah Jones.
Loudness
The Loudness ranking is almost the same as the Energy one.
Speechiness
All the rap fans out there: what are your favorite songs by Kendrick Lamar? Or 50 Cent? Or Jay-Z? Hmm, I’m surprised Eminem does not rank higher, as I personally think he’s the GOAT of all rappers.
Acousticness
Acousticness is the exact opposite of Loudness and Energy. Mr. Sinatra and Mrs. Jones released some powerful acoustic tracks throughout their careers.
Instrumentalness
EDM for the win! Martin Garrix, Avicii, and Marshmello produce tracks that contain almost no vocals.
Liveness
So which five artists have the most live-sounding recordings? Jason Mraz, Coldplay, Martin Garrix, Kanye West, and Kendrick Lamar, in that order.
Valence
Valence is the feature that describes the musical positiveness conveyed by a track. Music by Bruno Mars, Stevie Wonder, and Enrique Iglesias is very positive, while music by Lana Del Rey, Coldplay, and Martin Garrix sounds quite negative.
Tempo
Future, Marshmello, and Wiz Khalifa are kings of speed. They produce tracks with the highest tempo in beats per minute. And Snoop Dogg, lol? He tends to take some time to utter his magic words.
Duration
Last but not least, songs by Justin Timberlake, followed by Elton John and Eminem, are, sometimes excruciatingly, long. In contrast, Frank Sinatra, Zara Larsson, and Pentatonix favor shorter music.
Whoa, I had a lot of fun doing this analysis and visualization project on Spotify data. Who would have thought that James Arthur and Post Malone would land in the same cluster? Or that Kendrick Lamar is the speediest rapper in the game? Or that Marshmello would beat Martin Garrix in producing energetic tracks?
Anyway, you can view the complete R Markdown, the separate R code for processing and visualizing the data, and the original dataset in my GitHub repository here. From my own perspective, R is much better at data visualization than Python, thanks to libraries such as ggplot and plot.ly. I highly encourage you to give R a try!
— —
If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. You can find my own code on GitHub, and more of my writing and projects at https://jameskle.com/. You can also follow me on Twitter, email me directly or find me on LinkedIn. Sign up for my newsletter to receive my latest thoughts on data science, machine learning, and artificial intelligence right at your inbox!
Implementation of Minesweeper Game | 12 Apr, 2022
Remember the old Minesweeper? We play on a square board and have to click on the cells that do not contain a mine, without knowing in advance where the mines are. If we click a cell where a mine is present, we lose; otherwise, we are still in the game. There are three levels for this game:
Beginner – 9 * 9 Board and 10 Mines
Intermediate – 16 * 16 Board and 40 Mines
Advanced – 24 * 24 Board and 99 Mines
Probability of finding a mine –
Beginner level – 10/81 (0.12)
Intermediate level – 40/256 (0.15)
Advanced level – 99 / 576 (0.17)
The increasing number of tiles raises the difficulty bar, so the complexity grows as we proceed to the next levels. It might seem like a purely luck-based game (you are lucky if you never step on a mine during the whole game and unlucky if you do), but it is not: you can win almost every time if you follow the hints given by the game itself.
Hints for Winning the Game
When we click on a cell having mines in one or more of the surrounding eight cells, we get to know how many adjacent cells contain mines, so we can make logical guesses about which cells have mines.
If you click on a cell having no adjacent mines (in any of the surrounding eight cells) then all the adjacent cells are automatically cleared, thus saving our time.
So we can see that we don’t always have to click on all the cells not having the mines (total number of cells – number of mines) to win. If we are lucky then we can win in very short time by clicking on the cells which don’t have any adjacent cells having mines.
Implementation
Two implementations of the game are given here:
In the first implementation, the user’s move is selected randomly using rand() function.
In the second implementation, the user himself select his moves using scanf() function.
There are two boards: realBoard and myBoard. We play on myBoard, while realBoard stores the locations of the mines. Throughout the game, realBoard remains unchanged, whereas myBoard changes with every move the user makes.

We can choose a level among BEGINNER, INTERMEDIATE, and ADVANCED by passing one of them to the function chooseDifficultyLevel() [in the user-input game, this option is instead asked of the user before playing]. Once the level is chosen, realBoard and myBoard are initialized accordingly and the mines are placed randomly on realBoard. We also assign the moves up front using the function assignMoves() [in the user-input game, the user supplies his own moves throughout the game].

We can cheat before playing (by learning the positions of the mines) using the function cheatMinesweeper(). In the code this call is commented out, so if you are afraid of losing, uncomment it and then play!

The game is then played until the user either wins (by never clicking on a mine-containing cell) or loses (by clicking on one). This is represented by a while() loop, which terminates when the user either wins or loses. The makeMove() function inside the loop picks the next move from the randomly assigned moves [in the user-input game, this function prompts the user to enter a move]. To guarantee that the user's first move is always safe (losing on the very first click would be quite unfair), we add a check with the if statement if (currentMoveIndex == 0).

The lifeline of this program is the recursive function playMinesweeperUtil(). It returns true if the user clicks on a mine (and therefore loses). Otherwise, for a safe cell, it computes the count of surrounding mines using countAdjacentMines(); since a cell has at most 8 surrounding cells, all 8 are checked. If there are no adjacent mines, the function recursively opens all the safe adjacent cells (reducing game-play time); if there is at least one adjacent mine, the count is displayed on the current cell. This is given as a hint to the player so that he can use logic to avoid stepping on cells with mines.

Because cells with no adjacent mines are cleared automatically, we don't always have to click on every mine-free cell (total number of cells minus number of mines) to win; if we are lucky, the game can be won very quickly by clicking cells whose neighbourhoods are mine-free. The user keeps playing until he clicks a mined cell (in which case he loses) or has opened every safe cell (in which case he wins).
CPP
// A C++ Program to Implement and Play Minesweeper
#include <bits/stdc++.h>
using namespace std;

#define BEGINNER 0
#define INTERMEDIATE 1
#define ADVANCED 2
#define MAXSIDE 25
#define MAXMINES 99
#define MOVESIZE 526 // (25 * 25 - 99)

int SIDE;  // side length of the board
int MINES; // number of mines on the board

// Returns true if (row, col) is a valid cell on the board
bool isValid(int row, int col)
{
    return (row >= 0) && (row < SIDE) && (col >= 0) && (col < SIDE);
}

// Returns true if the given cell holds a mine
bool isMine(int row, int col, char board[][MAXSIDE])
{
    return (board[row][col] == '*');
}

// Gets the user's move
void makeMove(int *x, int *y)
{
    printf("Enter your move, (row, column) -> ");
    scanf("%d %d", x, y);
}

// Prints the current gameplay board
void printBoard(char myBoard[][MAXSIDE])
{
    printf("    ");
    for (int i = 0; i < SIDE; i++)
        printf("%d ", i);
    printf("\n\n");
    for (int i = 0; i < SIDE; i++) {
        printf("%d   ", i);
        for (int j = 0; j < SIDE; j++)
            printf("%c ", myBoard[i][j]);
        printf("\n");
    }
}

// Counts the mines in the 8 adjacent cells of (row, col):
// N, S, E, W, N.E, N.W, S.E, S.W
int countAdjacentMines(int row, int col, char realBoard[][MAXSIDE])
{
    int count = 0;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            if (dr == 0 && dc == 0)
                continue;
            // Only process this neighbour if it is a valid cell
            if (isValid(row + dr, col + dc) &&
                isMine(row + dr, col + dc, realBoard))
                count++;
        }
    return count;
}

// A recursive function to play the Minesweeper game.
// Returns true if the player opened a mine (and lost).
bool playMinesweeperUtil(char myBoard[][MAXSIDE], char realBoard[][MAXSIDE],
                         int mines[][2], int row, int col, int *movesLeft)
{
    // Base case of the recursion: cell already opened
    if (myBoard[row][col] != '-')
        return false;

    // You opened a mine -- you lose
    if (realBoard[row][col] == '*') {
        myBoard[row][col] = '*';
        for (int i = 0; i < MINES; i++)
            myBoard[mines[i][0]][mines[i][1]] = '*';
        printBoard(myBoard);
        printf("\nYou lost!\n");
        return true;
    }

    // Calculate the number of adjacent mines and put it on the board
    int count = countAdjacentMines(row, col, realBoard);
    (*movesLeft)--;
    myBoard[row][col] = count + '0';

    // If no adjacent mines, recur for all 8 safe adjacent cells
    if (!count) {
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0)
                    continue;
                if (isValid(row + dr, col + dc) &&
                    !isMine(row + dr, col + dc, realBoard))
                    playMinesweeperUtil(myBoard, realBoard, mines,
                                        row + dr, col + dc, movesLeft);
            }
    }
    return false;
}

// Places the mines randomly on the board
void placeMines(int mines[][2], char realBoard[][MAXSIDE])
{
    bool mark[MAXSIDE * MAXSIDE];
    memset(mark, false, sizeof(mark));

    // Continue until all random mines have been created
    for (int i = 0; i < MINES;) {
        int random = rand() % (SIDE * SIDE);
        int x = random / SIDE;
        int y = random % SIDE;

        // Add the mine only if no mine is placed at this position
        if (mark[random] == false) {
            mines[i][0] = x; // row index of the mine
            mines[i][1] = y; // column index of the mine
            realBoard[mines[i][0]][mines[i][1]] = '*';
            mark[random] = true;
            i++;
        }
    }
}

// Initialises the game
void initialise(char realBoard[][MAXSIDE], char myBoard[][MAXSIDE])
{
    // Seed the random number generator so that the
    // same configuration doesn't arise every run
    srand(time(NULL));

    // Assign all the cells as mine-free / unopened
    for (int i = 0; i < SIDE; i++)
        for (int j = 0; j < SIDE; j++)
            myBoard[i][j] = realBoard[i][j] = '-';
}

// Cheat by revealing where the mines are placed
void cheatMinesweeper(char realBoard[][MAXSIDE])
{
    printf("The mines locations are-\n");
    printBoard(realBoard);
}

// Moves the mine at (row, col) to the first vacant cell
void replaceMine(int row, int col, char board[][MAXSIDE])
{
    for (int i = 0; i < SIDE; i++)
        for (int j = 0; j < SIDE; j++) {
            // Find the first cell without a mine and put a mine there
            if (board[i][j] != '*') {
                board[i][j] = '*';
                board[row][col] = '-';
                return;
            }
        }
}

// Plays the Minesweeper game
void playMinesweeper()
{
    bool gameOver = false;
    char realBoard[MAXSIDE][MAXSIDE], myBoard[MAXSIDE][MAXSIDE];
    int movesLeft = SIDE * SIDE - MINES, x, y;
    int mines[MAXMINES][2]; // stores (x, y) coordinates of all mines

    initialise(realBoard, myBoard);
    placeMines(mines, realBoard);

    /* If you want to cheat and know where the mines are
       before playing, uncomment this line:
       cheatMinesweeper(realBoard); */

    int currentMoveIndex = 0;
    while (gameOver == false) {
        printf("Current Status of Board : \n");
        printBoard(myBoard);
        makeMove(&x, &y);

        // Guarantee that the first move is always safe:
        // if it hits a mine, relocate that mine first
        if (currentMoveIndex == 0) {
            if (isMine(x, y, realBoard) == true)
                replaceMine(x, y, realBoard);
        }
        currentMoveIndex++;

        gameOver = playMinesweeperUtil(myBoard, realBoard, mines,
                                       x, y, &movesLeft);
        if ((gameOver == false) && (movesLeft == 0)) {
            printf("\nYou won !\n");
            gameOver = true;
        }
    }
}

// Chooses the difficulty level of the game
void chooseDifficultyLevel()
{
    /* BEGINNER     = 9 * 9 Cells and 10 Mines
       INTERMEDIATE = 16 * 16 Cells and 40 Mines
       ADVANCED     = 24 * 24 Cells and 99 Mines */
    int level;
    printf("Enter the Difficulty Level\n");
    printf("Press 0 for BEGINNER (9 * 9 Cells and 10 Mines)\n");
    printf("Press 1 for INTERMEDIATE (16 * 16 Cells and 40 Mines)\n");
    printf("Press 2 for ADVANCED (24 * 24 Cells and 99 Mines)\n");
    scanf("%d", &level);

    if (level == BEGINNER)     { SIDE = 9;  MINES = 10; }
    if (level == INTERMEDIATE) { SIDE = 16; MINES = 40; }
    if (level == ADVANCED)     { SIDE = 24; MINES = 99; }
}

// Driver program to test the above functions
int main()
{
    chooseDifficultyLevel();
    playMinesweeper();
    return 0;
}
Input:
0
1 2
2 3
3 4
4 5
Output:
Enter the Difficulty Level
Press 0 for BEGINNER (9 * 9 Cells and 10 Mines)
Press 1 for INTERMEDIATE (16 * 16 Cells and 40 Mines)
Press 2 for ADVANCED (24 * 24 Cells and 99 Mines)
Current Status of Board :
0 1 2 3 4 5 6 7 8
0 - - - - - - - - -
1 - - - - - - - - -
2 - - - - - - - - -
3 - - - - - - - - -
4 - - - - - - - - -
5 - - - - - - - - -
6 - - - - - - - - -
7 - - - - - - - - -
8 - - - - - - - - -
Enter your move, (row, column) -> Current Status of Board :
0 1 2 3 4 5 6 7 8
0 - - - - - - - - -
1 - - 2 - - - - - -
2 - - - - - - - - -
3 - - - - - - - - -
4 - - - - - - - - -
5 - - - - - - - - -
6 - - - - - - - - -
7 - - - - - - - - -
8 - - - - - - - - -
Enter your move, (row, column) -> 0 1 2 3 4 5 6 7 8
0 - - - - - - - * *
1 - - 2 * - - - - -
2 - - - * * - - - -
3 - - - - - - - * -
4 - - - - - - - - -
5 - - - - - - - - -
6 - - * - - - - - -
7 - - - - * - - * -
8 - * - - - - - - -
You lost!
Below is the implementation where the user's moves are chosen randomly:
CPP
// A C++ Program to Implement and Play Minesweeper
// without taking input from the user
#include <bits/stdc++.h>
using namespace std;

#define BEGINNER 0
#define INTERMEDIATE 1
#define ADVANCED 2
#define MAXSIDE 25
#define MAXMINES 99
#define MOVESIZE 526 // (25 * 25 - 99)

int SIDE;  // side length of the board
int MINES; // number of mines on the board

// Returns true if (row, col) is a valid cell on the board
bool isValid(int row, int col)
{
    return (row >= 0) && (row < SIDE) && (col >= 0) && (col < SIDE);
}

// Returns true if the given cell holds a mine
bool isMine(int row, int col, char board[][MAXSIDE])
{
    return (board[row][col] == '*');
}

// Gets the user's move from the pre-assigned list and prints it.
// All the moves are assumed to be distinct and valid.
void makeMove(int *x, int *y, int moves[][2], int currentMoveIndex)
{
    *x = moves[currentMoveIndex][0];
    *y = moves[currentMoveIndex][1];
    printf("\nMy move is (%d, %d)\n", *x, *y);

    /* The moves above are pre-defined. If you want to make your
       own move, comment the section above and uncomment:
       scanf("%d %d", x, y); */
}

// Randomly assigns all the moves up front
void assignMoves(int moves[][2], int movesLeft)
{
    bool mark[MAXSIDE * MAXSIDE];
    memset(mark, false, sizeof(mark));

    // Continue until all moves are assigned
    for (int i = 0; i < movesLeft;) {
        int random = rand() % (SIDE * SIDE);
        int x = random / SIDE;
        int y = random % SIDE;

        // Use this position only if it has not been used yet
        if (mark[random] == false) {
            moves[i][0] = x; // row index of the move
            moves[i][1] = y; // column index of the move
            mark[random] = true;
            i++;
        }
    }
}

// Prints the current gameplay board
void printBoard(char myBoard[][MAXSIDE])
{
    printf("    ");
    for (int i = 0; i < SIDE; i++)
        printf("%d ", i);
    printf("\n\n");
    for (int i = 0; i < SIDE; i++) {
        printf("%d   ", i);
        for (int j = 0; j < SIDE; j++)
            printf("%c ", myBoard[i][j]);
        printf("\n");
    }
}

// Counts the mines in the 8 adjacent cells of (row, col):
// N, S, E, W, N.E, N.W, S.E, S.W
int countAdjacentMines(int row, int col, char realBoard[][MAXSIDE])
{
    int count = 0;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            if (dr == 0 && dc == 0)
                continue;
            // Only process this neighbour if it is a valid cell
            if (isValid(row + dr, col + dc) &&
                isMine(row + dr, col + dc, realBoard))
                count++;
        }
    return count;
}
this cell if this is a valid one if (isValid (row-1, col+1) == true) { if (isMine (row-1, col+1, realBoard) == true) count++; } //----------- 6th Neighbour (North-West) ------------ // Only process this cell if this is a valid one if (isValid (row-1, col-1) == true) { if (isMine (row-1, col-1, realBoard) == true) count++; } //----------- 7th Neighbour (South-East) ------------ // Only process this cell if this is a valid one if (isValid (row+1, col+1) == true) { if (isMine (row+1, col+1, realBoard) == true) count++; } //----------- 8th Neighbour (South-West) ------------ // Only process this cell if this is a valid one if (isValid (row+1, col-1) == true) { if (isMine (row+1, col-1, realBoard) == true) count++; } return (count);} // A Recursive Function to play the Minesweeper Gamebool playMinesweeperUtil(char myBoard[][MAXSIDE], char realBoard[][MAXSIDE], int mines[][2], int row, int col, int *movesLeft){ // Base Case of Recursion if (myBoard[row][col]!='-') return (false); int i, j; // You opened a mine // You are going to lose if (realBoard[row][col] == '*') { myBoard[row][col]='*'; for (i=0; i<MINES; i++) myBoard[mines[i][0]][mines[i][1]]='*'; printBoard (myBoard); printf ("\nYou lost!\n"); return (true) ; } else { // Calculate the number of adjacent mines and put it // on the board. 
int count = countAdjacentMines(row, col, mines, realBoard); (*movesLeft)--; myBoard[row][col] = count + '0'; if (!count) { /* Recur for all 8 adjacent cells N.W N N.E \ | / \ | / W----Cell----E / | \ / | \ S.W S S.E Cell-->Current Cell (row, col) N --> North (row-1, col) S --> South (row+1, col) E --> East (row, col+1) W --> West (row, col-1) N.E--> North-East (row-1, col+1) N.W--> North-West (row-1, col-1) S.E--> South-East (row+1, col+1) S.W--> South-West (row+1, col-1) */ //----------- 1st Neighbour (North) ------------ // Only process this cell if this is a valid one if (isValid (row-1, col) == true) { if (isMine (row-1, col, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row-1, col, movesLeft); } //----------- 2nd Neighbour (South) ------------ // Only process this cell if this is a valid one if (isValid (row+1, col) == true) { if (isMine (row+1, col, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row+1, col, movesLeft); } //----------- 3rd Neighbour (East) ------------ // Only process this cell if this is a valid one if (isValid (row, col+1) == true) { if (isMine (row, col+1, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row, col+1, movesLeft); } //----------- 4th Neighbour (West) ------------ // Only process this cell if this is a valid one if (isValid (row, col-1) == true) { if (isMine (row, col-1, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row, col-1, movesLeft); } //----------- 5th Neighbour (North-East) ------------ // Only process this cell if this is a valid one if (isValid (row-1, col+1) == true) { if (isMine (row-1, col+1, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row-1, col+1, movesLeft); } //----------- 6th Neighbour (North-West) ------------ // Only process this cell if this is a valid one if (isValid (row-1, col-1) == true) { if (isMine (row-1, col-1, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row-1, col-1, 
movesLeft); } //----------- 7th Neighbour (South-East) ------------ // Only process this cell if this is a valid one if (isValid (row+1, col+1) == true) { if (isMine (row+1, col+1, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row+1, col+1, movesLeft); } //----------- 8th Neighbour (South-West) ------------ // Only process this cell if this is a valid one if (isValid (row+1, col-1) == true) { if (isMine (row+1, col-1, realBoard) == false) playMinesweeperUtil(myBoard, realBoard, mines, row+1, col-1, movesLeft); } } return (false); }} // A Function to place the mines randomly// on the boardvoid placeMines(int mines[][2], char realBoard[][MAXSIDE]){ bool mark[MAXSIDE*MAXSIDE]; memset (mark, false, sizeof (mark)); // Continue until all random mines have been created. for (int i=0; i<MINES; ) { int random = rand() % (SIDE*SIDE); int x = random / SIDE; int y = random % SIDE; // Add the mine if no mine is placed at this // position on the board if (mark[random] == false) { // Row Index of the Mine mines[i][0]= x; // Column Index of the Mine mines[i][1] = y; // Place the mine realBoard[mines[i][0]][mines[i][1]] = '*'; mark[random] = true; i++; } } return;} // A Function to initialise the gamevoid initialise (char realBoard[][MAXSIDE], char myBoard[][MAXSIDE]){ // Initiate the random number generator so that // the same configuration doesn't arises srand (time (NULL)); // Assign all the cells as mine-free for (int i=0; i<SIDE; i++) { for (int j=0; j<SIDE; j++) { myBoard[i][j] = realBoard[i][j] = '-'; } } return;} // A Function to cheat by revealing where the mines are// placed.void cheatMinesweeper (char realBoard[][MAXSIDE]){ printf ("The mines locations are-\n"); printBoard (realBoard); return;} // A function to replace the mine from (row, col) and put// it to a vacant spacevoid replaceMine (int row, int col, char board[][MAXSIDE]){ for (int i=0; i<SIDE; i++) { for (int j=0; j<SIDE; j++) { // Find the first location in the board // which is not 
having a mine and put a mine // there. if (board[i][j] != '*') { board[i][j] = '*'; board[row][col] = '-'; return; } } } return;} // A Function to play Minesweeper gamevoid playMinesweeper (){ // Initially the game is not over bool gameOver = false; // Actual Board and My Board char realBoard[MAXSIDE][MAXSIDE], myBoard[MAXSIDE][MAXSIDE]; int movesLeft = SIDE * SIDE - MINES, x, y; int mines[MAXMINES][2]; // Stores (x, y) coordinates of all mines. int moves[MOVESIZE][2]; // Stores (x, y) coordinates of the moves // Initialise the Game initialise (realBoard, myBoard); // Place the Mines randomly placeMines (mines, realBoard); // Assign Moves // If you want to make your own input move, // then the below function should be commented assignMoves (moves, movesLeft); /* //If you want to cheat and know //where mines are before playing the game //then uncomment this part cheatMinesweeper(realBoard); */ // You are in the game until you have not opened a mine // So keep playing int currentMoveIndex = 0; while (gameOver == false) { printf ("Current Status of Board : \n"); printBoard (myBoard); makeMove (&x, &y, moves, currentMoveIndex); // This is to guarantee that the first move is // always safe // If it is the first move of the game if (currentMoveIndex == 0) { // If the first move itself is a mine // then we remove the mine from that location if (isMine (x, y, realBoard) == true) replaceMine (x, y, realBoard); } currentMoveIndex ++; gameOver = playMinesweeperUtil (myBoard, realBoard, mines, x, y, &movesLeft); if ((gameOver == false) && (movesLeft == 0)) { printf ("\nYou won !\n"); gameOver = true; } } return;} // A Function to choose the difficulty level// of the gamevoid chooseDifficultyLevel (int level){ /* --> BEGINNER = 9 * 9 Cells and 10 Mines --> INTERMEDIATE = 16 * 16 Cells and 40 Mines --> ADVANCED = 24 * 24 Cells and 99 Mines */ if (level == BEGINNER) { SIDE = 9; MINES = 10; } if (level == INTERMEDIATE) { SIDE = 16; MINES = 40; } if (level == ADVANCED) { SIDE = 24; 
MINES = 99; } return;} // Driver Program to test above functionsint main(){ /* Choose a level between --> BEGINNER = 9 * 9 Cells and 10 Mines --> INTERMEDIATE = 16 * 16 Cells and 40 Mines --> ADVANCED = 24 * 24 Cells and 99 Mines */ chooseDifficultyLevel (BEGINNER); playMinesweeper (); return (0);}
Output:
Current Status of Board :
0 1 2 3 4 5 6 7 8
0 - - - - - - - - -
1 - - - - - - - - -
2 - - - - - - - - -
3 - - - - - - - - -
4 - - - - - - - - -
5 - - - - - - - - -
6 - - - - - - - - -
7 - - - - - - - - -
8 - - - - - - - - -
My move is (4, 7)
Current Status of Board :
0 1 2 3 4 5 6 7 8
0 - - - - - - - - -
1 - - - - - - - - -
2 - - - - - - - - -
3 - - - - - - - - -
4 - - - - - - - 2 -
5 - - - - - - - - -
6 - - - - - - - - -
7 - - - - - - - - -
8 - - - - - - - - -
My move is (3, 7)
Current Status of Board :
0 1 2 3 4 5 6 7 8
0 - - - - - - - - -
1 - - - - - - - - -
2 - - - - - - - - -
3 - - - - - - - 1 -
4 - - - - - - - 2 -
5 - - - - - - - - -
6 - - - - - - - - -
7 - - - - - - - - -
8 - - - - - - - - -
My move is (7, 3)
0 1 2 3 4 5 6 7 8
0 - - - - - - - - -
1 - - - - * - - - -
2 - - - - * - - - -
3 - - - - - - - 1 -
4 - - - * - - * 2 -
5 - - - - - - - * -
6 * - * - - - * - -
7 - - - * * - - - -
8 - - - - - - - - -
You lost!
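The recursive reveal in playMinesweeperUtil is a classic flood fill: opening a cell with zero adjacent mines recursively opens all eight in-range, mine-free neighbours. The same idea can be sketched compactly in Python (the 3x3 board and mine position below are made up purely for illustration):

```python
def neighbours(r, c, side):
    """Yield all in-range cells among the 8 surrounding (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < side and 0 <= c + dc < side:
                yield r + dr, c + dc

def reveal(mines, side, r, c, opened):
    """Open cell (r, c); return True if a mine was hit."""
    if (r, c) in opened:
        return False
    if (r, c) in mines:
        return True
    opened.add((r, c))
    count = sum((nr, nc) in mines for nr, nc in neighbours(r, c, side))
    if count == 0:  # zero adjacent mines: open every neighbour too
        for nr, nc in neighbours(r, c, side):
            reveal(mines, side, nr, nc, opened)
    return False

# Example: one mine in the corner of a 3x3 board; open the far corner.
mines = {(0, 0)}
opened = set()
hit = reveal(mines, 3, 2, 2, opened)
print(hit, len(opened))  # False 8
```

The flood fill opens every safe cell in one move here, which is exactly why opening a "0" cell in Minesweeper clears a whole region at once.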
This article is contributed by Rachit Belwariar.
PostgreSQL - GROUP BY | The PostgreSQL GROUP BY clause is used in collaboration with the SELECT statement to group together those rows in a table that have identical data. This is done to eliminate redundancy in the output and/or compute aggregates that apply to these groups.
The GROUP BY clause follows the WHERE clause in a SELECT statement and precedes the ORDER BY clause.
The basic syntax of GROUP BY clause is given below. The GROUP BY clause must follow the conditions in the WHERE clause and must precede the ORDER BY clause if one is used.
SELECT column-list
FROM table_name
WHERE [ conditions ]
GROUP BY column1, column2....columnN
ORDER BY column1, column2....columnN
You can use more than one column in the GROUP BY clause. Make sure whatever column you are using to group, that column should be available in column-list.
Consider the table COMPANY having records as follows −
# select * from COMPANY;
id | name | age | address | salary
----+-------+-----+-----------+--------
1 | Paul | 32 | California| 20000
2 | Allen | 25 | Texas | 15000
3 | Teddy | 23 | Norway | 20000
4 | Mark | 25 | Rich-Mond | 65000
5 | David | 27 | Texas | 85000
6 | Kim | 22 | South-Hall| 45000
7 | James | 24 | Houston | 10000
(7 rows)
If you want to know the total salary paid under each name, the GROUP BY query would be as follows −
testdb=# SELECT NAME, SUM(SALARY) FROM COMPANY GROUP BY NAME;
This would produce the following result −
name | sum
-------+-------
Teddy | 20000
Paul | 20000
Mark | 65000
David | 85000
Allen | 15000
Kim | 45000
James | 10000
(7 rows)
Now, let us create three more records in COMPANY table using the following INSERT statements −
INSERT INTO COMPANY VALUES (8, 'Paul', 24, 'Houston', 20000.00);
INSERT INTO COMPANY VALUES (9, 'James', 44, 'Norway', 5000.00);
INSERT INTO COMPANY VALUES (10, 'James', 45, 'Texas', 5000.00);
Now, our table has the following records with duplicate names −
id | name | age | address | salary
----+-------+-----+--------------+--------
1 | Paul | 32 | California | 20000
2 | Allen | 25 | Texas | 15000
3 | Teddy | 23 | Norway | 20000
4 | Mark | 25 | Rich-Mond | 65000
5 | David | 27 | Texas | 85000
6 | Kim | 22 | South-Hall | 45000
7 | James | 24 | Houston | 10000
8 | Paul | 24 | Houston | 20000
9 | James | 44 | Norway | 5000
10 | James | 45 | Texas | 5000
(10 rows)
Again, let us use the same statement to group all the records by the NAME column as follows −
testdb=# SELECT NAME, SUM(SALARY) FROM COMPANY GROUP BY NAME ORDER BY NAME;
This would produce the following result −
name | sum
-------+-------
Allen | 15000
David | 85000
James | 20000
Kim | 45000
Mark | 65000
Paul | 40000
Teddy | 20000
(7 rows)
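The grouping behaviour shown above can also be reproduced outside PostgreSQL — for instance with Python's built-in sqlite3 module, whose GROUP BY semantics are the same for this query. A sketch using a subset of the COMPANY rows from above:

```python
import sqlite3

# Rebuild a small COMPANY table in an in-memory SQLite database and
# run the same GROUP BY query as in the PostgreSQL example.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE company (id INT, name TEXT, age INT, address TEXT, salary INT)"
)
rows = [
    (1, "Paul", 32, "California", 20000),
    (2, "Allen", 25, "Texas", 15000),
    (8, "Paul", 24, "Houston", 20000),   # duplicate name: gets grouped
]
con.executemany("INSERT INTO company VALUES (?, ?, ?, ?, ?)", rows)

result = con.execute(
    "SELECT name, SUM(salary) FROM company GROUP BY name ORDER BY name"
).fetchall()
print(result)  # [('Allen', 15000), ('Paul', 40000)]
```

The two "Paul" rows collapse into a single group whose SUM(salary) is 40000, just as in the ten-row PostgreSQL example.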
Let us use ORDER BY clause along with GROUP BY clause as follows −
testdb=# SELECT NAME, SUM(SALARY)
FROM COMPANY GROUP BY NAME ORDER BY NAME DESC;
This would produce the following result −
name | sum
-------+-------
Teddy | 20000
Paul | 40000
Mark | 65000
Kim | 45000
James | 20000
David | 85000
Allen | 15000
wc command in Linux with Examples | wc - print newline count, word count and byte count for each file.
wc [OPTION]... [FILE]...
wc [OPTION]... --files0-from=F
The options below may be used to select which counts are printed, always in the following order: newline, word, character, byte, maximum line length.
-c, --bytes
print the byte counts
-m, --chars
print the character counts
-l, --lines
print the newline counts
--files0-from=F
read input from the files specified by NUL-terminated names in file F; If F is - then read names from standard input
-L, --max-line-length
print the maximum display width
-w, --words
print the word counts
--help
display this help and exit
--version
output version information and exit
wc (short for word count) is a command in Unix and Unix-like operating systems. It is mainly used for counting lines, words and characters in files.
By default it displays four-columnar output: the first column shows the number of lines in the specified file, the second the number of words, the third the number of characters, and the fourth the file name given as an argument.
'wc' prints one line of counts for each file, and if the file was given as an argument, it prints the file name following the counts. If more than one FILE is given, 'wc' prints a final line containing the cumulative counts, with the file name 'total'. The counts are printed in this order: newlines, words, characters, bytes, maximum line length.
Each count is printed right-justified in a field with at least one space between fields so that the numbers and file names normally line up nicely in columns. The width of the count fields varies depending on the inputs, so you should not depend on a particular field width.
Let us consider two files having name test1.txt and test2.txt
$ cat test1.txt
delhi
mumbai
bangalore
chennai
kolkata
$ cat test2.txt
a) a.k. shukla
b) anat hari
c) barun kumar
d) jai sharma
e) sumit singh
1. Passing only one file name in the argument and count number of lines, words and characters.
$ wc test1.txt
5 5 39 test1.txt
$ wc test2.txt
5 15 72 test2.txt
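The three numbers wc prints for test1.txt can be reproduced in a few lines of Python, which makes a handy sanity check (the file contents are taken from the listing above):

```python
# Reproduce `wc test1.txt` (lines, words, bytes) in Python.
text = "delhi\nmumbai\nbangalore\nchennai\nkolkata\n"

lines = text.count("\n")          # wc -l counts newline characters
words = len(text.split())         # wc -w counts whitespace-separated words
nbytes = len(text.encode())       # wc -c counts bytes

print(lines, words, nbytes)  # 5 5 39
```

Note that wc -l counts newline characters, not "lines" in the intuitive sense: a file whose last line lacks a trailing newline reports one fewer line than you might expect.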
2. wc command allows us to pass more than one file name in the argument.
$ wc test1.txt test2.txt
5 5 39 test1.txt
5 15 72 test2.txt
10 20 111 total
$wc Bikaner_*
229 2513 45154 Bikaner_eng2ban_gmt.txt
229 2756 46626 Bikaner_hin2ban_bmt.txt
229 2802 48698 Bikaner_hin2ban_gmt.txt
687 8071 140478 total
3. wc command allows us to count lines, words and characters in UTF-8 encoded files as well.
$ wc test1.txt test2.txt
5 5 39 test1.txt
5 15 72 test2.txt
10 20 111 total
$wc Bikaner_*
229 2513 45154 Bikaner_eng2ban_gmt.txt
229 2756 46626 Bikaner_hin2ban_bmt.txt
229 2802 48698 Bikaner_hin2ban_gmt.txt
687 8071 140478 total
4. To print number of lines present in a file use -l or --lines option.
$ wc -l test2.txt
5 test2.txt
$ wc --lines test2.txt
5 test2.txt
5. To print the numbers of bytes in a file use -c or --bytes option.
$ wc -c test1.txt
39 test1.txt
$ wc --bytes test1.txt
39 test1.txt
6. To print the number of characters in a file use -m or --chars option.
$ wc -m test2.txt
72 test2.txt
$ wc --chars test2.txt
72 test2.txt
7. To print the number of words present in a file use -w or --words option.
$ wc -w test1.txt
5 test1.txt
$ wc --words test1.txt
5 test1.txt
8. In case of files that contain characters that have multiple bytes (UTF-8), the number of bytes reported is different from the number of characters. The files listed below contain Hindi and Bangla Unicode characters with UTF-8 encodings, so the byte count reported by the '-c' option differs from the character count reported by the '-m' option.
$ wc -c Bikaner_*
45154 Bikaner_eng2ban_gmt.txt
46626 Bikaner_hin2ban_bmt.txt
48698 Bikaner_hin2ban_gmt.txt
140478 total
$ wc -m Bikaner_*
17394 Bikaner_eng2ban_gmt.txt
17952 Bikaner_hin2ban_bmt.txt
18702 Bikaner_hin2ban_gmt.txt
54048 total
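The byte/character discrepancy is easy to demonstrate in Python: a string of non-ASCII characters has more UTF-8 bytes than characters. The sample word below is an arbitrary Hindi word chosen for illustration, not taken from the files above:

```python
# For ASCII text, -c (bytes) and -m (chars) agree; for UTF-8 encoded
# Devanagari they do not, because each such character takes 3 bytes.
ascii_text = "delhi"
hindi_text = "\u0926\u093f\u0932\u094d\u0932\u0940"  # "Delhi" in Devanagari

print(len(ascii_text), len(ascii_text.encode("utf-8")))  # 5 5
print(len(hindi_text), len(hindi_text.encode("utf-8")))  # 6 18
```

This is why the '-m' counts for the Bikaner files above are roughly a third of the '-c' counts.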
9. The wc command can be used in combination with other commands through piping.
We can Count the number of files in the Current Directory by the help of wc command. The find command passes a list of all files in the current directory with each file name on a single line to the wc command, which counts the number of lines.
$ find . -type f | wc -l
64425
10. To count the number of records (or rows) across several CSV files, wc can be used in conjunction with pipes.
$ cat *.csv | wc -l
JavaScript Array flat() Method | 29 Oct, 2021
Below is the example of the Array flat() method.
Example:

<script>
    // Creating a multilevel array
    const numbers = [['1', '2'], ['3', '4', ['5', ['6'], '7']]];
    const flatNumbers = numbers.flat(Infinity);
    document.write(flatNumbers);
</script>
Output:
1,2,3,4,5,6,7
The arr.flat() method was introduced in ES2019. It is used to flatten an array, to reduce the nesting of an array.
The flat() method is heavily used in functional programming paradigm of JavaScript. Before flat() method was introduced in JavaScript, various libraries such as underscore.js were primarily used.
Syntax:
arr.flat([depth])
Parameters: This method accepts a single parameter as mentioned above and described below:
depth: It specifies how deep the nested array should be flattened. It is an optional parameter; the default value is 1 if no depth value is passed.
Return value: It returns a new array that is depth levels flatter than the original array; nesting is removed according to the depth value.
More codes for the above function are defined as follows:
Program 1: The following code snippet shows, how flat() method works.
<script>
    let nestedArray = [1, [2, 3], [[]], [4, [5]], 6];

    let zeroFlat = nestedArray.flat(0);
    document.write(`Zero levels flattened array: ${zeroFlat}`);
    document.write("<br>");

    // 1 is the default value even if no parameter is passed
    let oneFlat = nestedArray.flat(1);
    document.write(`One level flattened array: ${oneFlat}`);
    document.write("<br>");

    let twoFlat = nestedArray.flat(2);
    document.write(`Two levels flattened array: ${twoFlat}`);
    document.write("<br>");

    // No effect when depth is 3 or more, since the array is
    // already flattened completely
    let threeFlat = nestedArray.flat(3);
    document.write(`Three levels flattened array: ${threeFlat}`);
</script>
Output:
Zero levels flattened array: [1, [2, 3], [[]], [4, [5]], 6]
One level flattened array: [1, 2, 3, [], 4, [5], 6]
Two levels flattened array: [1, 2, 3, 4, 5, 6]
Three levels flattened array: [1, 2, 3, 4, 5, 6]
Note: For depth greater than 2, array remains the same, since it is already flattened completely.
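The depth semantics of flat() are easy to mirror in another language, which makes the behaviour concrete. A Python sketch (not part of the article's JavaScript, and ignoring JavaScript's extra hole-removal behaviour, since Python lists have no holes):

```python
def flat(arr, depth=1):
    """Mimic Array.prototype.flat(): unnest lists up to `depth` levels."""
    if depth < 1:
        return list(arr)
    out = []
    for item in arr:
        if isinstance(item, list):
            # One level of nesting consumed; recurse with depth - 1.
            out.extend(flat(item, depth - 1))
        else:
            out.append(item)
    return out

nested = [1, [2, 3], [[]], [4, [5]], 6]
print(flat(nested, 0))  # [1, [2, 3], [[]], [4, [5]], 6]
print(flat(nested, 1))  # [1, 2, 3, [], 4, [5], 6]
print(flat(nested, 2))  # [1, 2, 3, 4, 5, 6]
```

As with the JavaScript version, asking for a depth beyond the actual nesting is harmless: once fully flattened, further levels change nothing.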
Program 2: We can also remove empty slots or empty values in an array by using flat() method.
<script>
    let arr = [1, 2, 3, , 4];
    let newArr = arr.flat();
    document.write(newArr);
</script>
Output:
[1, 2, 3, 4]
Supported Browsers:
Google Chrome 69 and above
Edge 79 and above
Firefox 62 and above
Opera 56 and above
Safari 12 and above
How to send an email from JavaScript ? | 16 May, 2022
In this article, we will learn how to send mail using Simple Mail Transfer Protocol which is a free JavaScript library. It is basically used to send emails, so it only works for outgoing emails. To be able to send emails, you need to provide the correct SMTP server when you set up your email client. Most internet systems use SMTP as a method to transfer mail from one user to another. It is a push protocol. In order to use SMTP, you need to configure your Gmail. You need to change two settings of your Gmail account from which you are sending the mail i.e.
Revoke 2-step verification
Enabling less secure apps to access Gmail, which can be done from the Google account's security settings
After this just create a HTML file and include SMTP in your <script></script> tag :
<script src="https://smtpjs.com/v3/smtp.js"></script>
Below is the HTML file which you will need to run in order to send the mail.
html
<!DOCTYPE html>
<html>
<head>
    <title>Send Mail</title>
    <script src="https://smtpjs.com/v3/smtp.js"></script>
    <script type="text/javascript">
        function sendEmail() {
            Email.send({
                Host: "smtp.gmail.com",
                Username: "sender@email_address.com",
                Password: "Enter your password",
                To: 'receiver@email_address.com',
                From: "sender@email_address.com",
                Subject: "Sending Email using javascript",
                Body: "Well that was easy!!",
            }).then(function (message) {
                alert("mail sent successfully");
            });
        }
    </script>
</head>
<body>
    <form method="post">
        <input type="button" value="Send Email"
               onclick="sendEmail()" />
    </form>
</body>
</html>
Just click on the button and the mail will be sent:
You will see below pop-up if the mail has been sent successfully.
Now the question is what if you have multiple receivers. In that case, you have to do nothing just configure your sendMail() function as described below:
To: 'first_username@gmail.com, second_username@gmail.com',
Rest all we be same. If you want to send HTML formatted text to the receiver, then you need to add the below code in your mail function:
html: "<h1>GeeksforGeeks</h1>
<p>A computer science portal</p>"
At last, in order to send an attachment just write the following code in sendMail() function:
Attachments: [{
    name: "File_Name_with_Extension",
    path: "Full Path of the file"
}]
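Outside the browser, the same job is usually done server-side; Python's standard smtplib and email modules are one common choice. A sketch for comparison — the addresses and password below are placeholders, and the actual send call is left commented out so nothing is transmitted:

```python
import smtplib
from email.message import EmailMessage

# Build the same kind of message as the Email.send() call above.
msg = EmailMessage()
msg["From"] = "sender@example.com"        # placeholder address
msg["To"] = "receiver@example.com"        # placeholder address
msg["Subject"] = "Sending Email using Python"
msg.set_content("Well that was easy!!")

def send(message, password):
    # Gmail's SSL SMTP endpoint; modern Gmail requires an app password.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(message["From"], password)
        server.send_message(message)

# send(msg, "app-password")  # uncomment with real credentials
print(msg["Subject"])
```

Multiple recipients and attachments follow the same pattern: a comma-separated "To" header and EmailMessage.add_attachment() respectively.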
So the final JavaScript code after the above configuration will look like as follows:
html
<!DOCTYPE html><html> <head> <title>Sending Mail</title> <script src="https://smtpjs.com/v3/smtp.js"></script> <script type="text/javascript"> function sendEmail() { Email.send({ Host: "smtp.gmail.com", Username: "sender@email_address.com", Password: "Enter your password", To: 'receiver@email_address.com', From: "sender@email_address.com", Subject: "Sending Email using javascript", Body: "Well that was easy!!", Attachments: [ { name: "File_Name_with_Extension", path: "Full Path of the file" }] }) .then(function (message) { alert("Mail has been sent successfully") }); } </script></head> <body> <form method="post"> <input type="button" value="Send Mail" onclick="sendEmail()" /> </form></body> </html>
ramsamarth21bcs24
HTML-Misc
JavaScript-Misc
Picked
HTML
JavaScript
Web Technologies
Web technologies Questions
HTML
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to update Node.js and NPM to next version ?
REST API (Introduction)
CSS to put icon inside an input element in a form
Types of CSS (Cascading Style Sheet)
Design a Tribute Page using HTML & CSS
Difference between var, let and const keywords in JavaScript
Differences between Functional Components and Class Components in React
Remove elements from a JavaScript Array
Difference Between PUT and PATCH Request
How to append HTML code to a div using JavaScript ? | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n16 May, 2022"
},
{
"code": null,
"e": 615,
"s": 54,
"text": "In this article, we will learn how to send mail using Simple Mail Transfer Protocol which is a free JavaScript library. It is basically used to send emails, so it only works ... |
How to Convert JSON Array to String Array in Java? | 28 Mar, 2021
JSON stands for JavaScript Object Notation. It is one of the widely used formats to exchange data by web applications. JSON arrays are almost the same as arrays in JavaScript. They can be understood as a collection of data (strings, numbers, booleans) in an indexed manner. Given a JSON array, we will discuss how to convert it into a String array in Java.
Creating a JSON array
Let’s start with creating a JSON array in Java. Here, we will use some example data to input into the array, but you can use the data as per your requirements.
1. Defining the array
JSONArray exampleArray = new JSONArray();
Note that we will import the org.json package in order to use this command. This is later discussed in the code.
2. Inserting data into the array
We now add some example data into the array.
exampleArray.put("Geeks ");
exampleArray.put("For ");
exampleArray.put("Geeks ");
Notice the space given after each string. This is being done here because when we will convert it into a String array, we want to ensure that there is space between each element.
Now that we have our JSON array ready, we can move forward to the next and final step, converting it into a String array.
Conversion into a String array
The approach that we are using here, will first insert all the JSON array elements into a List since it is then easier to convert List into an array.
1. Creating a List
Let’s start by creating a List.
List<String> exampleList = new ArrayList<String>();
2. Adding JSON array data into the List
We can loop through the JSON array to add all elements to our List.
for (int i = 0; i < exampleArray.length(); i++) {
    exampleList.add(exampleArray.getString(i));
}
Now we have all the elements inside our List as strings, and we can simply convert the List into a String array.
3. Getting String array as output
We will use toArray() method to convert the List into a String array.
int size = exampleList.size();
String[] stringArray = exampleList.toArray(new String[size]);
This will convert our JSON array into a String array. The code has been provided below for reference.
Implementation:
Java
// importing the packages
import java.util.*;
import org.json.*;

public class GFG {
    public static void main(String[] args)
    {
        // Initialising a JSON example array
        JSONArray exampleArray = new JSONArray();

        // Entering the data into the array
        exampleArray.put("Geeks ");
        exampleArray.put("For ");
        exampleArray.put("Geeks ");

        // Printing the contents of the JSON example array
        System.out.print("Given JSON array: " + exampleArray);
        System.out.print("\n");

        // Creating the example List and adding the data to it
        List<String> exampleList = new ArrayList<String>();
        for (int i = 0; i < exampleArray.length(); i++) {
            exampleList.add(exampleArray.getString(i));
        }

        // Creating the String array as our final required output
        int size = exampleList.size();
        String[] stringArray = exampleList.toArray(new String[size]);

        // Printing the contents of the String array
        System.out.print("Output String array will be : ");
        for (String s : stringArray) {
            System.out.print(s);
        }
    }
}
Output:
Given JSON array: ["Geeks ","For ","Geeks "]
Output String array will be : Geeks For Geeks
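For comparison, the same conversion is nearly a one-liner in dynamic languages; in Python, for instance, the json module parses a JSON array directly into a list of strings. A sketch, unrelated to the Java code above:

```python
import json

# A JSON array arrives as text; json.loads turns it into a Python list.
json_array = '["Geeks ", "For ", "Geeks "]'
string_list = json.loads(json_array)

# Ensure every element really is a string (mirrors getString() in Java).
strings = [str(s) for s in string_list]
print("".join(strings))  # "Geeks For Geeks "
```

The intermediate List step in the Java version exists because Java arrays are fixed-size; languages with growable lists skip it entirely.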
PyQt5 - QInputDialog Widget | This is a preconfigured dialog with a text field and two buttons, OK and Cancel. The parent window collects the input in the text box after the user clicks on Ok button or presses Enter.
The user input can be a number, a string or an item from the list. A label prompting the user what he should do is also displayed.
The QInputDialog class has the following static methods to accept input from the user −
getInt()
Creates a spinner box for integer number
getDouble()
Spinner box with floating point number can be input
getText()
A simple line edit field to type text
getItem()
A combo box from which user can choose item
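All four static methods share the same calling convention: they block until the dialog closes and return a (value, ok) pair, where ok reports whether the user confirmed or cancelled. That convention can be illustrated without a GUI by a stand-in function (purely illustrative — this stub is not part of PyQt):

```python
# Stand-in for QInputDialog.getInt(): returns (value, ok), where ok is
# False when the user cancels and the value should then be ignored.
def fake_get_int(user_action):
    if user_action is None:           # user pressed Cancel
        return 0, False
    return int(user_action), True     # user pressed OK

num, ok = fake_get_int("42")
if ok:
    print(num)  # 42

num, ok = fake_get_int(None)
print(ok)  # False
```

Checking ok before using the value, as every slot in the example below does, is the idiomatic way to handle a cancelled dialog.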
The following example implements the input dialog functionality. The top level window has three buttons. Their clicked() signal pops up InputDialog through connected slots.
items = ("C", "C++", "Java", "Python")
item, ok = QInputDialog.getItem(self, "select input dialog",
    "list of languages", items, 0, False)
if ok and item:
    self.le.setText(item)

def gettext(self):
    text, ok = QInputDialog.getText(self, 'Text Input Dialog', 'Enter your name:')
    if ok:
        self.le1.setText(str(text))

def getint(self):
    num, ok = QInputDialog.getInt(self, "integer input dialog", "enter a number")
    if ok:
        self.le2.setText(str(num))
The complete code is as follows −
import sys
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *

class inputdialogdemo(QWidget):
   def __init__(self, parent = None):
      super(inputdialogdemo, self).__init__(parent)

      layout = QFormLayout()
      self.btn = QPushButton("Choose from list")
      self.btn.clicked.connect(self.getItem)
      self.le = QLineEdit()
      layout.addRow(self.btn, self.le)

      self.btn1 = QPushButton("get name")
      self.btn1.clicked.connect(self.gettext)
      self.le1 = QLineEdit()
      layout.addRow(self.btn1, self.le1)

      self.btn2 = QPushButton("Enter an integer")
      self.btn2.clicked.connect(self.getint)
      self.le2 = QLineEdit()
      layout.addRow(self.btn2, self.le2)

      self.setLayout(layout)
      self.setWindowTitle("Input Dialog demo")

   def getItem(self):
      items = ("C", "C++", "Java", "Python")
      item, ok = QInputDialog.getItem(
         self, "select input dialog", "list of languages", items, 0, False
      )
      if ok and item:
         self.le.setText(item)

   def gettext(self):
      text, ok = QInputDialog.getText(self, 'Text Input Dialog', 'Enter your name:')
      if ok:
         self.le1.setText(str(text))

   def getint(self):
      num, ok = QInputDialog.getInt(self, "integer input dialog", "enter a number")
      if ok:
         self.le2.setText(str(num))

def main():
   app = QApplication(sys.argv)
   ex = inputdialogdemo()
   ex.show()
   sys.exit(app.exec_())

if __name__ == '__main__':
   main()
The above code produces the following output −
C# | Math.Round() Method | Set – 1 | 31 Jan, 2019
In C#, Math.Round() is a Math class method which is used to round a value to the nearest integer or to a particular number of fractional digits. This method can be overloaded by changing the number and type of the arguments passed. There are 8 methods in total in the overload list of the Math.Round() method. Here we will discuss only 4 of them; the remaining 4 are discussed in C# | Math.Round() Method | Set – 2.
Math.Round(Double)
Math.Round(Double, Int32)
Math.Round(Decimal)
Math.Round(Decimal, Int32)
Math.Round(Double, Int32, MidpointRounding)
Math.Round(Double, MidpointRounding)
Math.Round(Decimal, Int32, MidpointRounding)
Math.Round(Decimal, MidpointRounding)
Math.Round(Double)
This method rounds a double-precision floating-point value to the nearest integer value.
Syntax:
public static double Round(double x)
Parameter:
x: A double floating-point number which is to be rounded. Type of this parameter is System.Double.
Return Type:It returns the integer nearest to x and return type is System.Double.
Note: In case if the fractional component of x is halfway between two integers, one of which is even and the other odd, then the even number is returned.
Example:
// C# program to demonstrate the
// Math.Round(Double) method
using System;

class Geeks {

    // Main method
    static void Main(string[] args)
    {
        // Case-1
        // A double value whose fractional part is
        // less than the halfway between two
        // consecutive integers
        Double dx1 = 12.434565d;

        // Output value will be 12
        Console.WriteLine("Rounded value of " + dx1
                          + " is " + Math.Round(dx1));

        // Case-2
        // A double value whose fractional part is
        // greater than the halfway between two
        // consecutive integers
        Double dx2 = 12.634565d;

        // Output value will be 13
        Console.WriteLine("Rounded value of " + dx2
                          + " is " + Math.Round(dx2));
    }
}
Rounded value of 12.434565 is 12
Rounded value of 12.634565 is 13
Explanation: In the above code, the double value is rounded to the nearest integer. If its fractional part is less than halfway between the two surrounding integers, the result is the floor value; if it is greater than halfway, the result is the ceiling value.
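As a quick cross-language sanity check, Python's built-in round() follows the same "round half to even" midpoint rule described in the note above, so the behaviour can be reproduced outside C# (a sketch, not part of the C# API):

```python
# Python's round() also uses "round half to even" (banker's rounding),
# matching the Math.Round midpoint behaviour described above.
print(round(12.434565))  # below halfway -> 12
print(round(12.634565))  # above halfway -> 13
print(round(12.5))       # exactly halfway -> nearest even, 12
print(round(13.5))       # exactly halfway -> nearest even, 14
```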
Math.Round(Double, Int32)
This method rounds a double-precision floating-point value to a specified number of fractional digits.
Syntax:
public static double Round(double x, Int32 y)
Parameter:
x: A double floating-point number which is to be rounded. Type of this parameter is System.Double.
y: It is the number of fractional digits in the returned value. Type of this parameter is System.Int32.
Return Type:It returns the integer nearest to x which contains a number of fractional digits equal to y and return type is System.Double.
Exception: This method will give ArgumentOutOfRangeException if the value of y is less than 0 or greater than 15.
Example:
// C# program to demonstrate the
// Math.Round(Double, Int32) method
using System;

class Geeks {

    // Main method
    static void Main(string[] args)
    {
        // double type
        Double dx1 = 12.434565d;

        // using method
        Console.WriteLine("Rounded value of " + dx1
                          + " is " + Math.Round(dx1, 4));

        // double type
        Double dx2 = 12.634565d;

        // using method
        Console.WriteLine("Rounded value of " + dx2
                          + " is " + Math.Round(dx2, 2));
    }
}
Rounded value of 12.434565 is 12.4346
Rounded value of 12.634565 is 12.63
Explanation: The method checks whether the digit following the specified number of decimal digits is greater than or equal to 5. If it is, the last kept digit is incremented; otherwise it remains the same.
Math.Round(Decimal)
This method rounds a decimal value, whose precision is 128 bits, to the nearest integer value.
Syntax:
public static decimal Round(decimal x)
Parameter:
x: It is decimal number which is to be rounded. Type of this parameter is System.Decimal.
Return Type:It returns the integer nearest to x and return type is System.Decimal.
Note: In case if the fractional component of x is halfway between two integers, one of which is even and the other odd, then the even number is returned.
Example:
// C# program to demonstrate the
// Math.Round(Decimal) method
using System;

class Geeks {

    // Main method
    static void Main(string[] args)
    {
        // Case-1
        // A decimal value whose fractional part is
        // less than the halfway between two
        // consecutive integers
        Decimal dec1 = 12.345m;

        Console.WriteLine("Value of dec1 is " + dec1);
        Console.WriteLine("Rounded value of " + dec1
                          + " is " + Math.Round(dec1));

        // Case-2
        // A decimal value whose fractional part is
        // greater than the halfway between two
        // consecutive integers
        Decimal dec2 = 12.785m;

        Console.WriteLine("Value of dec2 is " + dec2);
        Console.WriteLine("Rounded value of " + dec2
                          + " is " + Math.Round(dec2));
    }
}
Value of dec1 is 12.345
Rounded value of 12.345 is 12
Value of dec2 is 12.785
Rounded value of 12.785 is 13
Math.Round(Decimal, Int32)
This method rounds a decimal value to a specified number of fractional digits.
Syntax:
public static decimal Round(decimal x, Int32 y)
Parameter:
x: A decimal number which is to be rounded. Type of this parameter is System.Decimal.
y: It is the number of fractional digits in the returned value. Type of this parameter is System.Int32.
Return Type:It returns the integer nearest to x which contains a number of fractional digits equal to y and return type is System.Decimal.
Exception: This method will give ArgumentOutOfRangeException if the value of y is less than 0 or greater than 28, and OverflowException if the result is outside the range of a Decimal.
Example:
// C# program to demonstrate the
// Math.Round(Decimal, Int32) method
using System;

class Geeks {

    // Main Method
    static void Main(string[] args)
    {
        // Case - 1
        // The value of the digit after the specified
        // number is less than 5 & Here y = 3
        Decimal dx1 = 12.2234565m;

        // Output value will be 12.223
        Console.WriteLine("Rounded value of " + dx1
                          + " is " + Math.Round(dx1, 3));

        // Case - 2
        // The value of the digit after the specified
        // number is greater than 5 & Here y = 4
        Decimal dx2 = 12.8734765m;

        // Output value will be 12.8735
        Console.WriteLine("Rounded value of " + dx2
                          + " is " + Math.Round(dx2, 4));
    }
}
Rounded value of 12.2234565 is 12.223
Rounded value of 12.8734765 is 12.8735
Note: The above code for Decimal values works in a similar way as in the case of Double values. The only difference between the Decimal and Double types lies in their precision: Double has 64-bit precision, while Decimal has 128-bit precision.
Reference: https://docs.microsoft.com/en-us/dotnet/api/system.math.round?view=netframework-4.7.2
Get and Set the stack size of thread attribute in C | To get and set the stack size of thread attribute in C, we use the following thread attributes:
pthread_attr_getstacksize() is used to get a thread's stack size. The stacksize attribute gives the minimum stack size allocated to the thread's stack. On a successful run it returns 0; otherwise it returns an error number.
It takes two arguments −
pthread_attr_getstacksize(pthread_attr_t *attr, size_t *stacksize)
The first is the pthread attribute object.
The second receives the size of the thread's stack.
pthread_attr_setstacksize() is used to set a new stack size for threads. On a successful run it returns 0; otherwise it returns an error number.
It takes two arguments −
pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize)
The first is the pthread attribute object.
The second gives the size of the new stack in bytes.
Begin
Declare the stack size variable and the pthread attribute a, and initialize the attribute.
Get the current stack size with pthread_attr_getstacksize() and print it.
Set the new stack size with pthread_attr_setstacksize(), read it back with pthread_attr_getstacksize() and print it.
End
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

int main() {
   size_t stacksize;
   pthread_attr_t a;

   pthread_attr_init(&a); /* the attribute object must be initialised first */
   pthread_attr_getstacksize(&a, &stacksize);
   printf("Current stack size = %zu\n", stacksize);

   pthread_attr_setstacksize(&a, 67626);
   pthread_attr_getstacksize(&a, &stacksize);
   printf("New stack size = %zu\n", stacksize);

   pthread_attr_destroy(&a);
   return 0;
}
Current stack size = 50
New stack size= 67626
Font Awesome - Web Application Icons | This chapter explains the usage of Font Awesome Web Application icons. Assume that custom is the CSS class name where we defined the size and color, as shown in the example given below.
<html>
<head>
<link rel = "stylesheet" href = "https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.3.0/css/font-awesome.min.css">
<style>
i.custom {font-size: 2em; color: gray;}
</style>
</head>
<body>
<i class = "fa fa-adjust custom"></i>
</body>
</html>
The following table shows the usage and the results of Font Awesome Web Application icons. Replace the <body> tag of the above program with the code given in the table to get the respective outputs −
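Since every row of that table follows the same "fa fa-<name> custom" class pattern, the body markup can be generated mechanically; a small sketch (the icon names below are just a sample of the set, not the full list):

```python
# Generate <body> markup for a few Font Awesome icons using the
# "fa fa-<name> custom" class pattern shown in the example above.
icons = ["adjust", "anchor", "archive", "area-chart"]

rows = ['<i class="fa fa-{} custom"></i>'.format(name) for name in icons]
body = "<body>\n   " + "\n   ".join(rows) + "\n</body>"
print(body)
```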
Get similar words suggestion using Enchant in Python - GeeksforGeeks | 14 Apr, 2018
For the given user input, get similar words through Enchant module.
Enchant is a module in Python which is used to check the spelling of a word and give suggestions to correct it. It also gives antonyms and synonyms of words, and checks whether a word exists in the dictionary or not. Other dictionaries can also be added, such as ("en_UK"), ("en_CA"), ("en_GB") etc.
To install enchant :
pip install pyenchant
Examples :
Input : Helo
Output : Hello, Help, Hero, Helot, Hole
Input : Trth
Output : Truth, Trash, Troth, Trench
Below is the implementation :
# Python program to print similar
# words using the Enchant module

# Importing the Enchant module
import enchant

# Using the 'en_US' dictionary
d = enchant.Dict("en_US")

# Taking input from the user
word = input("Enter word: ")

# d.check(word) returns True if the word is spelled
# correctly (the result is not used further here)
d.check(word)

# Will suggest similar words
# from the given dictionary
print(d.suggest(word))
Output :
Enter word: aple
['pale', 'ale', 'ape', 'maple', 'ample', 'apple', 'plea', 'able', 'apse']
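If installing pyenchant is not an option, the standard library's difflib can produce similar "close match" suggestions against a word list of your own. A sketch (the word list here is a made-up stand-in for a real dictionary file):

```python
import difflib

# A tiny stand-in dictionary; a real application would load a proper
# word list (e.g. /usr/share/dict/words on many Unix systems).
words = ["pale", "ale", "ape", "maple", "ample",
         "apple", "plea", "able", "apse"]

# get_close_matches ranks candidates by their similarity ratio
# to the misspelled input and returns the n best ones.
matches = difflib.get_close_matches("aple", words, n=3)
print(matches)
```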
TreeSet headSet() Method in Java With Examples - GeeksforGeeks | 01 Nov, 2021
TreeSet is one of the most important implementations of the SortedSet interface in Java that uses a tree for storage. The elements are ordered using their natural ordering, or by a Comparator provided at set creation time. This ordering must be consistent with equals if the set is to correctly implement the Set interface.
The headSet() method of the TreeSet class, present in the java.util package, is used as a limit setter for a tree set: it returns the elements strictly less than the limit given in the method's parameter, in sorted order, excluding the limit element itself.
Syntax:
head_set = (TreeSet)tree_set.headSet(Object element)
Parameters: The parameter element is of the type of the tree set and is the upper bound: the limit up to which the tree set returns values, excluding the element itself.
Return Value: The method returns the portion of the set whose values are strictly less than the element mentioned in the parameter, in sorted order.
Now we will discuss different scenarios while implementing the headSet() method of the TreeSet class:
Case 1: In a sorted TreeSet
Case 2-A: In an unsorted TreeSet
Case 2-B: In an unsorted TreeSet but with String type elements
Example 1:
Java
// Java program to illustrate the headSet() method
// of the TreeSet class in a sorted TreeSet

// Importing required classes
import java.io.*;
import java.util.Iterator;
import java.util.TreeSet;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an empty TreeSet by
        // declaring an object of the TreeSet class
        TreeSet<Integer> tree_set = new TreeSet<Integer>();

        // Adding the elements
        // using the add() method
        tree_set.add(1);
        tree_set.add(2);
        tree_set.add(3);
        tree_set.add(4);
        tree_set.add(5);
        tree_set.add(10);
        tree_set.add(20);
        tree_set.add(30);
        tree_set.add(40);
        tree_set.add(50);

        // Creating the headSet tree
        TreeSet<Integer> head_set = new TreeSet<Integer>();

        // Limiting the values to those strictly less than 30
        head_set = (TreeSet<Integer>)tree_set.headSet(30);

        // Creating an Iterator
        Iterator iterate;
        iterate = head_set.iterator();

        // Displaying the tree set data
        System.out.println(
            "The resultant values till head set: ");

        // Holds true till there is a single element
        // remaining in the object
        while (iterate.hasNext()) {

            // Iterating through the headSet
            // using the next() method
            System.out.println(iterate.next() + " ");
        }
    }
}
The resultant values till head set:
1
2
3
4
5
10
20
Example 2-A:
Java
// Java program to illustrate the headSet() method
// of the TreeSet class in an unsorted TreeSet

// Importing required classes
import java.io.*;
import java.util.Iterator;
import java.util.TreeSet;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an empty TreeSet
        TreeSet<Integer> tree_set = new TreeSet<Integer>();

        // Adding the elements
        // using the add() method
        tree_set.add(9);
        tree_set.add(2);
        tree_set.add(100);
        tree_set.add(40);
        tree_set.add(50);
        tree_set.add(10);
        tree_set.add(20);
        tree_set.add(30);
        tree_set.add(15);
        tree_set.add(16);

        // Creating the headSet tree
        TreeSet<Integer> head_set = new TreeSet<Integer>();

        // Limiting the values to those strictly less than 30
        head_set = (TreeSet<Integer>)tree_set.headSet(30);

        // Creating an Iterator
        Iterator iterate;
        iterate = head_set.iterator();

        // Displaying the tree set data
        System.out.println("The resultant values till head set: ");

        // Iterating through the headSet
        while (iterate.hasNext()) {

            // Printing the elements
            System.out.println(iterate.next() + " ");
        }
    }
}
The resultant values till head set:
2
9
10
15
16
20
Example 2-B:
Java
// Java code to illustrate the headSet() method of the TreeSet class
// in an unsorted TreeSet, but with String type elements

// Importing required classes
import java.io.*;
import java.util.Iterator;
import java.util.TreeSet;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an empty TreeSet
        TreeSet<String> tree_set = new TreeSet<String>();

        // Adding the elements using add()
        tree_set.add("Welcome");
        tree_set.add("To");
        tree_set.add("Geek");
        tree_set.add("4");
        tree_set.add("Geeks");
        tree_set.add("TreeSet");

        // Creating the headSet tree
        TreeSet<String> head_set = new TreeSet<String>();

        // Limiting the values to those strictly less than "To"
        head_set = (TreeSet<String>)tree_set.headSet("To");

        // Creating an Iterator
        Iterator iterate;
        iterate = head_set.iterator();

        // Displaying the tree set data
        System.out.println(
            "The resultant values till head set: ");

        // Iterating through the headSet
        while (iterate.hasNext()) {

            // Printing elements using the next() method
            System.out.println(iterate.next() + " ");
        }
    }
}
The resultant values till head set:
4
Geek
Geeks
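The strictly-less-than semantics shown in all three examples are easy to mirror in other languages. For instance, with Python's bisect module over a sorted list (a sketch for illustration, not part of the Java API; head_set is a hypothetical helper name):

```python
import bisect

# headSet(element) returns everything strictly less than element,
# so bisect_left finds the matching cut point in a sorted sequence.
def head_set(sorted_values, element):
    return sorted_values[:bisect.bisect_left(sorted_values, element)]

# Same data as Example 2-A above.
tree_set = sorted([9, 2, 100, 40, 50, 10, 20, 30, 15, 16])
print(head_set(tree_set, 30))  # [2, 9, 10, 15, 16, 20]
```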
Topic Modelling in Python with NLTK and Gensim | by Susan Li | Towards Data Science | In this post, we will learn how to identify which topic is discussed in a document, a task called topic modelling. In particular, we will cover Latent Dirichlet Allocation (LDA): a widely used topic modelling technique. And we will apply LDA to convert a set of research papers to a set of topics.
Research paper topic modelling is an unsupervised machine learning method that helps us discover hidden semantic structures in a paper and allows us to learn topic representations of papers in a corpus. The model can be applied to any kind of labels on documents, such as tags on posts on a website.
We pick the number of topics ahead of time even if we’re not sure what the topics are.
Each document is represented as a distribution over topics.
Each topic is represented as a distribution over words.
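The two bullet points above can be pictured concretely: both distributions are just rows of probabilities that each sum to 1. A toy illustration (all numbers and the three-word vocabulary are made up):

```python
# Toy LDA-style distributions (made-up numbers): one row per document
# over topics, and one row per topic over a vocabulary of 3 words.
doc_topics = [
    [0.7, 0.2, 0.1],   # document 0 is mostly about topic 0
    [0.1, 0.1, 0.8],   # document 1 is mostly about topic 2
]
topic_words = [
    [0.5, 0.3, 0.2],   # topic 0 over the vocabulary, e.g. ["graph", "node", "edge"]
    [0.2, 0.2, 0.6],
    [0.1, 0.6, 0.3],
]

# Every row is a valid probability distribution.
for row in doc_topics + topic_words:
    assert abs(sum(row) - 1.0) < 1e-9
print("all rows sum to 1")
```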
The research paper text data is just a bunch of unlabeled texts and can be found here.
We use the following function to clean our texts and return a list of tokens:
import spacy
spacy.load('en')
from spacy.lang.en import English
parser = English()

def tokenize(text):
    lda_tokens = []
    tokens = parser(text)
    for token in tokens:
        if token.orth_.isspace():
            continue
        elif token.like_url:
            lda_tokens.append('URL')
        elif token.orth_.startswith('@'):
            lda_tokens.append('SCREEN_NAME')
        else:
            lda_tokens.append(token.lower_)
    return lda_tokens
We use NLTK’s Wordnet to find the meanings of words, synonyms, antonyms, and more. In addition, we use WordNetLemmatizer to get the root word.
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def get_lemma(word):
    lemma = wn.morphy(word)
    if lemma is None:
        return word
    else:
        return lemma

from nltk.stem.wordnet import WordNetLemmatizer

def get_lemma2(word):
    return WordNetLemmatizer().lemmatize(word)
Filter out stop words:
nltk.download('stopwords')
en_stop = set(nltk.corpus.stopwords.words('english'))
Now we can define a function to prepare the text for topic modelling:
def prepare_text_for_lda(text):
    tokens = tokenize(text)
    tokens = [token for token in tokens if len(token) > 4]
    tokens = [token for token in tokens if token not in en_stop]
    tokens = [get_lemma(token) for token in tokens]
    return tokens
Open up our data, read line by line, for each line, prepare text for LDA, then add to a list.
Now we can see how our text data are converted:
import random

text_data = []
with open('dataset.csv') as f:
    for line in f:
        tokens = prepare_text_for_lda(line)
        if random.random() > .99:
            print(tokens)
            text_data.append(tokens)
['sociocrowd', 'social', 'network', 'base', 'framework', 'crowd', 'simulation']
['detection', 'technique', 'clock', 'recovery', 'application']
['voltage', 'syllabic', 'companding', 'domain', 'filter']
['perceptual', 'base', 'coding', 'decision']
['cognitive', 'mobile', 'virtual', 'network', 'operator', 'investment', 'pricing', 'supply', 'uncertainty']
['clustering', 'query', 'search', 'engine']
['psychological', 'engagement', 'enterprise', 'starting', 'london']
['10-bit', '200-ms', 'digitally', 'calibrate', 'pipelined', 'using', 'switching', 'opamps']
['optimal', 'allocation', 'resource', 'distribute', 'information', 'network']
['modeling', 'synaptic', 'plasticity', 'within', 'network', 'highly', 'accelerate', 'i&f', 'neuron']
['tile', 'interleave', 'multi', 'level', 'discrete', 'wavelet', 'transform']
['security', 'cross', 'layer', 'protocol', 'wireless', 'sensor', 'network']
['objectivity', 'industrial', 'exhibit']
['balance', 'packet', 'discard', 'improve', 'performance', 'network']
['bodyqos', 'adaptive', 'radio', 'agnostic', 'sensor', 'network']
['design', 'reliability', 'methodology']
['context', 'aware', 'image', 'semantic', 'extraction', 'social']
['computation', 'unstable', 'limit', 'cycle', 'large', 'scale', 'power', 'system', 'model']
['photon', 'density', 'estimation', 'using', 'multiple', 'importance', 'sampling']
['approach', 'joint', 'blind', 'space', 'equalization', 'estimation']
['unify', 'quadratic', 'programming', 'approach', 'mix', 'placement']
First, we are creating a dictionary from the data, then convert to bag-of-words corpus and save the dictionary and corpus for future use.
from gensim import corpora

dictionary = corpora.Dictionary(text_data)
corpus = [dictionary.doc2bow(text) for text in text_data]

import pickle
pickle.dump(corpus, open('corpus.pkl', 'wb'))
dictionary.save('dictionary.gensim')
We are asking LDA to find 5 topics in the data:
import gensim

NUM_TOPICS = 5
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = NUM_TOPICS, id2word=dictionary, passes=15)
ldamodel.save('model5.gensim')

topics = ldamodel.print_topics(num_words=4)
for topic in topics:
    print(topic)
(0, '0.034*"processor" + 0.019*"database" + 0.019*"issue" + 0.019*"overview"')
(1, '0.051*"computer" + 0.028*"design" + 0.028*"graphics" + 0.028*"gallery"')
(2, '0.050*"management" + 0.027*"object" + 0.027*"circuit" + 0.027*"efficient"')
(3, '0.019*"cognitive" + 0.019*"radio" + 0.019*"network" + 0.019*"distribute"')
(4, '0.029*"circuit" + 0.029*"system" + 0.029*"rigorous" + 0.029*"integration"')
Topic 0 includes words like "processor", "database", "issue" and "overview", which sounds like a topic related to databases. Topic 1 includes words like "computer", "design", "graphics" and "gallery", so it is definitely a graphic-design related topic. Topic 2 includes words like "management", "object", "circuit" and "efficient", which sounds like a corporate-management related topic. And so on.
With LDA, we can see that different documents deal with different topics, and the distinctions are obvious.
Let’s try a new document:
new_doc = 'Practical Bayesian Optimization of Machine Learning Algorithms'
new_doc = prepare_text_for_lda(new_doc)
new_doc_bow = dictionary.doc2bow(new_doc)
print(new_doc_bow)
print(ldamodel.get_document_topics(new_doc_bow))
[(38, 1), (117, 1)]
[(0, 0.06669136), (1, 0.40170625), (2, 0.06670282), (3, 0.39819494), (4, 0.066704586)]
My new document is about machine learning algorithms, and the LDA output shows that topic 1 has the highest probability assigned, with topic 3 second highest. We agreed!
Remember that the above 5 probabilities add up to 1.
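You can verify this directly from the numbers printed above: the per-document topic probabilities form a distribution, so they sum (up to floating-point error) to 1, and the arg-max picks the dominant topic:

```python
# The topic probabilities returned for a document form a distribution:
# non-negative values that sum (up to floating-point error) to 1.
doc_topics = [(0, 0.06669136), (1, 0.40170625), (2, 0.06670282),
              (3, 0.39819494), (4, 0.066704586)]

total = sum(prob for _, prob in doc_topics)
print(total)  # ~1.0

best_topic = max(doc_topics, key=lambda t: t[1])[0]
print(best_topic)  # topic 1 has the highest probability
```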
Now we are asking LDA to find 3 topics in the data:
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = 3, id2word=dictionary, passes=15)
ldamodel.save('model3.gensim')

topics = ldamodel.print_topics(num_words=4)
for topic in topics:
    print(topic)
(0, '0.029*"processor" + 0.016*"management" + 0.016*"aid" + 0.016*"algorithm"')
(1, '0.026*"radio" + 0.026*"network" + 0.026*"cognitive" + 0.026*"efficient"')
(2, '0.029*"circuit" + 0.029*"distribute" + 0.016*"database" + 0.016*"management"')
We can also find 10 topics:
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = 10, id2word=dictionary, passes=15)
ldamodel.save('model10.gensim')

topics = ldamodel.print_topics(num_words=4)
for topic in topics:
    print(topic)
(0, '0.055*"database" + 0.055*"system" + 0.029*"technical" + 0.029*"recursive"')
(1, '0.038*"distribute" + 0.038*"graphics" + 0.038*"regenerate" + 0.038*"exact"')
(2, '0.055*"management" + 0.029*"multiversion" + 0.029*"reference" + 0.029*"document"')
(3, '0.046*"circuit" + 0.046*"object" + 0.046*"generation" + 0.046*"transformation"')
(4, '0.008*"programming" + 0.008*"circuit" + 0.008*"network" + 0.008*"surface"')
(5, '0.061*"radio" + 0.061*"cognitive" + 0.061*"network" + 0.061*"connectivity"')
(6, '0.085*"programming" + 0.008*"circuit" + 0.008*"subdivision" + 0.008*"management"')
(7, '0.041*"circuit" + 0.041*"design" + 0.041*"processor" + 0.041*"instruction"')
(8, '0.055*"computer" + 0.029*"efficient" + 0.029*"channel" + 0.029*"cooperation"')
(9, '0.061*"stimulation" + 0.061*"sensor" + 0.061*"retinal" + 0.061*"pixel"')
pyLDAvis is designed to help users interpret the topics in a topic model that has been fit to a corpus of text data. The package extracts information from a fitted LDA topic model to inform an interactive web-based visualization.
Visualizing 5 topics:
dictionary = gensim.corpora.Dictionary.load('dictionary.gensim')
corpus = pickle.load(open('corpus.pkl', 'rb'))
lda = gensim.models.ldamodel.LdaModel.load('model5.gensim')

import pyLDAvis.gensim
lda_display = pyLDAvis.gensim.prepare(lda, corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display)
Saliency: a measure of how much the term tells you about the topic.
Relevance: a weighted average of the probability of the word given the topic and the same probability normalized by the overall probability of the word (the lift).
The size of the bubble measures the importance of the topics, relative to the data.
First, we got the most salient terms, means terms mostly tell us about what’s going on relative to the topics. We can also look at individual topic.
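To my understanding, the relevance ranking pyLDAvis uses (from the Sievert & Shirley 2014 paper) is a weighted combination of the log topic-conditional probability and the log lift; a minimal sketch, where relevance() is a hypothetical helper and the probabilities are made-up numbers:

```python
import math

# relevance = lam * log p(w|t) + (1 - lam) * log(p(w|t) / p(w)).
# lam = 1 ranks purely by in-topic probability; lam = 0 purely by lift,
# which penalizes words that are common across the whole corpus.
def relevance(p_w_given_t, p_w, lam):
    return lam * math.log(p_w_given_t) + (1 - lam) * math.log(p_w_given_t / p_w)

# A globally common word (high p_w) scores lower as lam decreases.
print(relevance(0.05, 0.04, lam=1.0))
print(relevance(0.05, 0.04, lam=0.3))
```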
Visualizing 3 topics:
lda3 = gensim.models.ldamodel.LdaModel.load('model3.gensim')
lda_display3 = pyLDAvis.gensim.prepare(lda3, corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display3)
Visualizing 10 topics:
lda10 = gensim.models.ldamodel.LdaModel.load('model10.gensim')
lda_display10 = pyLDAvis.gensim.prepare(lda10, corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display10)
When we have 5 or 10 topics, we can see that certain topics are clustered together; this indicates similarity between topics. What a nice way to visualize what we have done thus far!
Try it out, find a text dataset, remove the label if it is labeled, and build a topic model yourself!
Source code can be found on Github. I look forward to hearing any feedback or questions.
Data Extraction. Motion Data Domain (Mixamo FBX) | by Rahuldubey | Towards Data Science | The applications of machine learning and deep learning models are emerging every day, and a paramount question arises for a beginner: "Where to start?" As a newcomer in the Data Science field, the mind juggles between choices such as NLP, Computer Vision or anything else. Yet the application of any ML/DL algorithm follows the same pipeline: data extraction, cleaning, model training, evaluation, model selection, deployment. This post ain't different from others, yet it is curated towards deriving data for the Animation or Healthcare informatics fields.
Motion data is complex in nature, as it is a hierarchically linked structure just like a graph, where each data point or node is a joint and the links or edges to other joints are bones. The orientation and location of the bones and data points describe the pose, which may vary as the number of frames increases and the captured or simulated subject changes its position.
By not wasting any more time, I’ll start with script itself. It’s broken down in several steps, but if you want, you can directly jump to the script code given in GitHub link below.
github.com
Before moving on to further steps, I suggest you download a sample Mixamo (.fbx) file from the link provided below. Put this file into the "regular" folder.
www.mixamo.com
I have also provided a sample file inside the folder to test given script.
First we import the following libraries. Remember, "bpy" is a Blender library which can only be accessed from within Blender. Hence, write the script in Blender itself.
#Library Imports
import bpy
import os
import time
import sys
import json
from mathutils import Vector
import numpy as np
Next, we set the following setting variables for I/O directory path.
#Settings
#This is the main file which loads with clear 'Scene' setting
HOME_FILE_PATH = os.path.abspath('homefile.blend')
MIN_NR_FRAMES = 64
RESOLUTION = (512, 512)

#Crucial joints sufficient for visualisation
#FIX ME - Add more joints if desirable for MixamoRig
BASE_JOINT_NAMES = ['Head', 'Neck',
                    'RightArm', 'RightForeArm', 'RightHand',
                    'LeftArm', 'LeftForeArm', 'LeftHand',
                    'Hips', 'RightUpLeg', 'RightLeg', 'RightFoot',
                    'LeftUpLeg', 'LeftLeg', 'LeftFoot',
                    ]

#Source directory where .fbx exist
SRC_DATA_DIR = 'regular'

#Output directory where .fbx to JSON dict will be stored
OUT_DATA_DIR = 'fbx2json'

#Final directory where NPY files will be stored
FINAL_DIR_PATH = 'json2npy'
In the above code snippet, we have set the RESOLUTION as well as the number of joints that will be used for data extraction. You can limit the number of frames as well, to have a uniform number of frames for each animation file. The final processed data will be stored in the "json2npy" folder.
Once the BASE_JOINT_NAMES are selected, we will create a list of full joint names to access each element of the rig, which is by default named "MixamoRig".
#Number of joints to be used from MixamoRig
joint_names = ['mixamorig:' + x for x in BASE_JOINT_NAMES]
First, the .fbx files are converted to JSON object dictionaries, with each dictionary containing per-frame joint location information. This is done using the function given below.
def fbx2jointDict():

    #Remove 'Cube' object if exists in the scene
    if bpy.data.objects.get('Cube') is not None:
        cube = bpy.data.objects['Cube']
        bpy.data.objects.remove(cube)

    #Intensify Light Point in the scene
    if bpy.data.objects.get('Light') is not None:
        bpy.data.objects['Light'].data.energy = 2
        bpy.data.objects['Light'].data.type = 'POINT'

    #Set resolution and its rendering percentage
    bpy.data.scenes['Scene'].render.resolution_x = RESOLUTION[0]
    bpy.data.scenes['Scene'].render.resolution_y = RESOLUTION[1]
    bpy.data.scenes['Scene'].render.resolution_percentage = 100

    #Base file for blender
    bpy.ops.wm.save_as_mainfile(filepath=HOME_FILE_PATH)

    #Get animation (.fbx) file paths
    anims_path = os.listdir(SRC_DATA_DIR)

    #Make OUT_DATA_DIR
    if not os.path.exists(OUT_DATA_DIR):
        os.makedirs(OUT_DATA_DIR)

    for anim_name in anims_path:

        anim_file_path = os.path.join(SRC_DATA_DIR, anim_name)
        save_dir = os.path.join(OUT_DATA_DIR, anim_name.split('.')[0], 'JointDict')

        #Make save_dir
        if not os.path.exists(save_dir):
            os.makedirs(save_dir)

        #Load HOME_FILE and .fbx file
        bpy.ops.wm.read_homefile(filepath=HOME_FILE_PATH)
        bpy.ops.import_scene.fbx(filepath=anim_file_path)

        #End Frame Index for .fbx file
        frame_end = bpy.data.actions[0].frame_range[1]

        for i in range(int(frame_end) + 1):

            bpy.context.scene.frame_set(i)

            bone_struct = bpy.data.objects['Armature'].pose.bones
            armature = bpy.data.objects['Armature']

            out_dict = {'pose_keypoints_3d': []}

            for name in joint_names:
                global_location = armature.matrix_world @ bone_struct[name].matrix @ Vector((0, 0, 0))
                l = [global_location[0], global_location[1], global_location[2]]
                out_dict['pose_keypoints_3d'].extend(l)

            save_path = os.path.join(save_dir, '%04d_keypoints.json' % i)
            with open(save_path, 'w') as f:
                json.dump(out_dict, f)

In the function above, we first remove pre-rendered objects like "Cube", which is present by default when Blender is opened. Then we adjust the "Light" object settings to increase its energy and set its type to "POINT".
These objects can be accessed using "bpy.data.objects[name of object]" to manipulate the data related to them. We also set the resolution settings using "bpy.data.scenes[name of scene]" to control scene rendering. Once that is done, we save the file as the main blend file. All files in the source directory ending with ".fbx" are then listed and loaded in a loop, where each iteration extracts the global location of the Armature and its bones. The Armature object gives us the location matrix and pose structure for each frame, which we save as a dictionary of bones and their locations in a JSON object file.
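To make the stored format concrete, here is a small hedged sketch (not part of the original script, using synthetic values) of how one frame's flat 'pose_keypoints_3d' list round-trips back into a per-joint array:

```python
import json
import numpy as np

# One frame as the script stores it: a flat list of x, y, z values
frame = {'pose_keypoints_3d': [float(v) for v in range(15 * 3)]}
text = json.dumps(frame)  # what gets written to a *_keypoints.json file

# Reading it back, three consecutive values form one joint's coordinates
joints = np.array(json.loads(text)['pose_keypoints_3d']).reshape((-1, 3))
print(joints.shape)  # (15, 3): one row of x, y, z per joint
```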
Finally, the jointDict2npy() function collects the JSON object files of each animation and concatenates them, representing each animation as a matrix of frames and locations, just like images.
def jointDict2npy():

    json_dir = OUT_DATA_DIR
    npy_dir = FINAL_DIR_PATH

    if not os.path.exists(npy_dir):
        os.makedirs(npy_dir)

    anim_names = os.listdir(json_dir)

    for anim_name in anim_names:

        #Must match the folder name created by fbx2jointDict()
        files_path = os.path.join(json_dir, anim_name, 'JointDict')
        #Sort so frames are stacked in chronological order
        frame_files = sorted(os.listdir(files_path))

        motion = []

        for frame_file in frame_files:
            file_path = os.path.join(files_path, frame_file)
            with open(file_path) as f:
                info = json.load(f)
                joint = np.array(info['pose_keypoints_3d']).reshape((-1, 3))
            motion.append(joint[:15, :])

        motion = np.stack(motion, axis=2)
        save_path = os.path.join(npy_dir, anim_name)

        if not os.path.exists(save_path):
            os.makedirs(save_path)

        print(save_path)
        np.save(os.path.join(save_path, '{i}.npy'.format(i=anim_name)), motion)

The function above saves each animation as a ".npy" file containing a 3D data array (tensor) formatted as (NumberOfJoints, NumberOfAxes, NumberOfFrames). ".npy" is an efficient format which stores the data in a binary representation. An alternative is to save compressed ".npz" files using "savez_compressed()".
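The frame-stacking step and the .npy/.npz trade-off can be sketched in isolation, with synthetic data standing in for the real JSON files (this is my own illustration, not the original pipeline):

```python
import os
import tempfile
import numpy as np

# Fake per-frame joint arrays: 15 joints x 3 axes, for 10 frames
frames = [np.full((15, 3), i, dtype=np.float32) for i in range(10)]

# Stack along a new last axis -> (NumberOfJoints, NumberOfAxes, NumberOfFrames)
motion = np.stack(frames, axis=2)
print(motion.shape)  # (15, 3, 10)

out_dir = tempfile.mkdtemp()
np.save(os.path.join(out_dir, 'walk.npy'), motion)                     # raw binary
np.savez_compressed(os.path.join(out_dir, 'walk.npz'), motion=motion)  # compressed

reloaded = np.load(os.path.join(out_dir, 'walk.npy'))
print(np.array_equal(reloaded, motion))  # True
```

The compressed .npz file trades a little load time for disk space, which matters once many animations are exported.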
Execute the script using a Command Prompt running as Administrator. Type the following command to execute the script:
CMD COMMAND : blender --background -P fbx2npy.py

if __name__ == '__main__':

    #Convert .fbx files to JSON dict
    fbx2jointDict()

    #Convert JSON dict to NPY
    jointDict2npy()
The final results of processing will look like the images given below. These are 5 frames selected from the video motion file. You can use the "visualise_frame.py" file to visualise the results on the sample data.
In this post we learnt how to process animation data in .fbx format to extract the locations of each joint on a per-frame basis. Many applications in the wild require motion data to learn tasks like Pose Estimation, Motion Retargeting, etc. In the next post, I'll share how this type of data can be normalised so that it can be used by deep learning models. Till then, see ya amigos!!!
SQL Tryit Editor v1.6 | Edit the SQL Statement, and click "Run SQL" to see the result.
This SQL-Statement is not supported in the WebSQL Database.
The example still works, because it uses a modified version of SQL.
Your browser does not support WebSQL.
You are now using a light-version of the Try-SQL Editor, with a read-only Database.
If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time.
Our Try-SQL Editor uses WebSQL to demonstrate SQL.
A Database-object is created in your browser, for testing purposes.
You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the "Restore Database" button.
WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object.
WebSQL is supported in Chrome, Safari, Opera, and Edge (79).
If you use another browser you will still be able to use our Try SQL Editor, but a different version, using a server-based ASP application, with a read-only Access Database, where users are not allowed to make any changes to the data.
LocalDateTime now() Method in Java with Examples - GeeksforGeeks | 21 Jan, 2019
In LocalDateTime class, there are three types of now() method depending upon the parameters passed to it.
The now() method of the LocalDateTime class is used to obtain the current date-time from the system clock in the default time-zone. This method returns a LocalDateTime based on the system clock with the default time-zone.
Syntax:
public static LocalDateTime now()
Parameters: This method accepts no parameter.
Return value: This method returns the current date-time using the system clock.
Below programs illustrate the now() method:

Program 1:
// Java program to demonstrate
// LocalDateTime.now() method

import java.time.*;

public class GFG {
    public static void main(String[] args) {
        // create an LocalDateTime object
        LocalDateTime lt = LocalDateTime.now();

        // print result
        System.out.println("LocalDateTime : " + lt);
    }
}
LocalDateTime : 2019-01-21T05:47:08.644
The now(Clock clock) method of the LocalDateTime class is used to return the current date-time based on the specified clock passed as a parameter.
Syntax:
public static LocalDateTime now(Clock clock)
Parameters: This method accepts clock as parameter which is the clock to use.
Return value: This method returns the current date-time.
Below programs illustrate the now() method:

Program 1:
// Java program to demonstrate
// LocalDateTime.now() method

import java.time.*;

public class GFG {
    public static void main(String[] args) {
        // create a clock
        Clock cl = Clock.systemUTC();

        // create an LocalDateTime object using now(Clock)
        LocalDateTime lt = LocalDateTime.now(cl);

        // print result
        System.out.println("LocalDateTime : " + lt);
    }
}
LocalDateTime : 2019-01-21T05:47:20.949
The now(ZoneId zone) method of the LocalDateTime class is used to return the current date-time from the system clock in the specified time-zone passed as a parameter. Specifying the time-zone avoids dependence on the default time-zone.
Syntax:
public static LocalDateTime now(ZoneId zone)
Parameters: This method accepts zone as parameter which is the zone to use.
Return value: This method returns the current date-time.
Below programs illustrate the now() method:

Program 1:
// Java program to demonstrate
// LocalDateTime.now() method

import java.time.*;

public class GFG {
    public static void main(String[] args) {
        // create a zone ID
        ZoneId zid = ZoneId.of("Europe/Paris");

        // create an LocalDateTime object using now(ZoneId)
        LocalDateTime lt = LocalDateTime.now(zid);

        // print result
        System.out.println("LocalDateTime : " + lt);
    }
}
LocalDateTime : 2019-01-21T06:47:22.756
References:
https://docs.oracle.com/javase/10/docs/api/java/time/LocalDateTime.html#now()
https://docs.oracle.com/javase/10/docs/api/java/time/LocalDateTime.html#now(java.time.Clock)
https://docs.oracle.com/javase/10/docs/api/java/time/LocalDateTime.html#now(java.time.ZoneId)
How to overload Python comparison operators? | Python has magic methods to define overloaded behaviour of operators. The comparison operators (<, <=, >, >=, == and !=) can be overloaded by providing definition to __lt__, __le__, __gt__, __ge__, __eq__ and __ne__ magic methods.
The following program overloads the == and >= operators to compare objects of the distance class.
class distance:
    def __init__(self, x=5, y=5):
        self.ft = x
        self.inch = y

    def __eq__(self, other):
        if self.ft == other.ft and self.inch == other.inch:
            return "both objects are equal"
        else:
            return "both objects are not equal"

    def __ge__(self, other):
        in1 = self.ft * 12 + self.inch
        in2 = other.ft * 12 + other.inch
        if in1 >= in2:
            return "first object greater than or equal to other"
        else:
            return "first object smaller than other"

d1 = distance(5, 5)
d2 = distance()
print(d1 == d2)
d3 = distance()
d4 = distance(6, 10)
print(d1 == d2)
d5 = distance(3, 11)
d6 = distance()
print(d5 >= d6)
The result of the above program shows the overloaded use of the == and >= comparison operators.
both objects are equal
both objects are equal
first object smaller than other
C# Program to search for a string in an array of strings | Use the Linq Contains() method to search for a specific string in an array of strings.
string[] arr = { "Bag", "Pen", "Pencil"};
Now, store the string you want to search for in a string variable.
string str = "Pen";
Use the Contains() method to search the above string.
arr.AsQueryable().Contains(str);
Let us see the entire example.
using System;
using System.Linq;
using System.Collections.Generic;
class Demo {
static void Main() {
string[] arr = { "Bag", "Pen", "Pencil"};
string str = "Pen";
bool res = arr.AsQueryable().Contains(str);
Console.WriteLine("String Pen is in the array? "+res);
}
}
String Pen is in the array? True
Set the Background Image Position with CSS | To set the background image position, use the background-position property. You can try to run the following code to learn how to work with the background-position property:
It sets the background image position 80 pixels away from the left side:
<html>
<head>
<style>
body {
background-image: url("/css/images/css.jpg");
background-position:80px;
}
</style>
</head>
<body>
         <p>Tutorials point</p>
</body>
</html>
Transform data and create beautiful visualisation using ggplot2 | by Shubham Gupta | Towards Data Science | The purpose of creating visualisations is to explore data, find hidden trends and communicate trends. To create any visualisation we need a question that we wish to explore and we need the data which can help us answer the question. The question we will be exploring is “How has mobility pattern of people changed due to COVID-19?” and the data we will be using compares changes in baseline mobility trends at different places due to COVID-19 and is provided by Google here.
Following this tutorial will help you understand how to transform data in R and plot a stacked bar chart. Here’s the end result:
Description of columns of interest:
Country code — "country_region_code"
Country name — "country_region"
Change in Retail/Recreation spaces — "retail_and_recreation_percent_avg"
Change in Grocery/Pharmacy spaces — "grocery_and_pharmacy_percent_avg"
Change in Park spaces — "parks_percent_avg"
Change in Transit station spaces — "transit_stations_percent_avg"
Change in Workplace spaces — "workplaces_percent_avg"
If you are only interested in ggplot2 customisation's, please jump to Step 3.
Our data exploration journey begins:
# Importing libraries
library("dplyr")
library("magrittr")
library("plotly")
library("ggplot2")
library("tidyr")
library("cowplot")
# Reading downloaded csv
data <- read.csv(file = 'Global_Mobility_Report.csv')

# Print column names in dataframe
colnames(data)
We will be creating visualisation for European countries hence we will have to filter other countries out.
# Required country codes
country_region_code <- c("GB","SK","SI","SE","RO","PT","PL","NO","NL","MT","LV","LU","LT","IT","IE","HU","HR","GR","FR","FI","ES","EE","DK","DE","CZ","BG","BE","AT")

# Creating a subset using required country codes
subset = data[data$country_region_code %in% country_region_code, ]

# Check countries in subset
unique(subset$country_region)
In our data, we have changes in mobility trends listed for each day but we want to plot the change for entire period so we will have to aggregate data. We will do this by grouping using country_region_code and calculating mean for each of our mobility categories.
# Aggregating data to get average percent change
aggdata <- subset %>%
  group_by(country_region_code, country_region) %>%
  summarize(retail_and_recreation_percent_avg = mean(retail_and_recreation_percent_change_from_baseline, na.rm = TRUE),
            grocery_and_pharmacy_percent_avg = mean(grocery_and_pharmacy_percent_change_from_baseline, na.rm = TRUE),
            parks_percent_avg = mean(parks_percent_change_from_baseline, na.rm = TRUE),
            transit_stations_percent_avg = mean(transit_stations_percent_change_from_baseline, na.rm = TRUE),
            workplaces_percent_avg = mean(workplaces_percent_change_from_baseline, na.rm = TRUE),
            residential_percent_avg = mean(residential_percent_change_from_baseline, na.rm = TRUE))

colnames(aggdata)
We will add another column, overall_mob_percent, which holds the overall change in mobility percentage so that we can sort the data from the countries with the most affected mobility to the least.
# Adding additional average change column
aggdata <- transform(aggdata, overall_mob_percent = (retail_and_recreation_percent_avg + grocery_and_pharmacy_percent_avg +
                                                     parks_percent_avg + transit_stations_percent_avg +
                                                     workplaces_percent_avg + residential_percent_avg))

colnames(aggdata)
Currently our data is stored in wide format where each category of mobility change has separate column.
We will have to transform out data to long format before plotting using gather.
# Gathering data
aggdata_tsfm <- gather(aggdata, key = "parameter", value = "percent",
                       -country_region_code, -country_region, -overall_mob_percent)

colnames(aggdata_tsfm)
Ordering data.
# Sort based on overall_mob_percent
aggdata_tsfm <- aggdata_tsfm[order(aggdata_tsfm$overall_mob_percent), ]
Converting country_region to factor so that ordering is preserved in our plot.
# Converting to factor for preserving sequence in our visualisation
aggdata_tsfm$country_region <- factor(aggdata_tsfm$country_region,
                                      levels = unique(aggdata_tsfm$country_region))
Here, aggdata_tsfm is our dataframe, x axis has countries, y axis has percent change in mobility values and we will fill stacked bar chart with our different place categories. We’ve set position to stack to create a stacked bar chart. You could set position to dodge to create side by side bar chart. To give our bar blocks a black outline we’ve set color to black.
# Creating the plot
mainplot <- ggplot(aggdata_tsfm, aes(width = 0.9, fill = parameter, y = percent, x = country_region)) +
  geom_bar(position = "stack", stat = "identity", color = "black")
Adding horizontal line to differentiate between -ve, +ve y axis since our data has positive as well as negative values along y axis.
# Adding line to differentiate -ve and +ve y axis
mainplot <- mainplot + geom_hline(yintercept = 0, color = "white", size = 2)
Adding y ticks because by default the number of ticks is very less. Our ticks will scale from -250 to 100 increasing by 50.
# Adding y ticks
mainplot <- mainplot + scale_y_continuous(breaks = round(seq(-250, max(100), by = 50), 1))
We will customise legend of our plot to change color, label and order.
Instead of struggling to decide which colour palette to use, you can use ColorBrewer which provides nice colour palettes for both qualitative and quantitative data which are also optimised for colour blind people.
Once we have selected the colours, we can use them by setting values parameter. To change label names in our legend, we can set labels. To change sequence of our labels we use breaks to specify required order.
# Changes to legend
mainplot <- mainplot + scale_fill_manual(name = "Mobility categories",
    values = c("retail_and_recreation_percent_avg" = "#8dd3c7",
               "grocery_and_pharmacy_percent_avg" = "#ffffb3",
               "parks_percent_avg" = "#bebada",
               "transit_stations_percent_avg" = "#fb8072",
               "workplaces_percent_avg" = "#80b1d3",
               "residential_percent_avg" = "#fdb462"),
    labels = c("retail_and_recreation_percent_avg" = "Retail/Recreation",
               "grocery_and_pharmacy_percent_avg" = "Grocery/Pharmacy",
               "parks_percent_avg" = "Parks",
               "transit_stations_percent_avg" = "Transit stations",
               "workplaces_percent_avg" = "Workplaces",
               "residential_percent_avg" = "Residential"),
    breaks = c("residential_percent_avg",
               "workplaces_percent_avg",
               "transit_stations_percent_avg",
               "retail_and_recreation_percent_avg",
               "parks_percent_avg",
               "grocery_and_pharmacy_percent_avg"))
Changing the title of our plot.
# plot title
mainplot <- mainplot + ggtitle("Changes in movement of people due to COVID-19")
Setting the subtitle and caption of our plot.
# plot subtitle, caption
mainplot <- mainplot + ggtitle("Changes in movement of people due to COVID-19")
Setting x and y labels.
# Setting x and y labels
mainplot <- mainplot + xlab("Countries") +
  ylab("Percent change in mobility rate across different activities")
Since x axis has country names, we will rotate text to avoid overlap of text.
# x axis markings rotated
mainplot <- mainplot + theme(axis.text.x = element_text(angle = 90))
Changing the text size to improve readability.
# change text size
mainplot <- mainplot + theme(text = element_text(size = 20))
Now we can plot the chart which we showed at the beginning.
# Plotting
mainplot
Looking at the visualisation it’s easier to get inferences from data, like people’s movement at residential places has increased. Places with least negative mobility were groceries and pharmacy indicating that these places are still getting footfall but nothing like they used to.
If you wish to plot multiple charts in a grid, you can easily do it using cowplot's plot_grid. The plots resplot, parplot, recplot, groplot, traplot, and worplot are separate plots for each of our categories, created using the same methods demonstrated in Step 3 above, and we arrange them in a grid.
# Plotting a grid of plots
plot_grid(resplot, parplot, recplot, groplot, traplot, worplot,
          ncol = 2, nrow = 3)
We have seen how easy it is to create powerful visualisation’s using ggplot2 and so many ways to customise your plot. We’ve barely explored ggplot2 and it has so much more to offer. I highly recommend exploring other charts and functionalities ggplot2 has to offer.
Thanks for reading.
7 ways to load external data into Google Colab | by B. Chen | Towards Data Science | Colab (short for Colaboratory) is a free platform from Google that allows users to code in Python. Colab is essentially the Google version of a Jupyter Notebook. Some of the advantages of Colab over Jupyter include zero configuration, free access to GPUs & CPUs, and seamless sharing of code.
More and more people are using Colab to take the advantage of the high-end computing resources without being restricted by their price. Loading data is the first step in any data science project. Often, loading data into Colab require some extra setups or coding. In this article, you’ll learn the 7 common ways to load external data into Google Colab. This article is structured as follows:
Uploading file through Files explorer
Uploading file using files module
Reading a file from Github
Cloning a Github Repository
Downloading files using Linux wget command
Accessing Google Drive by mounting it locally
Loading Kaggle Datasets
You can use the upload option at the top of the Files explorer to upload any file(s) from your local machine to Google Colab.
Here is what you need to do:
Step 1: Click the Files icon to open the “Files explorer” pane
Step 2: Click the upload icon and select the file(s) you wish to upload from the “File Upload” dialog window.
Step 3: Once the upload is complete, you can read the file as you would normally. For instance, pd.read_csv('Salary_Data.csv')
Instead of clicking the GUI, you can also use Python code to upload files. You can import files module from google.colab. Then call upload() to launch a “File Upload” dialog and select the file(s) you wish to upload.
from google.colab import files

uploaded = files.upload()
Once the upload is complete, your file(s) should appear in “Files explorer” and you can read the file as you would normally.
One of the easiest ways to read data is through Github. Click on the dataset in the Github repository, then click the “Raw” button.
Copy the raw data link and pass it to the function that can take a URL. For instance, pass a raw CSV URL to Pandas read_csv():
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/BindiChen/machine-learning/master/data-analysis/001-pandad-pipe-function/data/train.csv')
You can also clone a Github repository into your Colab environment in the same way as you would in your local machine, using git clone.
!git clone https://github.com/BindiChen/machine-learning.git
Once the repository is cloned, you should be able to see its contents in “Files explorer” and you can simply read the file as you would normally.
Since Google Colab lets you do everything which you can in a locally hosted Jupyter Notebook, you can also use Linux shell commands like ls, dir, pwd, cd, etc. using !.
Among those available Linux commands, the wget allows you to download files using HTTP, HTTPS, and FTP protocols.
In its simplest form, when used without any option, wget will download the resource specified in the URL to the current directory, for instance:

!wget https://example.com/cats_and_dogs_filtered.zip
Sometimes, you may want to save the downloaded file under a different name. To do that, simply pass the -O option followed by the new name:
!wget https://example.com/cats_and_dogs_filtered.zip \ -O new_cats_and_dogs_filtered.zip
By default, wget will save files in the current working directory. To save the file to a specific location, use the -P option:
!wget https://example.com/cats_and_dogs_filtered.zip \ -P /tmp/
If you want to download a file over HTTPS from a host that has an invalid SSL certificate, you can pass the --no-check-certificate option:
!wget https://example.com/cats_and_dogs_filtered.zip \ --no-check-certificate
If you want to download multiple files at once, use the -i option followed by the path to a file containing a list of the URLs to be downloaded. Each URL needs to be on a separate line.
!wget -i dataset-urls.txt
The following is an example shows dataset-urls.txt:
http://example-1.com/dataset.ziphttps://example-2.com/train.csvhttp://example-3.com/test.csv
You can use the drive module from google.colab to mount your Google Drive to Colab.
from google.colab import drive

drive.mount('/content/drive')
Executing the above statement, you will be provided an authentication link and a text box to enter your authorization code.
Click the authentication link and follow the steps to generate your authorization code. Copy the code displayed and paste it into the text box as shown above. Once it is mounted, you should get a message like:
Mounted at /content/drive
After that, you should be able to explore the contents via “Files explorer” and read the data as you would normally.
Finally, to unmount your Google Drive:
drive.flush_and_unmount()
It is possible to download any dataset seamlessly from Kaggle into your Google Colab. Here is what you need to do:
Step 1: Download your Kaggle API Token: Go to Account and scroll down to the API section.
By clicking “Create New API Token”, a kaggle.json file will be generated and downloaded to your local machine.
Step 2: Upload kaggle.json to your Colab project: for instance, you can import files module from google.colab, and call upload() to launch a File Upload dialog and select the kaggle.json from your local machine.
Step 3: Update KAGGLE_CONFIG_DIR path to the current working directory. You can run !pwd to get the current working directory and assign the value to os.environ['KAGGLE_CONFIG_DIR'] :
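Steps 2 and 3 can be sketched as follows (the paths here are illustrative of a typical Colab session, not prescribed by Kaggle):

```python
import os

# In Colab the working directory is typically /content; here we just
# use whatever the current directory is, assuming kaggle.json was
# uploaded into it in Step 2.
work_dir = os.getcwd()
os.environ['KAGGLE_CONFIG_DIR'] = work_dir

# The Kaggle CLI will now look for the API token at this location:
print(os.path.join(os.environ['KAGGLE_CONFIG_DIR'], 'kaggle.json'))
```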
Step 4: Finally, you should be able to run the following Kaggle API to download datasets:
!kaggle competitions download -c titanic
!kaggle datasets download -d alexanderbader/forbes-billionaires-2021-30
Note for the competition dataset, the Kaggle API should be available under the Data tab
For the general dataset, the Kaggle API can be accessed as follows:
Google Colab is a great tool for individuals who want to take advantage of the capabilities of high-end computing resources (like GPUs, TPUs) without being restricted by their price.
In this article, we have gone through most of the ways you can improve your Google Colab experience by loading external data into Google Colab. I hope this article will help you to save time in learning Colab and Data Analysis.
Thanks for reading. Stay tuned if you are interested in the practical aspect of machine learning.
10 tricks for Converting numbers and strings to datetime in Pandas
Using Pandas method chaining to improve code readability
How to do a Custom Sort on Pandas DataFrame
All the Pandas shift() you should know for data analysis
When to use Pandas transform() function
Pandas concat() tricks you should know
Difference between apply() and transform() in Pandas
All the Pandas merge() you should know
Working with datetime in Pandas DataFrame
Pandas read_csv() tricks you should know
4 tricks you should know to parse date columns with Pandas read_csv()
More tutorials can be found on my Github
How to centre align action bar title in android? | This example demonstrates how do I centre align action bar title in android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="4dp"
tools:context=".MainActivity">
<TextView
android:padding="4dp"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="My Action Bar"
android:gravity="center"
android:textSize="16sp"
android:textStyle="bold"/>
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.app.ActionBar;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
getSupportActionBar().setDisplayOptions(ActionBar.DISPLAY_SHOW_CUSTOM);
getSupportActionBar().setCustomView(R.layout.activity_main);
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action
android:name="android.intent.action.MAIN" />
<category
android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
Python program to find Union of two or more Lists?
Union operation means we have to take all the elements from List1 and List2 and store all of them in another, third list.
List1::[1,2,3]
List2::[4,5,6]
List3::[1,2,3,4,5,6]
Step 1: Input two lists.
Step 2: For the union operation we concatenate the two lists with the + operator. (Note that this keeps duplicate elements; a true set union would remove them.)
# UNION OPERATION
A = list()
B = list()
n = int(input("Enter the size of the List ::"))
print("Enter the Element of first list::")
for i in range(n):
    k = int(input(""))
    A.append(k)
print("Enter the Element of second list::")
for i in range(n):
    k = int(input(""))
    B.append(k)
C = A + B
print("THE FINAL LIST IS ::>", C)
Enter the size of the List ::4
Enter the Element of first list::
23
11
22
33
Enter the Element of second list::
33
22
67
45
THE FINAL LIST IS ::> [23, 11, 22, 33, 33, 22, 67, 45]
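As the output shows, the + operator performs concatenation, so the duplicates 33 and 22 appear twice. If a true mathematical union (duplicates removed) is wanted, Python sets can be used; the sketch below reuses the sample values from the run above, and the variable names are illustrative:

```python
A = [23, 11, 22, 33]
B = [33, 22, 67, 45]

# set() removes duplicates; the | operator is set union;
# sorted() gives the result a deterministic order
union = sorted(set(A) | set(B))
print("UNION WITHOUT DUPLICATES ::>", union)
```

This prints each distinct element exactly once, at the cost of losing the original insertion order.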
Children Sum Parent | Practice | GeeksforGeeks
Given a Binary Tree, check whether all of its nodes have a value equal to the sum of the values of their child nodes.
Example 1:
Input:
10
/
10
Output: 1
Explanation: Here, every node is sum of
its left and right child.
Example 2:
Input:
1
/ \
4 3
/ \
5 N
Output: 0
Explanation: Here, 1 is the root node
and 4, 3 are its child nodes. 4 + 3 =
7 which is not equal to the value of
root node. Hence, this tree does not
satisfy the given conditions.
Your Task:
You don't need to read input or print anything. Your task is to complete the function isSumProperty() that takes the root Node of the Binary Tree as input and returns 1 if all the nodes in the tree satisfy the following properties. Else, it returns 0.
For every node, data value must be equal to the sum of data values in left and right children. Consider data value as 0 for NULL child. Also, leaves are considered to follow the property.
Expected Time Complexity: O(N).
Expected Auxiliary Space: O(Height of the Tree).
Constraints:
1 <= N <= 10^5
1 <= Data on nodes <= 10^5
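The property described above translates directly into a short recursive check. Below is a minimal Python sketch; the Node class is hypothetical and for illustration only (the judge supplies its own node structure), and it encodes the two examples from the problem statement:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def is_sum_property(root):
    # empty subtrees and leaves satisfy the property by definition
    if root is None or (root.left is None and root.right is None):
        return 1
    # treat a missing child as contributing 0
    child_sum = (root.left.data if root.left else 0) + \
                (root.right.data if root.right else 0)
    ok = (child_sum == root.data
          and is_sum_property(root.left) == 1
          and is_sum_property(root.right) == 1)
    return 1 if ok else 0

print(is_sum_property(Node(10, Node(10))))                   # Example 1
print(is_sum_property(Node(1, Node(4, Node(5)), Node(3))))   # Example 2
```

Each node is visited once (O(N) time) and the recursion depth is bounded by the tree height, matching the expected complexities.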
0
nitishkumarkushwaha1813 | 3 weeks ago
time : 0.3 sec
public static int isSumProperty(Node root) {
    if (root == null) return 1;
    if (root.left == null && root.right == null) return 1;
    int rightSum = 0;
    int leftSum = 0;
    if (root.left != null) leftSum = root.left.data;
    if (root.right != null) rightSum = root.right.data;
    if (root.data == (rightSum + leftSum)
            && (isSumProperty(root.left) == 1 && isSumProperty(root.right) == 1))
        return 1;
    else
        return 0;
}
0
adityagagtiwari | 3 weeks ago
Java solution
Time Complexity: O(N). Auxiliary Space: O(Height of the Tree)
class Tree
{
//Function to check whether all nodes of a tree have the value
//equal to the sum of their child nodes.
public static int isSumProperty(Node root)
{
// add your code here
boolean flag =sumProperty(root);
if(flag)
{
return 1;
}
return 0;
}
public static boolean sumProperty(Node root)
{
if(root==null)
return true;
if(root.left!=null && root.right!=null)
{
if(root.data==root.left.data+root.right.data)
return sumProperty(root.left)&&sumProperty(root.right);
else
return false;
}
else if(root.left!=null)
{
if(root.data==root.left.data)
return sumProperty(root.left);
else
return false;
}
else if(root.right!=null)
{
if(root.data==root.right.data)
return sumProperty(root.right);
else
return false;
}
else if(root.left==null && root.right==null)
{
return true;
}
return false;
}
}
0
1ashishchauhan2002 | 3 weeks ago
int isSumProperty(Node *root) {
    if (root == NULL) {
        return 1;
    }
    if (root->left == NULL && root->right == NULL) {
        return 1;
    }
    int sum = 0;
    if (root->left) {
        sum += root->left->data;
    }
    if (root->right) {
        sum += root->right->data;
    }
    if (sum == root->data) {
        return isSumProperty(root->right) && isSumProperty(root->left);
    }
    return 0;
}
+1
amishasahu328 | 3 weeks ago
int isSumProperty(Node *root)
{
// Add your code here
if(!root) return 1;
if(!root->left && !root->right) return 1;
int sum = 0;
if(root->left)
sum += root->left->data;
if(root->right)
sum += root->right->data;
return (sum == root->data && isSumProperty(root->left) && isSumProperty(root->right) );
}
+1
adarshgupta401 | 4 weeks ago
6 lines Java Soln
public static int isSumProperty(Node root)
{
if(root == null ||(root.left==null&&root.right==null)) return 1;
int childSum = 0;
if(root.left!= null) childSum += root.left.data;
if(root.right!= null) childSum += root.right.data;
return (childSum==root.data &&
isSumProperty(root.left)==1 &&
isSumProperty(root.right)==1
)?1:0;
}
0
mayank2021 | 1 month ago
C++ : 0.0/1.0
int isSumProperty(Node *root) {
    int left = 0, right = 0;
    if (root && (root->left || root->right)) {
        if (root->left) left = root->left->data;
        if (root->right) right = root->right->data;
        if (root->data == left + right) {
            left = isSumProperty(root->left);
            right = isSumProperty(root->right);
            if (left && right) return 1;
            else return 0;
        }
        else return 0;
    }
    return 1;
}
+1
crawler | 1 month ago
int isSumProperty(Node *root)
{
if(!root)
return 1;
if(!root->left and !root->right)
return 1;
int val = (root->left ? root->left->data : 0);
val += (root->right ? root->right->data : 0);
return val == root->data and isSumProperty(root->left) and isSumProperty(root->right);
}
0
detroix07 | 1 month ago
int helper(Node* root, int &f) {
    // base case
    if (!root) return 0;
    if (!root->left && !root->right) {
        return root->data;
    }
    int left = helper(root->left, f);
    int right = helper(root->right, f);
    if (left + right != root->data) f = 0;
    return root->data;
}

int isSumProperty(Node *root) {
    int f = 1;
    helper(root, f);
    return f;
}
+1
harshupadhayay906 | 1 month ago
int isSumProperty(Node *root) {
    if (!root || (!root->right && !root->left)) return 1;
    int sum = 0;
    if (root->right) sum += root->right->data;
    if (root->left) sum += root->left->data;
    if (sum == root->data)
        return isSumProperty(root->right) && isSumProperty(root->left);
    return 0;
}
0
tharabhai | 1 month ago
C++ recursive solution:
int isSumProperty(Node *root) {
    if (!root) return 1;
    if (isSumProperty(root->left) == 0 || isSumProperty(root->right) == 0)
        return 0;
    if (root->left != NULL && root->right != NULL) {
        if (root->data == (root->left->data + root->right->data)) return 1;
    }
    else if (root->left != NULL) {
        if (root->data == root->left->data) return 1;
    }
    else if (root->right != NULL) {
        if (root->data == root->right->data) return 1;
    }
    else return 1;
    return 0;
}
What is the need of wrapper classes in Java?
In the java.lang package you can find a set of classes that wrap a primitive value within an object; these are known as wrapper classes. Following is the list of primitive datatypes and their respective classes −
public class Sample {
   public static void main(String args[]) {
      Integer obj = new Integer("2526");   // wrap the value in an Integer object
      int i = obj.intValue();              // unwrap it back to a primitive int
      System.out.println(i);
   }
}
2526
Java provides primitive datatypes (char, byte, short, int, long, float, double, boolean) and reference types to store values.
Whenever we pass a primitive datatype to a method, its value is passed rather than a reference, so the method cannot modify the original argument. In such scenarios you can wrap the value in an object.
Classes in certain packages, such as java.util, handle only objects.
Collection types such as ArrayList and Vector store only objects, not primitive datatypes.
Your data needs to be in object form for synchronization, serialization, and multithreading.
Therefore, at times we need primitive datatypes in the form of objects, and in such scenarios you can use wrapper classes.
Docker Storage. In this section, we will discuss how... | by Sumeet Gyanchandani | Towards Data Science
In this section, we will discuss how docker stores data on the local file system, understand which layers are writable, and deepen our knowledge of persistent storage for containers.
Introduction
Docker File
Basic Docker Commands
Port and Volume Mapping
Docker Networking
Docker Storage (You are here!)
Docker Compose
Deleting Docker Entities
On a Linux system, docker stores data pertaining to images, containers, volumes, etc., under /var/lib/docker.
When we run the docker build command, docker builds one layer for each instruction in the dockerfile. These image layers are read-only layers. When we run the docker run command, docker builds container layer(s), which are read-write layers.
You can create new files on the container, for instance, temp.txt in the image below. You can also modify a file that belongs to the image layers on the container, for instance, app.py in the image below. When you do this, a local copy of that file is created on the container layer and the changes only live on the container — this is called the Copy-on-Write mechanism. This is important as several containers and child images use the same image layers. The life of the files on the container is as long as the container is alive. When the container is destroyed, the files/modifications on it are also destroyed. To persist the data, we can use volume mapping techniques that we saw in the previous section.
You can create a docker volume by using the docker volume create command. This command will create a volume in the /var/lib/docker/volumes directory.
docker volume create data_volume
Now when you run the docker run command, you can specify which volume to use using the -v flag. This is called Volume Mounting.
docker run -v data_volume:/var/lib/postgres postgres
If the volume does not exist, docker creates one for you. Now, even if the container is destroyed the data will persist in the volume.
If you want to have your data on a specific location on the docker host or already have existing data on the disk, you can mount this location on the container as well. This is called Bind Mounting.
docker run -v /data/postgres:/var/lib/postgres postgres
In the next section, we will learn about Docker Compose, its file, and its commands.
Reference:
[1] Mumshad Mannambeth, Docker for the Absolute Beginners (2020), KodeKloud.com
Spring Boot - How to load initial data on startup - onlinetutorialspoint
In Spring Boot, we can load initial data into the database at application startup. It is a powerful feature when we are working with different environments. In this tutorial, we will see how to load initial data on startup.
I wrote an article on Spring Boot + H2 database integration some time back, in which I loaded my SQL script, including DDL and DML, into the H2 DB while starting the application; check this for a complete example.
As we know that we can easily create the database tables with Spring Boot JPA entities like below.
package com.onlinetutorialspoint.model;
import javax.persistence.*;
@Entity
@Table(name = "items")
public class Item {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private long id;
@Column(name="item_name")
private String itemName;
@Column(name="item_category")
private String category;
public Item() {
}
public Item(String itemName, String category) {
this.itemName = itemName;
this.category = category;
}
public long getId() {
return id;
}
public String getItemName() {
return itemName;
}
public String getCategory() {
return category;
}
}
Using the above entity class, we can create an item table in any database, but it is an empty table for now; let's populate some data into it.
So coming to the loading of initial data while startup, we need to separate our DDL (create) and DML (inserts/updates) scripts in two different files that are schema.sql and data.sql respectively. That way Spring Boot can differentiate the scripts.
If you are working with multiple database vendors, for example MySQL and PostgreSQL, you can name these files like schema-mysql.sql or schema-postgresql.sql; similarly for the data.sql files.
Based on your database, you have to configure the below property in the .properties file.
# specify the vendor name here
spring.datasource.platform=mysql
Spring Boot always looks for these files on the classpath, hence we have to create them in the src/main/resources folder so that they are picked up automatically by the Spring context on startup.
schema.sql
CREATE TABLE `item` (
`id` INT(11) NOT NULL,
`name` VARCHAR(50) NULL DEFAULT NULL,
`category` VARCHAR(50) NULL DEFAULT NULL,
PRIMARY KEY (`id`)
);
data.sql
INSERT INTO `item` (`id`, `name`, `category`) VALUES (1, 'IPhone 6S', 'Mobile');
INSERT INTO `item` (`id`, `name`, `category`) VALUES (2, 'Samsung Galaxy', 'Mobile');
INSERT INTO `item` (`id`, `name`, `category`) VALUES (3, 'Lenovo', 'Laptop');
INSERT INTO `item` (`id`, `name`, `category`) VALUES (4, 'LG', 'Telivision');
You may have noticed something here: do we really need the schema.sql file at all? After all, the JPA entities already create the database tables at startup.
Usually they do, but there are scenarios where we have to create the tables from the startup scripts while still keeping the entities in the source. In such cases, we have to tell Spring to remove the ambiguity by setting the property below in the application.properties file.
spring.jpa.hibernate.ddl-auto=none
If you are using Spring Boot 2 or a later version, this database initialization works only for in-memory databases like H2, HSQLDB, etc. To make it work for other databases, we need to configure the spring.datasource.initialization-mode property in the .properties file as below.
spring.datasource.initialization-mode=always
Done!
Spring Boot in-memory DB example
Spring Boot database auto initialization
Happy Learning 🙂
Difference between text() and html() method in jQuery - GeeksforGeeks
21 Dec, 2020
text() Method: This method returns the text content of the selected elements. When setting content, it overwrites the content of all matched elements.
Syntax:
To return text content: $(selector).text()
To set text content: $(selector).text(content)
To set the text content using a function: $(selector).text(function(index, currentcontent))
html() Method: This method is used to set or return the content (innerHTML) of the selected elements. It returns the content of the first matched element, or sets the content of all matched elements.
Syntax:
Return content: $(selector).html()
Set content: $(selector).html(content)
Set content using a function: $(selector).html(function(index, currentcontent))
Example:
<!DOCTYPE html>
<html>
<head>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js">
    </script>
    <script>
        $(document).ready(function () {
            $("#button1").click(function () {
                alert("Using text()- " + $("#demo").text());
            });
            $("#button2").click(function () {
                prompt("Using html()- " + $("#demo").html());
            });
        });
    </script>
</head>
<body>
    <p id="demo">By clicking on the below buttons,
        <b>difference</b> between <i>text</i> and
        <i>html</i> in <b>jQuery</b> is shown.
    </p>
    <button id="button1">Text</button>
    <button id="button2">HTML</button>
</body>
</html>
Output:
Before clicking on buttons:
After clicking on Text Button:
After clicking on html Button:
Differences between text() and html() methods:
text() returns the plain text content of all matched elements, while html() returns the HTML markup (innerHTML) of the first matched element only.
When setting content, text() inserts the value as plain text (any HTML is escaped), while html() parses the value as HTML.
Supported Browsers:
Google Chrome
Internet Explorer
Firefox
Opera
Safari
C# - SortedList Class
The SortedList class represents a collection of key-and-value pairs that are sorted by the keys and are accessible by key and by index.
A sorted list is a combination of an array and a hash table. It contains a list of items that can be accessed using a key or an index. If you access items using an index, it is an ArrayList, and if you access items using a key, it is a Hashtable. The collection of items is always sorted by the key value.
The following table lists some of the commonly used properties of the SortedList class −
Capacity
Gets or sets the capacity of the SortedList.
Count
Gets the number of elements contained in the SortedList.
IsFixedSize
Gets a value indicating whether the SortedList has a fixed size.
IsReadOnly
Gets a value indicating whether the SortedList is read-only.
Item
Gets and sets the value associated with a specific key in the SortedList.
Keys
Gets the keys in the SortedList.
Values
Gets the values in the SortedList.
The following table lists some of the commonly used methods of the SortedList class −
public virtual void Add(object key, object value);
Adds an element with the specified key and value into the SortedList.
public virtual void Clear();
Removes all elements from the SortedList.
public virtual bool ContainsKey(object key);
Determines whether the SortedList contains a specific key.
public virtual bool ContainsValue(object value);
Determines whether the SortedList contains a specific value.
public virtual object GetByIndex(int index);
Gets the value at the specified index of the SortedList.
public virtual object GetKey(int index);
Gets the key at the specified index of the SortedList.
public virtual IList GetKeyList();
Gets the keys in the SortedList.
public virtual IList GetValueList();
Gets the values in the SortedList.
public virtual int IndexOfKey(object key);
Returns the zero-based index of the specified key in the SortedList.
public virtual int IndexOfValue(object value);
Returns the zero-based index of the first occurrence of the specified value in the SortedList.
public virtual void Remove(object key);
Removes the element with the specified key from the SortedList.
public virtual void RemoveAt(int index);
Removes the element at the specified index of SortedList.
public virtual void TrimToSize();
Sets the capacity to the actual number of elements in the SortedList.
The following example demonstrates the concept −
using System;
using System.Collections;
namespace CollectionsApplication {
class Program {
static void Main(string[] args) {
SortedList sl = new SortedList();
sl.Add("001", "Zara Ali");
sl.Add("002", "Abida Rehman");
sl.Add("003", "Joe Holzner");
sl.Add("004", "Mausam Benazir Nur");
sl.Add("005", "M. Amlan");
sl.Add("006", "M. Arif");
sl.Add("007", "Ritesh Saikia");
if (sl.ContainsValue("Nuha Ali")) {
Console.WriteLine("This student name is already in the list");
} else {
sl.Add("008", "Nuha Ali");
}
// get a collection of the keys.
ICollection key = sl.Keys;
foreach (string k in key) {
Console.WriteLine(k + ": " + sl[k]);
}
}
}
}
When the above code is compiled and executed, it produces the following result −
001: Zara Ali
002: Abida Rehman
003: Joe Holzner
004: Mausam Benazir Nur
005: M. Amlan
006: M. Arif
007: Ritesh Saikia
008: Nuha Ali
How to do text classification with CNNs, TensorFlow and word embedding | by Lak Lakshmanan | Towards Data Science
Suppose I gave you the title of an article "Amazing Flat version of Twitter Bootstrap" and asked you which publication that article appeared in: the New York Times, TechCrunch, or GitHub. What would be your guess? How about an article titled "Supreme Court to Hear Major Case on Partisan Districts"?
Did you guess GitHub and New York Times? Why? Words like Twitter and Major are likely to occur in any of the publications, but word sequences like Twitter Bootstrap and Supreme Court are more likely in GitHub and the New York Times respectively. Can we train a neural network to learn this?
Note: Estimators have now moved into core Tensorflow. Updated code that uses tf.estimator instead of tf.contrib.learn.estimator is now on GitHub — use the updated code as a starting point.
Machine learning means to learn from examples. To learn which publication is the likely source of an article given its title, we need lots of examples of article titles along with their source. Although it suffers from severe selection bias (since only articles of interest to the nerdy membership of HN are included), the BigQuery public dataset of Hacker News articles is a reasonable source of this information.
query = """
SELECT source, REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ') AS title
FROM (
  SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    title
  FROM `bigquery-public-data.hacker_news.stories`
  WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
"""
traindf = bq.Query(query + " AND MOD(ABS(FARM_FINGERPRINT(title)),4) > 0").execute().result().to_dataframe()
evaldf = bq.Query(query + " AND MOD(ABS(FARM_FINGERPRINT(title)),4) = 0").execute().result().to_dataframe()
Essentially, I pull the URL and the title from the Hacker News stories dataset in BigQuery and separate it into a training and evaluation dataset (See Datalab notebook for complete code). The possible labels are github, nytimes, or techcrunch. Here’s what the resulting dataset looks like:
I wrote the two Pandas dataframes out to CSV files (a total of 72,000 training examples approximately equally distributed between nytimes, github, and techcrunch).
My training dataset consists of the label (“source”) and a single input column (“title”). However, the title is not numeric and neural networks need numeric inputs. So, we need to convert the text input column to be numeric. How?
The simplest approach would be to one-hot encode the titles. Assuming that there are 72,000 unique titles in the dataset, we will end up with 72,000 columns. If we then train a neural network on this, the neural network will essentially have to memorize the titles — no further generalization is possible.
In order for the network to generalize, we need to convert the titles into numbers in such a way that similar titles end up with similar numbers. One way is to find the individual words in the title and map the words to unique numbers. Then, titles with words in common will have similar numbers for that part of the sequence. The set of unique words in the training dataset is called the vocabulary.
Assume that we have four titles:
lines = ['Some title', 'A longer title', 'An even longer title', 'This is longer than doc length']
Because the titles are all of varying length, I will pad out short titles with a dummy word and truncate very long titles. This way, I will get to deal with titles that all have the same length.
I can create the vocabulary using the following code (this is not ideal, since the vocabulary processor stores everything in memory; for larger datasets and more sophisticated preprocessing such as incorporating stop words and case-insensitivity, tf.transform is a better solution — that’s a topic for another blog post):
import tensorflow as tf
from tensorflow.contrib import lookup
from tensorflow.python.platform import gfile

MAX_DOCUMENT_LENGTH = 5
PADWORD = 'ZYXW'

# create vocabulary
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)
vocab_processor.fit(lines)
with gfile.Open('vocab.tsv', 'wb') as f:
    f.write("{}\n".format(PADWORD))
    for word, index in vocab_processor.vocabulary_._mapping.iteritems():
        f.write("{}\n".format(word))
N_WORDS = len(vocab_processor.vocabulary_)
In the code above, I will pad short titles with a PADWORD which I expect will never occur in actual text. The titles will be padded or truncated to a length of 5 words. I pass in the training dataset (“lines” in the above sample) and then write out the resulting vocabulary. The vocabulary turns out to be:
ZYXW
A
even
longer
title
This
doc
is
Some
An
length
than
<UNK>
Note that I added the padword and that the vocabulary processor found all the unique words in the set of lines. Finally, words that are encountered during evaluation/prediction that were not in the training dataset will be replaced by <UNK>, so that is also part of the vocabulary.
Given the vocabulary above, we can convert any title to a set of numbers:
table = lookup.index_table_from_file(
    vocabulary_file='vocab.tsv', num_oov_buckets=1,
    vocab_size=None, default_value=-1)
numbers = table.lookup(tf.constant('Some title'.split()))
with tf.Session() as sess:
    tf.tables_initializer().run()
    print "{} --> {}".format(lines[0], numbers.eval())
The code above will look up the words ‘Some’ and ‘title’ and return the indexes [8, 4] based on the vocabulary. Of course, in the actual training/prediction graph, we will need to make sure to pad/truncate as well. Let’s see how to do that next.
First, we start with the lines (each line is a title) and split the titles into words:
# string operations
titles = tf.constant(lines)
words = tf.string_split(titles)
This results in:
titles= ['Some title' 'A longer title' 'An even longer title'
 'This is longer than doc length']
words= SparseTensorValue(
    indices=array([[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1],
                   [2, 2], [2, 3], [3, 0], [3, 1], [3, 2], [3, 3], [3, 4], [3, 5]]),
    values=array(['Some', 'title', 'A', 'longer', 'title', 'An', 'even',
                  'longer', 'title', 'This', 'is', 'longer', 'than', 'doc',
                  'length'], dtype=object),
    dense_shape=array([4, 6]))
TensorFlow’s string_split() function ends up creating a SparseTensor. Talk about an overly helpful API. I don’t want that automatically created mapping though, so I will convert the sparse tensor to a dense one and then lookup the index from my own vocabulary:
# string operations
titles = tf.constant(lines)
words = tf.string_split(titles)
densewords = tf.sparse_tensor_to_dense(words, default_value=PADWORD)
numbers = table.lookup(densewords)
Now, the densewords and numbers are as expected (note the padding with the PADWORD):
dense= [['Some' 'title' 'ZYXW' 'ZYXW' 'ZYXW' 'ZYXW']
 ['A' 'longer' 'title' 'ZYXW' 'ZYXW' 'ZYXW']
 ['An' 'even' 'longer' 'title' 'ZYXW' 'ZYXW']
 ['This' 'is' 'longer' 'than' 'doc' 'length']]
numbers= [[ 8  4  0  0  0  0]
 [ 1  3  4  0  0  0]
 [ 9  2  3  4  0  0]
 [ 5  7  3 11  6 10]]
Note also that the numbers matrix has the width of the longest title in the dataset. Because this width will change with each batch that is processed, it is not ideal. For consistency, let’s pad it out to MAX_DOCUMENT_LENGTH and then truncate it:
padding = tf.constant([[0,0],[0,MAX_DOCUMENT_LENGTH]])
padded = tf.pad(numbers, padding)
sliced = tf.slice(padded, [0,0], [-1, MAX_DOCUMENT_LENGTH])
This creates a batchsize x 5 matrix where shorter titles are padded with zero:
padding= [[0 0]
 [0 5]]
padded= [[ 8  4  0  0  0  0  0  0  0  0  0]
 [ 1  3  4  0  0  0  0  0  0  0  0]
 [ 9  2  3  4  0  0  0  0  0  0  0]
 [ 5  7  3 11  6 10  0  0  0  0  0]]
sliced= [[ 8  4  0  0  0]
 [ 1  3  4  0  0]
 [ 9  2  3  4  0]
 [ 5  7  3 11  6]]
I used a MAX_DOCUMENT_LENGTH of 5 in the examples above so that I could show you what is happening. In the real dataset, titles are longer than 5 words. So, I'll use
MAX_DOCUMENT_LENGTH = 20
The shape of the sliced matrix will be batchsize x MAX_DOCUMENT_LENGTH, i.e. batchsize x 20.
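The pad-then-slice trick is easier to see outside of TensorFlow. Here is an equivalent NumPy sketch using the numbers matrix from the toy example (MAX_DOCUMENT_LENGTH = 5):

```python
import numpy as np

MAX_DOCUMENT_LENGTH = 5
numbers = np.array([[8, 4, 0, 0,  0,  0],
                    [1, 3, 4, 0,  0,  0],
                    [9, 2, 3, 4,  0,  0],
                    [5, 7, 3, 11, 6, 10]])

# pad MAX_DOCUMENT_LENGTH zeros on the right, then keep only the
# first MAX_DOCUMENT_LENGTH columns -- mirrors tf.pad + tf.slice
padded = np.pad(numbers, ((0, 0), (0, MAX_DOCUMENT_LENGTH)))
sliced = padded[:, :MAX_DOCUMENT_LENGTH]
print(sliced.shape)  # (4, 5): batchsize x MAX_DOCUMENT_LENGTH
```

Because the input is first padded out and only then truncated, the result has the same width whether a title is shorter or longer than MAX_DOCUMENT_LENGTH.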
Now that our words have been replaced by numbers, we could simply do one-hot encoding but that would result in an extremely wide input — there are thousands of unique words in the titles dataset. A better approach is to reduce the dimensionality of the input — this is done through an embedding layer (see full code here):
EMBEDDING_SIZE = 10
embeds = tf.contrib.layers.embed_sequence(sliced,
  vocab_size=N_WORDS, embed_dim=EMBEDDING_SIZE)
Once we have the embedding, we now have a representation for each word in the title. The result of embedding is a batchsize x MAX_DOCUMENT_LENGTH x EMBEDDING_SIZE tensor because a title consists of MAX_DOCUMENT_LENGTH words, and each word is now represented by EMBEDDING_SIZE numbers. (Get into the habit of figuring out tensor shapes at each step of your TensorFlow code — this will help you understand what the code is doing, and what the dimensions mean).
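That shape reasoning can be verified with a sketch: an embedding layer is just a row lookup into a vocab_size x embed_dim matrix. The NumPy snippet below uses the sizes from the text (the batch size of 4 and vocabulary size of 13 are made up for illustration):

```python
import numpy as np

batch_size, max_len = 4, 20      # batchsize x MAX_DOCUMENT_LENGTH
vocab_size, embed_size = 13, 10  # N_WORDS (made up), EMBEDDING_SIZE

embedding_matrix = np.random.rand(vocab_size, embed_size)
word_ids = np.random.randint(0, vocab_size, size=(batch_size, max_len))

# each word id picks out one row of the embedding matrix
embeds = embedding_matrix[word_ids]
print(embeds.shape)  # (4, 20, 10)
```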
We could, if we wanted, simply wire the embedded words through a deep neural network, train it, and go our merry way. But just using words by themselves does not take advantage of the fact that word sequences have specific meanings. After all, “supreme” could appear in a number of contexts, but “supreme court” has a much more specific connotation. How do we learn word sequences?
One way to learn sequences is to embed not just unique words, but also digrams (word pairs), trigrams (word triplets), etc. However, with a relatively small dataset, this starts becoming akin to one-hot encoding each unique word in the dataset.
A better approach is to add a convolutional layer. Convolution is simply a way of applying a moving window to your input data and letting the neural network learn the weights to apply to adjacent words. Although more common when working with image data, it is a handy way to help any neural network learn about the correlations between nearby inputs:
WINDOW_SIZE = EMBEDDING_SIZE
STRIDE = int(WINDOW_SIZE/2)
conv = tf.contrib.layers.conv2d(embeds, 1, WINDOW_SIZE,
         stride=STRIDE, padding='SAME')  # (?, 4, 1)
conv = tf.nn.relu(conv)  # (?, 4, 1)
words = tf.squeeze(conv, [2])  # (?, 4)
Recall that the result of embedding is a 20 x 10 tensor (let’s disregard the batchsize for now; all operations here are carried out on a single title at a time). I am now applying a weighted average in a 10x10 window to the embedded representation of the title, moving the window by 5 words (STRIDE=5), and applying it again. So, I will have 4 such convolution results. I then apply a non-linear transform (relu) to the convolution results.
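The count of four can be checked directly: with 'SAME' padding, a strided convolution produces ceil(input_length / stride) output positions:

```python
import math

seq_len = 20  # MAX_DOCUMENT_LENGTH
stride = 5    # STRIDE = WINDOW_SIZE / 2 = 10 / 2

# 'SAME' padding: output positions = ceil(input length / stride)
n_outputs = math.ceil(seq_len / stride)
print(n_outputs)  # 4 convolution results per title
```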
I have four results now. I can simply wire them through a dense layer to the output layer:
n_classes = len(TARGETS)
logits = tf.contrib.layers.fully_connected(words, n_classes,
           activation_fn=None)
If you are used to image models, you might be surprised that I used a convolutional layer, but no maxpool layer. The reason to use a maxpool layer is to add spatial invariance to the network — intuitively speaking, you want to find a cat regardless of where in the image the cat is. However, the spatial location within the title is quite important. It is quite possible that New York Times articles’ titles tend to start with different words than GitHub ones. Hence, I didn’t use a maxpool layer for this task.
Given the logits, we can figure out the source by essentially doing TARGETS[max(logits)]. In TensorFlow, this is done using tf.gather:
predictions_dict = {
  'source': tf.gather(TARGETS, tf.argmax(logits, 1)),
  'class': tf.argmax(logits, 1),
  'prob': tf.nn.softmax(logits)
}
Just to be complete, I also send along the actual class index and probabilities of each class.
With the code all written (see full code here), I can train it on Cloud ML Engine:
OUTDIR=gs://${BUCKET}/txtcls1/trained_model
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gsutil cp txtcls1/trainer/*.py $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=$REGION \
  --module-name=trainer.task \
  --package-path=$(pwd)/txtcls1/trainer \
  --job-dir=$OUTDIR \
  --staging-bucket=gs://$BUCKET \
  --scale-tier=BASIC --runtime-version=1.2 \
  -- \
  --bucket=${BUCKET} \
  --output_dir=${OUTDIR} \
  --train_steps=36000
The dataset is quite small, so training finished in less than five minutes and I got an accuracy on the evaluation dataset of 73%.
I can then deploy the model as a microservice to Cloud ML Engine:
MODEL_NAME="txtcls"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls \
  gs://${BUCKET}/txtcls1/trained_model/export/Servo/ | tail -1)
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model \
  ${MODEL_NAME} --origin ${MODEL_LOCATION}
To get the model to predict, we can send it a JSON request:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json

credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1beta1', credentials=credentials,
  discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1beta1_discovery.json')

request_data = {'instances': [
  {'title': 'Supreme Court to Hear Major Case on Partisan Districts'},
  {'title': 'Furan -- build and push Docker images from GitHub to target'},
  {'title': 'Time Warner will spend $100M on Snapchat original shows and ads'},
]}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'txtcls', 'v1')
response = api.projects().predict(body=request_data, name=parent).execute()
print "response={0}".format(response)
This results in a JSON response:
response={u'predictions': [{u'source': u'nytimes', u'prob': [0.7775614857673645, 5.86951500736177e-05, 0.22237983345985413], u'class': 0}, {u'source': u'github', u'prob': [0.1087314561009407, 0.8909648656845093, 0.0003036781563423574], u'class': 1}, {u'source': u'techcrunch', u'prob': [0.0021869686897844076, 1.563105769264439e-07, 0.9978128671646118], u'class': 2}]}
The trained model predicts that the Supreme Court article is 78% likely to come from New York Times. The Docker article is 89% likely to be from GitHub according to the service and the Time Warner one is 100% likely to be from TechCrunch. That’s 3/3.
Resources: All the code is on GitHub here: https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/blogs/textclassification
ScrollView in Android - GeeksforGeeks | 24 May, 2021
In Android, a ScrollView is a view group that is used to make vertically scrollable views. A scroll view contains a single direct child only. In order to place multiple views in the scroll view, one needs to make a view group (like LinearLayout) a direct child and then we can define many views inside it. A ScrollView supports vertical scrolling only, so in order to create a horizontally scrollable view, HorizontalScrollView is used.
Inherited attributes: ScrollView inherits attributes from FrameLayout, View, and ViewGroup.
This example demonstrates the steps involved in creating a ScrollView in Android using Kotlin.
Step 1: Create a new project
Click on File, then New => New Project.
Choose “Empty Activity” for the project template.
Select language as Kotlin.
Select the minimum SDK as per your need.
Step 2: Modify strings.xml
Add some strings inside the strings.xml file to display those strings in the app.
<resources>
<string name="app_name">gfgapp_scrollview</string>
<string name="scrolltext">Kotlin is a statically typed,
general-purpose programming language developed
by JetBrains, that has built world-class IDEs
like IntelliJ IDEA, PhpStorm, Appcode, etc.
It was first introduced by JetBrains in 2011
and a new language for the JVM. Kotlin is
object-oriented language, and a “better language”
than Java, but still be fully interoperable
with Java code. Kotlin is sponsored by Google,
announced as one of the official languages for
Android Development in 2017.
Advantages of Kotlin language:
Easy to learn – Basic is almost similar to java.
If anybody worked in java then easily understand
in no time. Kotlin is multi-platform – Kotlin is
supported by all IDEs of java so you can write
your program and execute them on any machine
which supports JVM. It’s much safer than Java.
It allows using the Java frameworks and libraries
in your new Kotlin projects by using advanced
frameworks without any need to change the whole
project in Java. Kotlin programming language,
including the compiler, libraries and all the
tooling is completely free and open source and
available on github. Here is the link for
Github https://github.com/JetBrains/kotlin
Applications of Kotlin language:
You can use Kotlin to build Android Application.
Kotlin can also compile to JavaScript, and making
it available for the frontend. It is also designed
to work well for web development and server-side
development.Kotlin is a statically typed, general-purpose
programming language developed by JetBrains that
has built world-class IDEs like IntelliJ IDEA,
PhpStorm, Appcode, etc. It was first introduced
by JetBrains in 2011.Kotlin is object-oriented
language and a better language than Java, but still
be fully interoperable with Java code. A constructor
is a special member function that is invoked when
an object of the class is created primarily to initialize
variables or properties. A class needs to have a constructor
and if we do not declare a constructor, then the compiler
generates a default constructor.
Kotlin has two types of constructors –
Primary Constructor
Secondary Constructor
A class in Kotlin can have at most one primary
constructor, and one or more secondary constructors.
The primary constructor
initializes the class, while the secondary
constructor is used
to initialize the class and introduce some extra logic.
Explanation:
When we create the object add for the class then
the values 5 and 6
passes to the constructor. The constructor
parameters a and b
initialize with the parameters 5 and 6 respectively.
The local variable c contains the sum of variables.
In the main, we access the property of
constructor using ${add.c}.
Explanation:
Here, we have initialized the constructor
parameters with some
default values emp_id = 100 and emp_name = “abc”.
When the object emp is created we passed the values for
both the parameters so it prints those values.
But, at the time of object emp2 creation,
we have not passed
the emp_name so initializer block uses
the default values and
print to the standard output.</string>
</resources>
Step 3: Modify activity_main.xml
Add the ScrollView and inside the ScrollView add a TextView to display the strings that are taken in the strings.xml file.
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:layout_editor_absoluteX="0dp"
tools:layout_editor_absoluteY="-127dp">
<TextView
android:id="@+id/scrolltext"
style="@style/AppTheme"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/scrolltext"
android:textColor="@color/green"/>
</ScrollView>
</androidx.constraintlayout.widget.ConstraintLayout>
As already mentioned above, a ScrollView can only contain one direct child; in this case, the child is a TextView. Note that the text inside the TextView is given as @string/scrolltext, which refers to a string resource inside the strings.xml file.
Step 4: MainActivity.kt file
There is nothing to do with the MainActivity.kt file, so keep it as it is.
package com.example.gfgapp_scrollview
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
}
}
Count the number of subarrays | Practice | GeeksforGeeks | Given an array A[] of N integers and a range(L, R). The task is to find the number of subarrays having sum in the range L to R (inclusive).
Example 1:
Input:
N = 3, L = 3, R = 8
A[] = {1, 4, 6}
Output:
3
Explanation:
The subarrays are [1,4], [4] and [6]
Example 2:
Input:
N = 4, L = 4, R = 13
A[] = {2, 3, 5, 8}
Output:
6
Explanation:
The subarrays are [2,3], [2,3,5],
[3,5],[5], [5,8] and [8]
Your Task:
You don't need to read input or print anything. Complete the function countSubarray( ) which takes the integer N , the array A[], the integer L and the integer R as input parameters and returns the number of subarays.
Expected Time Complexity: O(N)
Expected Auxiliary Space: O(1)
Constraints:
1 ≤ N ≤ 106
1 ≤ A[] ≤ 109
1 ≤ L ≤ R ≤ 1015
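The two examples above can be sanity-checked with a brute-force counter (a Python sketch; this is O(N^2), not the O(N) sliding-window approach the problem expects):

```python
def count_subarrays(A, L, R):
    # try every subarray A[i..j] and count sums inside [L, R]
    count = 0
    for i in range(len(A)):
        s = 0
        for j in range(i, len(A)):
            s += A[j]
            if L <= s <= R:
                count += 1
    return count

print(count_subarrays([1, 4, 6], 3, 8))      # 3
print(count_subarrays([2, 3, 5, 8], 4, 13))  # 6
```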
+3
jayesh292 weeks ago
Java 0.85 sec -
long countSubarray(int N,int arr[],long L,long R) {
long ans2 = count(N,arr,R);
long ans1 = count(N,arr,L-1);
return (ans2-ans1);
}
long count(int n,int arr[],long x){
long ans=0, sum=0;
for(int r=0,l=0;r<n;r++){
sum+=arr[r];
while(sum>x){
sum-=arr[l++];
}
ans+=(r-l+1);
}
return ans;
}
0
masudkabir001 month ago
class Solution {
public:
    long long int solve(vector<int> A, int N, long long int a) {
        long long j = 0;
        long long ans = 0;
        long long sum = 0;
        for (int i = 0; i < N; i++) {
            sum += A[i];
            while (sum > a) {
                sum -= A[j];
                j++;
            }
            ans += i - j + 1;
        }
        return ans;
    }
    long long countSubarray(int N, vector<int> A, long long L, long long R) {
        // code here
        return solve(A, N, R) - solve(A, N, L - 1);
    }
};
0
lindan1233 months ago
long long int help(int n, vector<int>&a, long long int num)
{
long long int st=0;
long long int ans =0;
long long int sum=0;
for(long long int ed=0;ed<n;ed++)
{
sum+=a[ed];
while(sum>num)
{
sum-=a[st];
st++;
}
ans += ed-st+1;
}
return ans;
}
long long countSubarray(int N,vector<int> A,long long L,long long R) {
return help(N,A,R)-help(N,A,L-1);
}
Time Taken : 0.4
Cpp
0
tilmandressler5 months ago
Can someone help me? It works for all the test-cases I could come up but when I submit, it tells me it takes too long( although everything is instantanous on my machine)
I know its not O(N), but I'm doing the recommended arrays slider thing. Don't know how that's supposed to be O(N).
long long countSubarray(int N, vector<int> A, long long L, long long R) {
int start_index = 0; //where the array slider starts
int curr_index = 0; // where its atm
long long partial_sum = 0; //the sum of array slider
int result = 0; // how many arrays have been found
while (curr_index != N)
{
partial_sum += A[curr_index];
if (partial_sum > R)
{
while (partial_sum > R && start_index != curr_index)
{
partial_sum -= A[start_index];
start_index++;
}
}
if (partial_sum > R)
{
curr_index++;
start_index = curr_index;
partial_sum = 0;
continue;
}
if (partial_sum >= L) //lower boundary
{
long long lower_bound_sum = 0;
int bound = curr_index;
while (lower_bound_sum + A[bound] < L && bound != start_index)
{
lower_bound_sum += A[bound];
bound--;
}
result += (bound - start_index + 1);
}
curr_index++;
}
return result;
}
+1
shagunagl75 months ago
class Solution {
public:
long long countSubarray(int n,vector<int> a,long long l,long long r) {
// code here
long long x=0,y=0;
l=l-1; /*as reference- leetcode.com/problems/binary-subarrays-with-sum/discuss/1174685/Sliding-Window-C%2B%2B-O(1)space*/
x=helper(a,n,l);
y=helper(a,n,r);
return y-x;
}
long long helper(vector<int>&a,int n,long long t){
long long i=0,j=0,sum=0,ans=0;
while(j<n){
sum+=a[j];
while(sum>t){
sum-=a[i];
i++;
}
ans+=(j-i+1);
j++;
}
return ans;
}
};
+3
ritumodi55 months ago
We can solve this using sliding window and a key insight.
Getting the number of subarrays with sum in the range [L, R] using sliding window is difficult, but we can transform the problem into another easier one - Find number of subarrays with sum ≤ k.
How can we use that here?
Well, number of subarrays with sum in range [L, R] = (number of subarrays with sum ≤ R) - (number of subarrays with sum ≤ L-1).
And that's it, the code then becomes pretty easy.
Here is my solution in Java:
class Solution {
long countSubarray(int N,int A[],long L,long R) {
return count(A, R) - count(A, L-1);
}
// number of subarrays with sum <= limit
private long count(int[] A, long limit){
int i = 0, j = 0;
long sum = 0, ans = 0;
while(j < A.length){
sum += A[j];
while(sum > limit) sum -= A[i++];
ans += j - i + 1;
j++;
}
return ans;
}
}
+4
_beerus_sama_5 months ago
C++ easiest solution
#define ll long long
class Solution {
public:
/*
find the number of subarrays having sum less than or equal to R.
From this count, subtract the number of subarrays
having sum less than or equal to L-1.
*/
long long count_subarray_sum_less_than_k(int n,vector<int> &a,long long k)
{
ll res=0,l=0,r=0,csum=0;
while(r<n)
{
csum += a[r];
while(csum>k)
csum -= a[l++];
res += r-l+1;
r++;
}
return res;
}
long long countSubarray(int n,vector<int> a,long long l,long long r) {
// code here
ll lessThanl_1 = count_subarray_sum_less_than_k(n,a,l-1);
ll lessThanR = count_subarray_sum_less_than_k(n,a,r);
return lessThanR - lessThanl_1;
}
};
+6
tushargupta19995 months ago
Concise C++ solution
class Solution {
long long countLessThan(vector<int> arr, long long lim) {
long long ans = 0;
for(long long i=0, j=0, cur=0; j<arr.size(); j++) {
cur += arr[j];
while(cur > lim) cur -= arr[i++];
ans += j-i;
}
return ans;
}
public:
long long countSubarray(int N,vector<int> A,long long L,long long R) {
return countLessThan(A, R) - countLessThan(A,L-1);
}
};
Ref: Number of subarrays having sum less than K
How to create a new table from merging two tables with MySQL union? | The following is the syntax to merge two tables using MySQL union
create table yourTableName
(
select *from yourTableName1
)
UNION
(
select *from yourTableName2
);
To understand the above syntax, let us create a table. The query to create first table is as follows
mysql> create table Old_TableDemo
-> (
-> UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> UserName varchar(20)
-> );
Query OK, 0 rows affected (0.63 sec)
The query to create second table is as follows
mysql> create table Old_TableDemo2
-> (
-> UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> UserName varchar(20)
-> );
Query OK, 0 rows affected (0.60 sec)
Insert some records in the first table using insert command. The query is as follows −
mysql> insert into Old_TableDemo(UserName) values('John');
Query OK, 1 row affected (0.22 sec)
mysql> insert into Old_TableDemo(UserName) values('Carol');
Query OK, 1 row affected (0.15 sec)
Display all records from the first table using select statement. The query is as follows −
mysql> select *from Old_TableDemo;
The following is the output
+--------+----------+
| UserId | UserName |
+--------+----------+
| 1 | John |
| 2 | Carol |
+--------+----------+
2 rows in set (0.00 sec)
Now you can insert some records in the second table using insert command. The query is as follows −
mysql> insert into Old_TableDemo2(UserName) values('Larry');
Query OK, 1 row affected (0.22 sec)
mysql> insert into Old_TableDemo2(UserName) values('Sam');
Query OK, 1 row affected (0.10 sec)
Display all records from the second table using select statement. The query is as follows −
mysql> select *from Old_TableDemo2;
The following is the output
+--------+----------+
| UserId | UserName |
+--------+----------+
| 1 | Larry |
| 2 | Sam |
+--------+----------+
2 rows in set (0.00 sec)
Here is the query to create a new table from merging two tables with union
mysql> create table UserTableDemo
-> (
-> select *from Old_TableDemo
-> )
-> UNION
-> (
-> select *from Old_TableDemo2
-> );
Query OK, 4 rows affected (1.18 sec)
Records: 4 Duplicates: 0 Warnings: 0
Let us check the table records of new table. The query is as follows −
mysql> select *from UserTableDemo;
The following is the output
+--------+----------+
| UserId | UserName |
+--------+----------+
| 1 | John |
| 2 | Carol |
| 1 | Larry |
| 2 | Sam |
+--------+----------+
4 rows in set (0.00 sec)
Dictionary Methods in Java | Dictionary is an abstract class that represents a key/value storage repository and operates much like Map. Given a key and value, you can store the value in a Dictionary object. Once the value is stored, you can retrieve it by using its key. Thus, like a map, a dictionary can be thought of as a list of key/value pairs.
The methods defined by Dictionary are listed below −

elements() − Returns an enumeration of the values contained in the dictionary.
get(Object key) − Returns the value to which the key is mapped, or null if there is no mapping.
isEmpty() − Tests if this dictionary maps no keys to values.
keys() − Returns an enumeration of the keys contained in the dictionary.
put(K key, V value) − Maps the specified key to the specified value.
remove(Object key) − Removes the key (and its corresponding value) from the dictionary.
size() − Returns the number of entries (distinct keys) in the dictionary.

Following is an example implementing the put() and get() methods of the Dictionary class −
import java.util.*;
public class Demo {
public static void main(String[] args) {
Dictionary dictionary = new Hashtable();
dictionary.put("20", "John");
dictionary.put("40", "Tom");
dictionary.put("60", "Steve");
dictionary.put("80", "Kevin");
dictionary.put("100", "Ryan");
dictionary.put("120", "Tim");
dictionary.put("140", "Jacob");
dictionary.put("160", "David");
System.out.println("Value at key 20 = " + dictionary.get("20"));
System.out.println("Value at key 40 = " + dictionary.get("40"));
System.out.println("Value at key 30 = " + dictionary.get("30"));
System.out.println("Value at key 90 = " + dictionary.get("90"));
}
}
Value at key 20 = John
Value at key 40 = Tom
Value at key 30 = null
Value at key 90 = null
Let us see another example wherein we are displaying the Dictionary values as well, using the elements() method −
import java.util.*;
public class Demo {
public static void main(String[] args) {
Dictionary dictionary = new Hashtable();
dictionary.put("20", "John");
dictionary.put("40", "Tom");
dictionary.put("60", "Steve");
dictionary.put("80", "Kevin");
dictionary.put("100", "Ryan");
dictionary.put("120", "Tim");
dictionary.put("140", "Jacob");
dictionary.put("160", "David");
System.out.println("Dictionary Values...");
for (Enumeration i = dictionary.elements(); i.hasMoreElements();) {
System.out.println(i.nextElement());
}
System.out.println("Value at key 20 = " + dictionary.get("20"));
System.out.println("Value at key 40 = " + dictionary.get("40"));
System.out.println("Value at key 30 = " + dictionary.get("30"));
System.out.println("Value at key 90 = " + dictionary.get("90"));
}
}
Dictionary Values...
Tom
Jacob
Steve
Ryan
David
John
Kevin
Tim
Value at key 20 = John
Value at key 40 = Tom
Value at key 30 = null
Value at key 90 = null
R - Chi Square Test | Chi-Square test is a statistical method to determine if two categorical variables have a significant correlation between them. Both those variables should be from same population and they should be categorical like − Yes/No, Male/Female, Red/Green etc.
For example, we can build a data set with observations on people's ice-cream buying pattern and try to correlate the gender of a person with the flavor of the ice-cream they prefer. If a correlation is found we can plan for appropriate stock of flavors by knowing the number of gender of people visiting.
The function used for performing chi-Square test is chisq.test().
The basic syntax for creating a chi-square test in R is −
chisq.test(data)
Following is the description of the parameters used −
data is the data in form of a table containing the count value of the variables in the observation.
data is the data in form of a table containing the count value of the variables in the observation.
We will take the Cars93 data in the "MASS" library which represents the sales of different models of car in the year 1993.
library("MASS")
print(str(Cars93))
When we execute the above code, it produces the following result −
'data.frame': 93 obs. of 27 variables:
$ Manufacturer : Factor w/ 32 levels "Acura","Audi",..: 1 1 2 2 3 4 4 4 4 5 ...
$ Model : Factor w/ 93 levels "100","190E","240",..: 49 56 9 1 6 24 54 74 73 35 ...
$ Type : Factor w/ 6 levels "Compact","Large",..: 4 3 1 3 3 3 2 2 3 2 ...
$ Min.Price : num 12.9 29.2 25.9 30.8 23.7 14.2 19.9 22.6 26.3 33 ...
$ Price : num 15.9 33.9 29.1 37.7 30 15.7 20.8 23.7 26.3 34.7 ...
$ Max.Price : num 18.8 38.7 32.3 44.6 36.2 17.3 21.7 24.9 26.3 36.3 ...
$ MPG.city : int 25 18 20 19 22 22 19 16 19 16 ...
$ MPG.highway : int 31 25 26 26 30 31 28 25 27 25 ...
$ AirBags : Factor w/ 3 levels "Driver & Passenger",..: 3 1 2 1 2 2 2 2 2 2 ...
$ DriveTrain : Factor w/ 3 levels "4WD","Front",..: 2 2 2 2 3 2 2 3 2 2 ...
$ Cylinders : Factor w/ 6 levels "3","4","5","6",..: 2 4 4 4 2 2 4 4 4 5 ...
$ EngineSize : num 1.8 3.2 2.8 2.8 3.5 2.2 3.8 5.7 3.8 4.9 ...
$ Horsepower : int 140 200 172 172 208 110 170 180 170 200 ...
$ RPM : int 6300 5500 5500 5500 5700 5200 4800 4000 4800 4100 ...
$ Rev.per.mile : int 2890 2335 2280 2535 2545 2565 1570 1320 1690 1510 ...
$ Man.trans.avail : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 1 1 1 1 ...
$ Fuel.tank.capacity: num 13.2 18 16.9 21.1 21.1 16.4 18 23 18.8 18 ...
$ Passengers : int 5 5 5 6 4 6 6 6 5 6 ...
$ Length : int 177 195 180 193 186 189 200 216 198 206 ...
$ Wheelbase : int 102 115 102 106 109 105 111 116 108 114 ...
$ Width : int 68 71 67 70 69 69 74 78 73 73 ...
$ Turn.circle : int 37 38 37 37 39 41 42 45 41 43 ...
$ Rear.seat.room : num 26.5 30 28 31 27 28 30.5 30.5 26.5 35 ...
$ Luggage.room : int 11 15 14 17 13 16 17 21 14 18 ...
$ Weight : int 2705 3560 3375 3405 3640 2880 3470 4105 3495 3620 ...
$ Origin : Factor w/ 2 levels "USA","non-USA": 2 2 2 2 2 1 1 1 1 1 ...
$ Make : Factor w/ 93 levels "Acura Integra",..: 1 2 4 3 5 6 7 9 8 10 ...
The above result shows the dataset has many Factor variables which can be considered as categorical variables. For our model we will consider the variables "AirBags" and "Type". Here we aim to find out any significant correlation between the types of car sold and the type of Air bags it has. If correlation is observed we can estimate which types of cars can sell better with what types of air bags.
# Load the library.
library("MASS")
# Create a data frame from the main data set.
car.data <- data.frame(Cars93$AirBags, Cars93$Type)
# Create a table with the needed variables.
car.data = table(Cars93$AirBags, Cars93$Type)
print(car.data)
# Perform the Chi-Square test.
print(chisq.test(car.data))
When we execute the above code, it produces the following result −
Compact Large Midsize Small Sporty Van
Driver & Passenger 2 4 7 0 3 0
Driver only 9 7 11 5 8 3
None 5 0 4 16 3 6
Pearson's Chi-squared test
data: car.data
X-squared = 33.001, df = 10, p-value = 0.0002723
Warning message:
In chisq.test(car.data) : Chi-squared approximation may be incorrect
The result shows a p-value of less than 0.05, which indicates a strong correlation.
CompoundName getAll() method in Java with Examples - GeeksforGeeks | 27 Mar, 2020
The getAll() method of a javax.naming.CompoundName class is used to return all the component of this compound object as enumeration of string. The update effects applied to this compound name on this enumeration is undefined.
Syntax:
public Enumeration getAll()
Parameters: This method accepts nothing.
Return value: This method returns a non-null enumeration of the components of this compound name. Each element of the enumeration is of class String.
Below programs illustrate the CompoundName.getAll() method:

Program 1:
// Java program to demonstrate
// CompoundName.getAll()

import java.util.Enumeration;
import java.util.Properties;
import javax.naming.CompoundName;
import javax.naming.InvalidNameException;

public class GFG {
    public static void main(String[] args)
        throws InvalidNameException
    {
        // need properties for CompoundName
        Properties props = new Properties();
        props.put("jndi.syntax.separator", ":");
        props.put("jndi.syntax.direction", "left_to_right");

        // create compound name object
        CompoundName compoundName
            = new CompoundName("a:b:z:y:x", props);

        // apply getAll()
        Enumeration<String> components = compoundName.getAll();

        // print value
        int i = 0;
        while (components.hasMoreElements()) {
            System.out.println("position " + i + " :"
                               + components.nextElement());
            i++;
        }
    }
}
position 0 :a
position 1 :b
position 2 :z
position 3 :y
position 4 :x
Program 2:
// Java program to demonstrate
// CompoundName.getAll() method

import java.util.Enumeration;
import java.util.Properties;
import javax.naming.CompoundName;
import javax.naming.InvalidNameException;

public class GFG {
    public static void main(String[] args)
        throws InvalidNameException
    {
        // need properties for CompoundName
        Properties props = new Properties();
        props.put("jndi.syntax.separator", "/");
        props.put("jndi.syntax.direction", "left_to_right");

        // create compound name object
        CompoundName compoundName
            = new CompoundName("c/e/d/v/a/b/z/y/x/f", props);

        // apply getAll()
        Enumeration<String> components = compoundName.getAll();

        // print value
        int i = 0;
        while (components.hasMoreElements()) {
            System.out.println("position " + i + " :"
                               + components.nextElement());
            i++;
        }
    }
}
position 0 :c
position 1 :e
position 2 :d
position 3 :v
position 4 :a
position 5 :b
position 6 :z
position 7 :y
position 8 :x
position 9 :f
References: https://docs.oracle.com/javase/10/docs/api/javax/naming/CompoundName.html#getAll()
How to get nth row in a Pandas DataFrame? | To get the nth row in a Pandas DataFrame, we can use the iloc() method. For example, df.iloc[4] will return the 5th row because row numbers start from 0.
Make two-dimensional, size-mutable, potentially heterogeneous tabular data, df.
Print input DataFrame, df.
Initialize a variable nth_row.
Use iloc() method to get nth row.
Print the returned DataFrame.
import pandas as pd
df = pd.DataFrame(
dict(
name=['John', 'Jacob', 'Tom', 'Tim', 'Ally'],
marks=[89, 23, 100, 56, 90],
subjects=["Math", "Physics", "Chemistry", "Biology", "English"]
)
)
print "Input DataFrame is:\n", df
nth_row = 3
df = df.iloc[nth_row]
print "Row ", nth_row, "of the DataFrame is: \n", df
Input DataFrame is:
name marks subjects
0 John 89 Math
1 Jacob 23 Physics
2 Tom 100 Chemistry
3 Tim 56 Biology
4 Ally 90 English
Row 3 of the DataFrame is:
name Tim
marks 56
subjects Biology
Name: 3, dtype: object
How to remove a directory in R? - GeeksforGeeks | 14 Sep, 2021
In this article, we will discuss how to remove a directory using R programming language.
To remove a directory in R we use unlink(). This function deletes the named directory.
Syntax: unlink(directory-name, recursive = BOOLEAN)
Parameter:
directory-name: a character vector with the names of the directories to be deleted.
recursive: logical; if TRUE, the directory and its contents are deleted recursively. If FALSE, directories are not deleted at all (not even empty ones).
Return: It normally returns 0 for success and 1 for failure. Not being able to delete a non-existent file is not counted as a failure, so 0 is returned in that case as well.
Note: File naming conventions depend on the platform.
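As an aside for readers coming from Python, the closest standard-library analogue of `unlink(..., recursive = TRUE)` is `shutil.rmtree`, which likewise deletes a directory tree recursively, while `os.rmdir` only removes an empty directory. A sketch using a throwaway temporary directory:

```python
import os
import shutil
import tempfile

# Create a throwaway directory tree to delete
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Demo", "sub"))

# shutil.rmtree deletes a directory and its contents recursively,
# like unlink("Demo", recursive = TRUE) in R
shutil.rmtree(os.path.join(root, "Demo"))
print(os.path.exists(os.path.join(root, "Demo")))  # False

# os.rmdir only removes an *empty* directory
os.rmdir(root)
```

Unlike R's unlink, `shutil.rmtree` raises an exception on failure rather than returning a status code.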
Directory in use:
Directory used
Example: Removing directory using R programming language
R
# list files or directories in working directory
list.files(path=".", pattern=NULL, all.files=FALSE,
           full.names=FALSE)

# delete the directory Demo in the working directory
unlink("Demo", recursive=TRUE)

# list files or directories in working directory
# after deletion
list.files(path=".", pattern=NULL, all.files=FALSE,
           full.names=FALSE)
Output:
Console Output
Final Directory after running the above code:
Final state of Directory
A Practical Guide to MLOps in AWS Sagemaker — Part I | by Akash Agnihotri | Towards Data Science | This guide is the result of my own frustration in trying to find complete end-to-end coverage of model development, evaluation, and deployment on AWS. All the guides and tutorials I saw out there only cover part of the picture and never fully connect the dots. I wanted to write something that will help people understand the complete work that goes into building a model and deploying it such that it can be accessed by front-end developers on their websites and apps.
So, let us get started!
I have divided this guide into 2 parts.
1. Model Development and Evaluation using AWS Sagemaker Studio.
2. Model Deployment using AWS Lambda and REST API’s
Prerequisites:
· AWS Account — The cost to run the entire tutorial will be less than $0.50 so do not worry.
· Understanding of Python — Most of the Machine Learning work today is being done in Python.
· Patience — Failure is the most important prerequisite to success, so keep on trying until it works.
We will set up a Project in Sagemaker Studio to build our development pipeline.
1. Log in to your AWS Account and select Sagemaker from the list of services.
2. Select Sagemaker Studio and use Quickstart to create Studio.
Once the Studio is ready, open it with the user you just created. It might take a few minutes for the application to be created, but once everything is set up, we can create our project. The thing to understand is that we can only create one Studio, but multiple users in that Studio, and every user can create multiple projects.
3. Select Sagemaker components and registries from the left navbar and select Create Projects.
By default, Sagemaker provides templates that can be used to build, evaluate, and host models. We will use one such template and modify it to fit our use case.
4. Select the MLOps template for model development, evaluation, and deployment from the list and create a project.
Once your new project is created, you will find 2 pre-built repositories. The first one defines your model development and evaluation, and the other builds your model into a package and deploys it to an endpoint for consumption by an API. In this guide, we will modify the first template to run our own use case.
5. Clone the first repository so we can modify the files we need.
The use case we will be working on is one of the Customer Churn Models to predict if a customer is likely to unsubscribe to the services in the future. As the idea behind this notebook is to learn model development and deployment in Cloud, I will not go into the data exploration and directly jump into pipeline development.
This is the file structure of the repository we just cloned now let us go over some of the files we will be working with.
· The folder pipelines contains the files needed to create our model development pipeline; by default this pipeline is named abalone.
· pipeline.py defines the components of our pipeline, currently, it is defined with default values, but we will change the code for our use case.
· preprocess.py and evaluate.py define the code that we need to execute for preprocessing and evaluation steps in our pipeline.
· codebuild-buildspec-yml creates and orchestrates the pipeline.
You can add more steps into pipeline.py and corresponding processing files, the templates also have a test folder defined along with a test_pipelines.py file which can be used to build a separate test pipeline.
6. Rename the folder abalone to customer_churn and make the change in the codebuild-buildspec-yml file to reflect that change.
run-pipeline --module-name pipelines.customer_churn.pipeline \
7. We need to download the data into our default AWS s3 bucket for consumption, we can do this using a notebook. Create a new notebook in the repository from the File tab in the studio, select a kernel with a basic data science python package and paste the below code in the cell and run.
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt ./

import os
import boto3
import sagemaker

prefix = 'sagemaker/DEMO-xgboost-churn'
region = boto3.Session().region_name
default_bucket = sagemaker.session.Session().default_bucket()
role = sagemaker.get_execution_role()

RawData = boto3.Session().resource('s3')\
    .Bucket(default_bucket)\
    .Object(os.path.join(prefix, 'data/RawData.csv'))\
    .upload_file('./churn.txt')

print(os.path.join("s3://", default_bucket, prefix, 'data/RawData.csv'))
Now we need to modify the code inside pipeline.py, evaluate.py, and preprocess.py to fit our need.
8. For the sake of the guide, copy the code from the link to update the code in pipeline.py, preprocess.py, and evaluate.py, but make sure to go through the code for a better understanding of the details.
All set, once we update the code in these 3 files, we are ready to run our first pipeline execution, but as we are trying to implement CI/CD template this will automatically be taken care of once we commit and push our code.
9. Select the GIT tab from the side nav bar and select the files you have modified to add to the staging area and commit and push changes to the remote repository.
Now go to the pipeline tab on the project page and select the pipeline you created to check the running executions. You should find one Succeeded job, which was executed automatically when we cloned the repository; the other will be in the Executing state, which you just triggered by pushing code. Double-click on it to check the pipeline diagram and more details.
Hurray!! Congrats you just executed your first training job.
Unless something goes wrong, you should see your job Succeeded, but remember if it were easy anyone would do it. Failure is the first step to success.
Once the pipeline completes it will create a model and add it to your model group, as we have added a model approval condition as “Manual” in the pipeline we will need to select the model and approve it manually to create an Endpoint that can be used for inference.
10. Go to the Model Group tab on your project home page and select the model which has been created, you can check the Metrics page to review the results of the evaluation phase.
11. If you are satisfied with the metrics, you can select the Approval option in the top right corner to approve the model.
Here is where our second repository comes into the picture: once you approve the model, the deployment pipeline defined in the second repository will execute to deploy and host a new ENDPOINT which we can use to make inferences from our API.
I have tried to keep this guide to the point of using Sagemaker as it is long anyways and there is still a part 2 to come. The objective here is to give a quick overview of the different components of Sagemaker through implementing a simple project. My suggestion to readers would be not to follow the guide step by step and experiment with your own ideas and steps, you will fail often but you will learn a lot, and that is the agenda. Hope you enjoy working through this guide as much as I have enjoyed putting it together. Feel free to drop any suggestion or feedback in comments, would love to hear them.
A Practical Guide to MLOps in AWS Sagemaker — Part II
C++ Program to Implement Bubble Sort | Bubble Sort is a comparison-based sorting algorithm. In this algorithm, adjacent elements are compared and swapped to put the sequence in the correct order. This algorithm is simpler than other algorithms, but it has some drawbacks as well. It is not suitable for large data sets, as it takes a long time to sort them.
Time Complexity: O(n) for best case, O(n^2) for average and worst case
Space Complexity: O(1)
Input − A list of unsorted data: 56 98 78 12 30 51
Output − Array after Sorting: 12 30 51 56 78 98
Input: An array of data, and the total number in the array
Output: The sorted Array
Begin
for i := 0 to size-1 do
flag := 0;
for j:= 0 to size –i – 1 do
if array[j] > array[j+1] then
swap array[j] with array[j+1]
flag := 1
done
if flag ≠ 1 then
break the loop.
done
End
#include<iostream>
using namespace std;
void swapping(int &a, int &b) { //swap the content of a and b
int temp;
temp = a;
a = b;
b = temp;
}
void display(int *array, int size) {
for(int i = 0; i<size; i++)
cout << array[i] << " ";
cout << endl;
}
void bubbleSort(int *array, int size) {
for(int i = 0; i<size; i++) {
int swaps = 0; //flag to detect any swap is there or not
for(int j = 0; j<size-i-1; j++) {
if(array[j] > array[j+1]) { //when the current item is bigger than next
swapping(array[j], array[j+1]);
swaps = 1; //set swap flag
}
}
if(!swaps)
break; // No swap in this pass, so array is sorted
}
}
int main() {
int n;
cout << "Enter the number of elements: ";
cin >> n;
int arr[n]; //create an array with given number of elements
cout << "Enter elements:" << endl;
for(int i = 0; i<n; i++) {
cin >> arr[i];
}
cout << "Array before Sorting: ";
display(arr, n);
bubbleSort(arr, n);
cout << "Array after Sorting: ";
display(arr, n);
}
Enter the number of elements: 6
Enter elements:
56 98 78 12 30 51
Array before Sorting: 56 98 78 12 30 51
Array after Sorting: 12 30 51 56 78 98
React & D3: Adding A Bar Chart. So the last component to refactor in... | by Joe Keohan | Towards Data Science | So the last component to refactor in Streetball Mecca is the bar chart. In my last article, React & D3: Preparing The Data With D3.Nest, I discussed how the data for the chart was organized and formatted using d3.nest. This involved grouping the parks based on their neighborhood location and calculating the overall average of all parks associated with that neighborhood.
Here is the current working version used in this article and the CodeSandbox as well.
As I started to contemplate how best to refactor the D3 bar chart code into the React ecosystem, I leveraged the lessons learned from all the previous refactoring. One of the most important changes was to allow React to manage state, render/update DOM elements, and then allow D3 to do what it does best, such as creating scales and nesting data.
For this refactor I decided to take it one step further and opted to take a non-svg approach to rendering the bar and circle elements. This decision was born initially from many previous frustrations having to manually position g elements. It was also influenced by the fact that I just taught a class on Flexbox and as part of that prep I came across a bookmark I saved over a year ago to the article “Making Data Vis Without SVG Using D3 & Flexbox” by Amber Thomas. I felt like the stars had aligned and all the signs pointed to Flexbox.
Initially, I had ported over the D3 code and refactored it just slightly to work within a single React BarChart component. This D3 within React approach was meant as a first step to understanding how to coerce D3 to work within React. I also kept much of the D3 code intact in order to get something up and running. It however ended up being close to 200 lines of code and started to feel unmanageable.
Now that I was ready to move forward with the second refactor and take a "D3 for the math and React for everything else" approach, the first thing I needed to plan out was the component hierarchy. I used an app called excalidraw to draw out the initial mockup and help work through the thought and design process.
The BarChart is the first component in this hierarchy so it makes sense to start there. Since both the XAxis and Neighborhoods components would require the same D3 scale, I decided to create it here and then pass it down to both as a prop.
const BarChart = (props) => {
  const margin = { right: 25 };
  const xScale = d3.scaleLinear()
    .domain([0, 100])
    .range([0, 1100 - margin.right]);

  return (
    <>
      <div id="axis"><XAxis xScale={xScale} /></div>
      <div id="bar-chart">
        <Neighborhoods {...props} xScale={xScale} />
      </div>
    </>
  );
}
D3 has several methods that can be used for creating scales. Scales take an input (usually a number, date, or category) and return a value (such as a coordinate, color, length, or radius). They come in several categories that range from continuous input/output (d3.scaleLinear()) to discrete input/output (d3.scaleOrdinal()) and somewhere in-between (d3.scaleQuantize()). Scales themselves could be an entire lecture in their own right, so if you're interested in learning more I highly suggest you read d3indepth.com.
The scale I needed for this chart would be a dynamic, continuous input/output scale so I went with d3.scaleLinear(). All D3 scales require that you define a .domain() and .range(). The domain of values used for this scale is based on the overall rating of the parks. They could potentially be in total disrepair and assigned 0 (lowest value possible) or the creme de la creme of parks with a rating of 100 (highest value possible). Here, for example, is the 100% Playground park located in Canarsie Brooklyn which falls somewhere in between with an overall rating of 56.
The average of those parks was also needed as this would be assigned to the parent neighborhood and I had used D3.nest() to rollup those values as seen in the example below.
If you’re curious about working with d3.nest (which I hope you are), you can read more about how I nested the data in my previous article: Preparing The Data With D3.Nest.
The .range() is based on the pixel space provided to the .bar-group element which is assigned a width of 1100px. I did need to subtract 25px in order to provide a bit of margin between both, the last tick value(100) and the circle and the right border of the .bar-group element.
As you continue to examine more and more D3 code in general, you will see a common pattern of defining a set of margins that are used to define this additional space which is then referenced by the scale but quite possibly throughout the code in key places.
const margin = { right: 25 };
const xScale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 1100 - margin.right]);
Almost any D3 tutorial on scales will provide a similar visual in order to convey the relationship between the domain and range so I thought I’d do the same as well.
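The arithmetic a linear scale performs is just a proportional mapping from the domain onto the range. As an illustration (in Python, since the math is language-agnostic), here is a sketch with this chart's numbers: domain [0, 100] and range [0, 1075] after the 25px margin is subtracted.

```python
def linear_scale(domain, rng):
    """Return a function mapping values from domain to rng proportionally,
    analogous to d3.scaleLinear().domain(domain).range(rng)."""
    (d0, d1), (r0, r1) = domain, rng
    return lambda v: r0 + (v - d0) / (d1 - d0) * (r1 - r0)

x_scale = linear_scale((0, 100), (0, 1100 - 25))

print(x_scale(0))    # left edge of the range
print(x_scale(56))   # pixel position for a park rated 56 (~602)
print(x_scale(100))  # right edge of the range (1075)
```

Every bar width and circle position in the chart comes out of this one mapping, which is why creating the scale once in BarChart and passing it down as a prop keeps the axis, bars, and circles aligned.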
A Little Bit About Axes
If you have ever worked with D3 in the past, using it to create any one of the more standard chart types such as bar, line, or scatterplot, then it’s safe to say you have some familiarity with adding an axis. If, however, you are new to D3, then know that axes require the following steps:
Creating a scale (we did so using d3.scaleLinear())
Setting its orientation: the orientation of an axis is fixed, however, it can be changed using one of the four orientation methods — .axisBottom(), .axisTop(), .axisLeft(), and .axisRight().
Appending the Axis: once the axis has been created it is then appended using .call(xAxis).
The axis also has additional configurations for setting tick values, sizes, and format.
Being that I’ll be using React for the DOM and D3 for where it’s best suited, I decided that for this refactor I would only make use of d3.scaleLinear() and render the ticks using React.
The only prop being passed to the XAxis component was the xScale so I’ll just object de-structure that value to make it clear that this is all that the component needs as it pertains to props.
const Axis = ({xScale}) => {}
For much of the XAxis refactor I used as a reference a beautifully written tutorial on React & D3 by Amelia Wattenberger. She wrote the article to promote her recent book "Full Stack D3 and Data Visualization".
I was, at one point, tempted to buy it and would have done so had I not already owned the following books on D3:
D3.js In Action by Elijah Meeks (both editions)
React & D3v4 by Swizec Teller (along with a video subscription)
Developing A D3 Edge by Chris Viau, Andrew Thornton, Ger Hobbelt and Roland Dunn
D3 Tips and Tricks by Malcom Maclean
Learning D3 Mapping by Thomas Newton and Oscar Villarreal
In Amelia’s article she looped over the array of individual ticks (0, 10, 20, etc.) using .ticks() and assigned each of them an xOffset value based on the returned value from the xScale.
const Axis = ({xScale}) => {
  const ticks = xScale.ticks().map((value) => ({
    value,
    xOffset: xScale(value)
  }))
}
It’s been a while since I’ve worked on such a low level with scales and forgot that they included a .ticks() method that returns an array of the ticks.
For the axis I decided to stay in line with Amelia’s article rendering an svg that uses a nested g element to position the axis to the right 200px. It then maps over the ticks array and positions each one of those g elements which, itself, renders the tick value.
const Axis = ({xScale}) => {
  const ticks = xScale.ticks().map((value) => ({
    value,
    xOffset: xScale(value)
  }))

  return (
    <svg style={{ height: '20px' }}>
      <g transform={'translate(200,0)'}>
        {ticks.map(({ value, xOffset }) => (
          <g key={value} transform={`translate(${xOffset}, 0)`}>
            <text
              key={value}
              style={{
                fontSize: '14px',
                textAnchor: 'middle',
                transform: 'translateY(20px)'
              }}>
              {value}
            </text>
          </g>
        ))}
      </g>
    </svg>
  )
}
Side Note: I was able to render the ticks as divs using Flexbox, but that solution required creating an additional xScale and removing not only an additional 200px but also another 8px for half the font size. Since I felt only one scale was need, I opted to put that solution on the shelf for now.
This component is responsible for mapping over the nestedData array and rendering individual Neighborhood components. It passes down several props to the child component, including the dispatch function which will be passed as an action and which the useReducer uses to update state. If you’re curious about useReducer, take a look at this previous article I wrote: Managing Complex State Transitions With useReducer.
const Neighborhoods = ({ nestedData, dispatch, activeNeighborhood, xScale }) => {
  const neighborhoods = nestedData.map((d, i) => {
    return (
      <Neighborhood
        key={i}
        title={d.key}
        parks={d.value.parks}
        width={xScale(d.value.avg)}
        xScale={xScale}
        neighborhood={d}
        dispatch={dispatch}
        activeNeighborhood={activeNeighborhood}
      />
    );
  });

  return <>{neighborhoods}</>;
};
This component is responsible for rendering the following elements:
neighborhood title
bar — show the average of all parks
circles — one for every park
It also makes use of the following hooks:
useState — used to keep track of its active status, which is used to assign a background color to the entire element when the user clicks and it becomes the active neighborhood
useEffect — used to update global state if it determines to set the activeNeighborhood value to this neighborhood
const Neighborhood = ({ parks, width, xScale, title, dispatch, neighborhood, activeNeighborhood }) => {
  const [active, setActive] = useState(false);

  useEffect(() => {
    activeNeighborhood === title ? setActive(true) : setActive(false);
  }, [activeNeighborhood, title]);

  // more code...
}
The next step is to map over the parks array and create a Circle component. One thing to note is that the element determines how many pixels to position itself from the left using the xScale, but it also needs to subtract 5px. These additional pixels were needed to shift the circle left and center it directly under the axis value.
const circles = parks.map((d, i) => {
  return (
    <Circle
      key={i}
      dispatch={dispatch}
      color={d.color}
      park={d}
      left={xScale(+d.overall) - 5}
    />
  );
});
Let’s break down the return statement in the map of the Neighborhood component as there are a few things happening here.
const neighborhoods = nestedData.map((d, i) => {
  return (
    <Neighborhood
      key={i}
      title={d.key}
      parks={d.value.parks}
      width={xScale(d.value.avg)}
      xScale={xScale}
      neighborhood={d}
      dispatch={dispatch}
      activeNeighborhood={activeNeighborhood}
    />
  );
});
First, the neighborhood element will reference the activeNeighborhood value in state to determine if it needs to highlight its background color in order to indicate it is the active neighborhood as seen below.
I opted to allow the user to click anywhere on the element to trigger the onClick event, which calls the dispatch function and passes it an object containing a type and payload as required by useReducer. You can read more about working with useReducer and actions in this article: React: Managing Complex State Transitions With useReducer.
return (
  <div
    className="neighborhood"
    style={{ background: active ? 'lightyellow' : '' }}
    onClick={() =>
      dispatch({
        type: 'FILTER_ACTIVE_NEIGHBORHOOD',
        payload: { neighborhood }
      })
    }>
    // more code...
  </div>
)
Next, we have to render the neighborhood title and assign it a specific color based on which borough it belongs to.
return (
  <div
    className="neighborhood"
    style={{ background: active ? 'lightyellow' : '' }}
    onClick={() =>
      dispatch({
        type: 'FILTER_ACTIVE_NEIGHBORHOOD',
        payload: { neighborhood }
      })
    }>
    <div className="title" style={{ color: parks[0].boroughColor }}>
      {title}
    </div>
    // more code...
  </div>
)
And finally, the Bars and Circles needed to be rendered. The Bar component requires the width value which was calculated using the xScale along with the entire neighborhood object as key data points are needed by the tooltip.
return (
  <div
    className="neighborhood"
    style={{ background: active ? 'lightyellow' : '' }}
    onClick={() =>
      dispatch({
        type: 'FILTER_ACTIVE_NEIGHBORHOOD',
        payload: { neighborhood }
      })
    }>
    <div className="title" style={{ color: parks[0].boroughColor }}>
      {title}
    </div>
    <div className="bar-group">
      <Bar width={width} neighborhood={neighborhood} />
      {circles}
    </div>
  </div>
)
Flexbox was enabled on the
This component would require the logic needed to render a tooltip when the user hovered over the colored bar. That logic involves adding both onMouseMove and onMouseOut event listeners and updating a local version of state in order to trigger the re-render.
The first step to take was create the following state values:
toolTipCoords — this stores the left and top values to position the tooltip
isActive — this determines if the tooltip is active
const Bar = ({ width, neighborhood }) => {
  const [toolTipCoords, setToolTipCoords] = useState(null)
  const [isActive, setIsActive] = useState(false)
  // more code...
}
Now we add the handler functions for the event listeners. For this I had to do a bit of research as the standard event object’s e.pageY, e.clientY, and e.screenY were returning values that were much greater than the actual position of the mouse. I ended up using e.nativeEvent.offsetX and e.nativeEvent.offsetY and pulled some of that insight right from the React docs themselves:
If you find that you need the underlying browser event for some reason, simply use the nativeEvent attribute to get it. The synthetic events are different from, and do not map directly to, the browser’s native events. For example in onMouseLeave event.nativeEvent will point to a mouseout event.
const Bar = ({ width, neighborhood }) => {
  const [toolTipCoords, setToolTipCoords] = useState(null)
  const [isActive, setIsActive] = useState(false)

  const handleMouseOut = () => setIsActive(false)

  const handleMouseOver = (e) => {
    const top = e.nativeEvent.offsetY - 60;
    const left = e.nativeEvent.offsetX;
    setToolTipCoords({ top, left });
    setIsActive(true);
  };

  // more code...
}
All that was left was to return the DOM elements configured, assign the event listeners and add some conditional logic to render the tooltip only when isActive was true.
const Bar = ({ width, neighborhood }) => {
  const [toolTipCoords, setToolTipCoords] = useState(null)
  const [isActive, setIsActive] = useState(false)

  const handleMouseOut = () => setIsActive(false)

  const handleMouseOver = (e) => {
    const top = e.nativeEvent.offsetY - 60;
    const left = e.nativeEvent.offsetX;
    setToolTipCoords({ top, left });
    setIsActive(true);
  };

  return (
    <>
      <div
        className="bar"
        style={{ width: width }}
        onMouseMove={(e) => handleMouseOver(e)}
        onMouseOut={() => handleMouseOut()}
      ></div>
      {isActive && (
        <ToolTip coords={toolTipCoords} neighborhood={neighborhood} />
      )}
    </>
  );
}
This component required much of the same logic for the tooltip as was needed for the Bar component, along with one additional onClick event listener that filtered the map for the selected park.
The handleClick function required the use of e.stopPropagation() in order to prevent the event from bubbling up to the Neighborhood component where an additional onClick event was configured. I found that without e.stopPropagation(), all parks, from that neighborhood, would be rendered within the map instead of just that single park.
const Circle = ({ color, left, park, dispatch }) => {
  const [toolTipCoords, setToolTipCoords] = useState(null);
  const [isActive, setIsActive] = useState(false);

  const handleClick = (e) => {
    e.stopPropagation()
    dispatch({ type: 'FILTER_ACTIVE_PARK', payload: { park } })
  }

  const handleMouseOut = () => {
    setIsActive(false);
  };

  const handleMouseOver = (e) => {
    const top = e.nativeEvent.offsetY - 90;
    const left = e.clientX - 500;
    setToolTipCoords({ top, left });
  };
I had looked forward to using fancy Flexbox properties for this particular layout, but it turned out I only needed to enable it on the neighborhood and title elements.
Here is the actual HTML layout of the .neighborhood element and its children.
Those elements, along with their corresponding CSS rendered as follows:
As for Flexbox I first enabled it on the .neighborhood element which positioned the title and bar-group horizontally to one another.
.neighborhood {
  display: flex;
  height: 28px;
  margin-bottom: 5px;
}
The .title element was also assigned a display of flex and uses align-items to vertically center the title text.
.title {
  display: flex;
  align-items: center;
  width: 200px;
}
This is the 7th article of my Streetball Mecca series, where I document refactoring the project from its humble beginnings as a D3 visualization over to a React build to manage state and the DOM and use D3 for its unique helper functions.
Thus far, I’ve replaced all instances of D3’s Enter/Update/Exit with either React’s useReducer or useState hooks. useReducer is being used to manage the global state and overall business logic of the app, and useState is used for more localized state as in the tooltips. Although React took the reins here, D3 was used for what it does best; creating scales (d3.scaleLinear), rendering geo-projections (d3.geoMercator/d3.geoPath), and nesting data (d3.nest).
This project has certainly helped validate my opinion that React is by far the better choice for managing state and the overall flow of the application logic. Besides all the native React hooks used, such as useReducer, useState, useRef, and useCallback, I also leveraged a few custom hooks such as useDataApi, useOnClickOutside, and useLocalStorage. My favorite part of the refactor has been working with the React-Spring animation library in order to replace all the D3 transitions. A close runner-up was building a bar chart using DIVs and Flexbox instead of SVGs and G elements, something I’ve wanted to try for some time.
I certainly have a new appreciation for both libraries and look forward to building more interactive dashboards.
Building A ‘Serverless’ Chrome Extension | by Bilal Tahir | Towards Data Science | This is a tutorial on building a Chrome Extension that leverages Serverless architecture. Specifically — we will use Google Cloud Functions in the Back-End of our Chrome Extension to do some fancy Python magic for us.
The Extension we will build is the SummarLight Chrome Extension:
medium.com
The SummarLight Extension takes the text of the current web page you are on (presumably a cool blog on Medium such as mine) and highlights the most important parts of that page/article.
In order to do this, we will setup a UI (a button in this case) which will send the text on our current web page to our Back-End. The ‘Back-End’ in this case will be a Google Cloud Function which will analyze that text and return its Extractive Summary (the most important sentences of that text).
As we can see, the architecture is very straightforward and flexible. You can setup a simple UI like an App or, in this case, a Chrome Extension, and pass any complex work to your serverless functions. You can easily change your logic in the function and re-deploy it to try alternative methods. And finally, you can scale it for as many API calls as needed.
This is not an article on the benefits of serverless, so I will not go into details about the advantages of using it over traditional servers. But usually, a serverless solution can be much cheaper and more scalable (but not always...it depends on your use case).
Here is a good guide on the setup of a Chrome Extension:
www.freecodecamp.org
And all the code for the SummarLight Extension can be found here:
github.com
The main.py file in the root directory is where we define our Google Cloud Function. The extension_bundle folder has all the files that go into creating the Chrome Extension.
Google Cloud Function
I chose Google instead of AWS Lambda because I had some free credits (thanks Google!) but you can easily do it with AWS as well. It was a huge plus for me that they had just released Google Cloud Functions for Python as I do most of my data crunching in that beautiful language.
You can learn more about deploying Google Cloud Functions here:
cloud.google.com
I highly recommend using the gcloud SDK and starting with their hello_world example. You can edit the function in the main.py file they provide for your needs. Here is my function:
import sys
import json

import requests
from flask import escape
from gensim.summarization import summarize


def read_article(file_json):
    article = ''
    filedata = json.dumps(file_json)
    if len(filedata) < 100000:
        article = filedata
    return article


def generate_summary(request):
    request_json = request.get_json(silent=True)
    sentences = read_article(request_json)
    summary = summarize(sentences, ratio=0.3)
    summary_list = summary.split('.')
    for i, v in enumerate(summary_list):
        summary_list[i] = v.strip() + '.'
    summary_list.remove('.')
    return json.dumps(summary_list)
Pretty straightforward. I receive some text via the read_article() function and then, using the awesome Gensim library, I return a summary of that text. The Gensim summarize function works by ranking all the sentences in order of importance. In this case, I have chosen to return the top 30% of the most important sentences. This will highlight the top third of the article/blog.
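Gensim's summarize implements a TextRank-style algorithm under the hood; as a rough illustration of the general idea of ranking sentences and keeping a fraction of the top ones (this toy frequency-based version is my own sketch, not Gensim's actual algorithm), it could look like this:

```python
from collections import Counter
import math


def toy_extractive_summary(text, ratio=0.3):
    """Toy extractive summarizer: keep the top `ratio` fraction of
    sentences, scored by average word frequency (illustrative only)."""
    # Naive sentence split on periods
    sentences = [s.strip() for s in text.split('.') if s.strip()]

    # Count how often each word appears across the whole text
    freq = Counter(w.lower() for s in sentences for w in s.split())

    # Score each sentence by the average frequency of its words
    def score(s):
        words = s.split()
        return sum(freq[w.lower()] for w in words) / len(words)

    # Keep the top-ranked sentences, then restore original order
    top_n = max(1, math.ceil(len(sentences) * ratio))
    ranked = set(sorted(sentences, key=score, reverse=True)[:top_n])
    return [s for s in sentences if s in ranked]
```

The selected sentences come back in their original order, mirroring how the real function highlights sentences in place rather than reordering them.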
Alternative Approaches: I tried different methods for summarization including using Glove Word Embeddings but the results were not that much better compared to Gensim (especially considering the increased processing compute/time because of loading in those massive embeddings). There is still a lot of room for improvement here though. This is an active area of research and better text summarization approaches are being developed:
nlpprogress.com
Once we are good with the function we can deploy it and it will be available at an HTTP endpoint which we can call from our App/Extension.
Extension Bundle
Now for the Front-End. To start, we need a popup.html file. This will deal with the UI part. It will create a menu with a button.
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
    <ul>
      <li>
        <a><button id="clickme" class="dropbtn">Highlight Summary</button></a>
        <script type="text/javascript" src="popup.js" charset="utf-8"></script>
      </li>
    </ul>
  </body>
</html>
As we can see, popup.html loads popup.js, which attaches a click listener to the ‘Highlight Summary’ button. That listener in turn calls the summarize function:
function summarize() {
  chrome.tabs.executeScript(null, { file: "jquery-2.2.js" }, function() {
    chrome.tabs.executeScript(null, { file: "content.js" });
  });
}

document.getElementById('clickme').addEventListener('click', summarize);
The summarize function calls the content.js script (yeah yeah I know we could have avoided this extra step...).
alert("Generating summary highlights. This may take up to 30 seconds depending on length of article.");

function unicodeToChar(text) {
  return text.replace(/\\u[\dA-F]{4}/gi, function (match) {
    return String.fromCharCode(parseInt(match.replace(/\\u/g, ''), 16));
  });
}

// capture all text
var textToSend = document.body.innerText;

// summarize and send back
const api_url = 'YOUR_GOOGLE_CLOUD_FUNCTION_URL';
fetch(api_url, {
  method: 'POST',
  body: JSON.stringify(textToSend),
  headers: {
    'Content-Type': 'application/json'
  }
}).then(data => {
  return data.json();
}).then(res => {
  $.each(res, function(index, value) {
    value = unicodeToChar(value).replace(/\\n/g, '');
    document.body.innerHTML = document.body.innerHTML
      .split(value)
      .join('<span style="background-color: #fff799;">' + value + '</span>');
  });
}).catch(error => console.error('Error:', error));
Here is where we grab the text of the page we are currently on (document.body.innerText) and send it to our Google Cloud Function via the Fetch API; the unicodeToChar function is used later to clean up the returned sentences. You can add your own HTTP endpoint url in the api_url variable for this.
Again, leveraging Fetch, we get back a Promise that resolves to the summary generated by our serverless function. Once it resolves, we loop through the HTML content of our page and highlight the sentences from our summary.
Since it can take a little while for all this processing to be done, we add an alert at the top of the page which will indicate this (“Generating summary highlights. This may take up to 30 seconds depending on length of article.").
Finally — we need to create a manifest.json file that is needed to publish the Extension:
{
  "manifest_version": 2,
  "name": "SummarLight",
  "version": "0.7.5",
  "permissions": ["activeTab", "YOUR_GOOGLE_CLOUD_FUNCTION_URL"],
  "description": "Highlights the most important parts of posts/stories/articles!",
  "icons": {
    "16": "icon16.png",
    "48": "icon48.png",
    "128": "icon128.png"
  },
  "browser_action": {
    "default_icon": "tab-icon.png",
    "default_title": "Highlight the important stuff",
    "default_popup": "popup.html"
  }
}
Notice the permissions field. We have to add our Google Cloud Function URL here to make sure we do not get a CORS error when we call our function via the Fetch API. We also fill out details like the name/description and icons to be displayed for our Chrome Extension on the Google Store.
And that’s it! We have created a Chrome Extension that leverages a serverless backbone aka Google Cloud Function. The end effect is something like this:
It’s a simple but effective way to build really cool Apps/Extensions. Think of some of the stuff you have done in Python. Now you can just hook up your scripts to a button in an Extension/App and make a nice product out of it. All without worrying about any servers or configurations.
Here is the Github Repo: https://github.com/btahir/summarlight
And you can use the Extension yourself. It is live on the Google Store here:
chrome.google.com
Please share your ideas for Extensions (or Apps) leveraging Google Cloud Functions in the comments. :)
Flutter - Themes - GeeksforGeeks | 23 Feb, 2022
Themes are an integral part of UI for any application. Themes are used to design the fonts and colors of an application to make it more presentable. In Flutter, the Theme widget is used to add themes to an application. One can use it either for a particular part of the application like buttons and navigation bar or define it in the root of the application to use it throughout the entire app.
Constructor of Theme class:
const Theme(
{Key? key,
required ThemeData data,
bool isMaterialAppTheme: false,
required Widget child}
)
Properties of Theme Widget:
child: The child property takes in a widget as the object to show below the Theme widget in the widget tree.
data: This property takes in ThemeData class as the object to specify the styling, colors and typography to be used.
isMaterialAppTheme: This property takes in a boolean (final) as the object. If it is set to true, then the theme uses Material Design.
In this article, we will look into the same widget in detail by creating a simple app with the Theme widget. To do so follow the below steps:
Create a theme
Create a container to apply the theme
Use the same theme in part of the application (or, the entire application.)
Now, let’s look into the above steps in detail.
Use the Theme widget to create a theme. Some of the properties that can be used in themes are given below:
TextTheme
brightness
primaryColor
accentColor
fontFamily
A simple theme would look like below:
Dart
MaterialApp(
  title: title,
  theme: ThemeData(
    // UI
    brightness: Brightness.dark,
    primaryColor: Colors.lightBlue[800],
    accentColor: Colors.cyan[600],

    // font
    fontFamily: 'Georgia',

    // text style
    textTheme: TextTheme(
      headline1: TextStyle(fontSize: 72.0, fontWeight: FontWeight.bold),
      headline6: TextStyle(fontSize: 36.0, fontStyle: FontStyle.italic),
      bodyText2: TextStyle(fontSize: 14.0, fontFamily: 'Hind'),
    ),
  ),
);
In Flutter a simple container can be defined as below:
Dart
Container(
  color: Theme.of(context).accentColor,
  child: Text(
    'Hello Geeks!',
    style: Theme.of(context).textTheme.headline6,
  ),
);
To override the default theme of a widget in Flutter, one can wrap the widget inside the Theme widget. A ThemeData() instance can be created and passed to the respective widget as shown below:
Dart
Theme(
  data: ThemeData(
    accentColor: Colors.yellow,
  ),
  child: FloatingActionButton(
    onPressed: () {},
    child: Icon(Icons.add),
  ),
);
You can extend the same Theme app-wide using the copyWith() method as shown below:
Dart
Theme(
  data: Theme.of(context).copyWith(accentColor: Colors.yellow),
  child: FloatingActionButton(
    onPressed: null,
    child: Icon(Icons.add),
  ),
);
Complete Source Code:
Dart
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final appName = 'GeeksForGeeks';
    return MaterialApp(
      title: appName,
      theme: ThemeData(
        brightness: Brightness.light,
        primaryColor: Colors.green,
        accentColor: Colors.deepOrangeAccent,
        fontFamily: 'Georgia',
        // text styling
        textTheme: TextTheme(
          headline1: TextStyle(fontSize: 72.0, fontWeight: FontWeight.bold),
          headline6: TextStyle(fontSize: 36.0, fontStyle: FontStyle.italic),
          bodyText2: TextStyle(fontSize: 14.0, fontFamily: 'Hind'),
        ),
      ),
      home: MyHomePage(
        title: appName,
      ),
    );
  }
}

class MyHomePage extends StatelessWidget {
  final String title;

  MyHomePage({Key key, @required this.title}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(title),
      ),
      body: Center(
        child: Container(
          color: Theme.of(context).accentColor,
          child: Text(
            'Hello Geeks!',
            style: Theme.of(context).textTheme.headline6,
          ),
        ),
      ),
      floatingActionButton: Theme(
        data: Theme.of(context).copyWith(
          colorScheme:
              Theme.of(context).colorScheme.copyWith(secondary: Colors.red),
        ),
        child: FloatingActionButton(
          onPressed: null,
          child: Icon(Icons.arrow_circle_up),
        ),
      ),
    );
  }
}
Output:
Java program to convert a string to lowercase and uppercase.
import java.lang.*;
public class StringDemo {
public static void main(String[] args) {
// converts all upper case letters in to lower case letters
String str1 = "SELF LEARNING CENTER";
System.out.println("string value = " + str1.toLowerCase());
str1 = "TUTORIALS POINT";
System.out.println("string value = " + str1.toLowerCase());
// converts all lower case letters in to upper case letters
String str2 = "This is TutorialsPoint";
System.out.println("string value = " + str2.toUpperCase());
str2 = "www.tutorialspoint.com";
System.out.println("string value = " + str2.toUpperCase());
}
}
string value = self learning center
string value = tutorials point
string value = THIS IS TUTORIALSPOINT
string value = WWW.TUTORIALSPOINT.COM
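One detail worth knowing: the no-argument toLowerCase()/toUpperCase() use the JVM's default locale, which can give surprising results in some locales (the classic example is the Turkish dotted/dotless "i"). A small sketch of the locale-explicit overloads (the class name LocaleCaseDemo is just for illustration):

```java
import java.util.Locale;

public class LocaleCaseDemo {
   public static void main(String[] args) {
      String str = "Tutorials Point";
      // Passing an explicit Locale makes the conversion predictable,
      // which is safer for machine-readable strings.
      System.out.println(str.toUpperCase(Locale.ROOT));
      System.out.println(str.toLowerCase(Locale.ROOT));
   }
}
```

For user-visible text the default-locale versions shown above are usually what you want; for identifiers, keys, or protocol strings, prefer the Locale.ROOT overloads.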
Arrays in Go - GeeksforGeeks | 19 Nov, 2019
Arrays in Golang or Go programming language are much similar to other programming languages. In a program, sometimes we need to store a collection of data of the same type, like a list of student marks. Such a collection is stored in a program using an array. An array is a fixed-length sequence that is used to store homogeneous elements in the memory. Due to their fixed length, arrays are not as popular as slices in the Go language. In an array, you are allowed to store zero or more than zero elements. The elements of the array are indexed by using the [] index operator with their zero-based position, meaning the index of the first element is array[0] and the index of the last element is array[len(array)-1].
In Go language, arrays are created in two different ways:
Using var keyword: In Go language, an array is created using the var keyword of a particular type with name, size, and elements.
Using shorthand declaration: In Go language, arrays can also be declared using a shorthand declaration. It is more flexible than the var declaration.

Using var keyword:
Syntax:
var array_name [length]Type
or
var array_name = [length]Type{item1, item2, item3, ...itemN}
Important Points:
In Go language, arrays are mutable, so you can use the array[index] syntax on the left-hand side of an assignment to set the element of the array at the given index:
array_name[index] = element
You can access the elements of the array by using the index value or by using for loop.
In Go language, the array type is one-dimensional.
The length of the array is fixed and unchangeable.
You are allowed to store duplicate elements in an array.
Example:
// Go program to illustrate how to
// create an array using the var keyword
// and accessing the elements of the
// array using their index value
package main

import "fmt"

func main() {

	// Creating an array of string type
	// Using var keyword
	var myarr [3]string

	// Elements are assigned using index
	myarr[0] = "GFG"
	myarr[1] = "GeeksforGeeks"
	myarr[2] = "Geek"

	// Accessing the elements of the array
	// Using index value
	fmt.Println("Elements of Array:")
	fmt.Println("Element 1: ", myarr[0])
	fmt.Println("Element 2: ", myarr[1])
	fmt.Println("Element 3: ", myarr[2])
}
Output:
Elements of Array:
Element 1: GFG
Element 2: GeeksforGeeks
Element 3: Geek
Using shorthand declaration:
Syntax:
array_name:= [length]Type{item1, item2, item3,...itemN}
Example:
// Go program to illustrate how to create
// an array using shorthand declaration
// and accessing the elements of the
// array using for loop
package main

import "fmt"

func main() {

	// Shorthand declaration of array
	arr := [4]string{"geek", "gfg", "Geeks1231", "GeeksforGeeks"}

	// Accessing the elements of
	// the array Using for loop
	fmt.Println("Elements of the array:")

	for i := 0; i < 3; i++ {
		fmt.Println(arr[i])
	}
}
Output:
Elements of the array:
geek
gfg
Geeks1231
As we already know that arrays are 1-D but you are allowed to create a multi-dimensional array. Multi-Dimensional arrays are the arrays of arrays of the same type. In Go language, you can create a multi-dimensional array using the following syntax:
Array_name[Length1][Length2]..[LengthN]Type
You can create a multidimensional array using Var keyword or using shorthand declaration as shown in the below example.
Note: In multi-dimension array, if a cell is not initialized with some value by the user, then it will initialize with zero by the compiler automatically. There is no uninitialized concept in the Golang.
Example:
// Go program to illustrate the
// concept of multi-dimension array
package main

import "fmt"

func main() {

	// Creating and initializing
	// 2-dimensional array
	// Using shorthand declaration
	// Here the (,) Comma is necessary
	arr := [3][3]string{{"C#", "C", "Python"},
		{"Java", "Scala", "Perl"},
		{"C++", "Go", "HTML"},
	}

	// Accessing the values of the
	// array Using for loop
	fmt.Println("Elements of Array 1")
	for x := 0; x < 3; x++ {
		for y := 0; y < 3; y++ {
			fmt.Println(arr[x][y])
		}
	}

	// Creating a 2-dimensional
	// array using var keyword
	// and initializing a multi
	// -dimensional array using index
	var arr1 [2][2]int
	arr1[0][0] = 100
	arr1[0][1] = 200
	arr1[1][0] = 300
	arr1[1][1] = 400

	// Accessing the values of the array
	fmt.Println("Elements of array 2")
	for p := 0; p < 2; p++ {
		for q := 0; q < 2; q++ {
			fmt.Println(arr1[p][q])
		}
	}
}
Output:
Elements of Array 1
C#
C
Python
Java
Scala
Perl
C++
Go
HTML
Elements of array 2
100
200
300
400
In an array, if the array is not initialized explicitly, then the default value of its elements is 0.
Example:
// Go program to illustrate an array
package main

import "fmt"

func main() {

	// Creating an array of int type
	// which stores two elements
	// Here, we do not initialize the
	// array so the value of the array
	// is zero
	var myarr [2]int

	fmt.Println("Elements of the Array :", myarr)
}
Output:
Elements of the Array : [0 0]
In an array, you can find the length of the array using the len() method as shown below:
Example:
// Go program to illustrate how to find
// the length of the array
package main

import "fmt"

func main() {

	// Creating array
	// Using shorthand declaration
	arr1 := [3]int{9, 7, 6}
	arr2 := [...]int{9, 7, 6, 4, 5, 3, 2, 4}
	arr3 := [3]int{9, 3, 5}

	// Finding the length of the
	// array using len method
	fmt.Println("Length of the array 1 is:", len(arr1))
	fmt.Println("Length of the array 2 is:", len(arr2))
	fmt.Println("Length of the array 3 is:", len(arr3))
}
Output:
Length of the array 1 is: 3
Length of the array 2 is: 8
Length of the array 3 is: 3
In an array, if an ellipsis ‘‘...’’ appears in place of the length, then the length of the array is determined by the initialized elements. As shown in the below example:
Example:
// Go program to illustrate the
// concept of ellipsis in an array
package main

import "fmt"

func main() {

	// Creating an array whose size is determined
	// by the number of elements present in it
	// Using ellipsis
	myarray := [...]string{"GFG", "gfg", "geeks",
		"GeeksforGeeks", "GEEK"}

	fmt.Println("Elements of the array: ", myarray)

	// Length of the array
	// is determined by
	// Using len() method
	fmt.Println("Length of the array is:", len(myarray))
}
Output:
Elements of the array: [GFG gfg geeks GeeksforGeeks GEEK]
Length of the array is: 5
In an array, you are allowed to iterate over the range of the elements of the array. As shown in the below example:
Example:
// Go program to illustrate
// how to iterate over an array
package main

import "fmt"

func main() {

    // Creating an array whose size
    // is represented by the ellipsis
    myarray := [...]int{29, 79, 49, 39, 20, 49, 48, 49}

    // Iterate over the array using a for loop
    for x := 0; x < len(myarray); x++ {
        fmt.Printf("%d\n", myarray[x])
    }
}
Output:
29
79
49
39
20
49
48
49
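Alongside the index-based loop above, Go's range clause is the idiomatic way to iterate over an array. A small sketch (not part of the original article) showing both the index and the element value:

```go
// Iterating over an array with the range clause,
// which yields the index and a copy of each element.
package main

import "fmt"

func main() {
	myarray := [...]int{29, 79, 49}

	// i is the index, v is a copy of myarray[i]
	for i, v := range myarray {
		fmt.Printf("index %d: value %d\n", i, v)
	}
}
```

Note that `v` is a copy of the element, so assigning to it does not modify the array; use `myarray[i]` for in-place updates.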
In the Go language, an array is a value type, not a reference type. So when an array is assigned to a new variable, changes made through the new variable do not affect the original array, as shown in the below example:
Example:
// Go program to illustrate a value type array
package main

import "fmt"

func main() {

    // Creating an array whose size
    // is represented by the ellipsis
    my_array := [...]int{100, 200, 300, 400, 500}
    fmt.Println("Original array(Before):", my_array)

    // Creating a new variable
    // and initializing it with my_array
    new_array := my_array

    fmt.Println("New array(before):", new_array)

    // Change the value at index 0 to 500
    new_array[0] = 500

    fmt.Println("New array(After):", new_array)
    fmt.Println("Original array(After):", my_array)
}
Output:
Original array(Before): [100 200 300 400 500]
New array(before): [100 200 300 400 500]
New array(After): [500 200 300 400 500]
Original array(After): [100 200 300 400 500]
If the element type of an array is comparable, then the array type is also comparable, so we can directly compare two arrays of the same type using the == operator, as shown in the below example:
Example:
// Go program to illustrate
// how to compare two arrays
package main

import "fmt"

func main() {

    // Arrays
    arr1 := [3]int{9, 7, 6}
    arr2 := [...]int{9, 7, 6}
    arr3 := [3]int{9, 5, 3}

    // Comparing arrays using the == operator
    fmt.Println(arr1 == arr2)
    fmt.Println(arr2 == arr3)
    fmt.Println(arr1 == arr3)

    // This will give an error because the
    // types of arr1 and arr4 do not match
    /*
        arr4 := [4]int{9, 7, 6}
        fmt.Println(arr1 == arr4)
    */
}
Output:
true
false
false
Intelligent Visual Data Discovery with Lux — A Python library | by Parul Pandey | Towards Data Science

“Exploratory data analysis is an attitude, a state of flexibility, a willingness to look for those things that we believe are not there, as well as those that we believe to be there.” — John W. Tukey
The importance and necessity of data visualization in data science cannot be emphasized enough. The fact that a picture is worth a thousand words can be aptly applied to any project's life cycle associated with data. However, a lot of times, the tools that enable these visualizations aren’t intelligent enough. This essentially means that while we have hundreds of visualization libraries, most of them require users to write a substantial amount of code for plotting even a single graph. This shifts the focus on the mechanics of the visualization rather than the critical relationships within the data.
What if there were a tool that could simplify data exploration by recommending relevant visualizations to the users? There is a new library in the town called Lux 💡 , and it has been developed to address these very questions.
This article is based upon @Doris lee’s session on Lux during the Rise Camp 2020. Special thanks to Doris for allowing me to use the resources and images from the slides.
Today data analysts have access to multiple tools for data exploration. While interactive Jupyter notebooks enable iterative experimentation, there are also powerful BI tools like Power BI and Tableau, which enable advanced exploration by a mere point and click. However, even with the advent of these incredibly powerful tools, there are still challenges that hinder data exploration flow. This is especially true when we go from a question in our minds to discovering actionable insights. Let’s look at the three major identifiable obstacles that data analysts face currently:
While programming tools provide flexibility, they are mostly inaccessible to people with less programming experience. On the other hand, point and click tools are simple to use but have limited flexibility and are hard to customize.
Secondly, to create a visualization, we first need to think about what the visualization should look like, with all its specifications. We then need to translate all these specification details into code. The figure above shows how a substantial amount of code is required in two popular Python libraries — Matplotlib and Plotly — just to output a mere bar graph. This again hinders data exploration, especially when users only have a vague idea of what they're looking for.
Every EDA requires continuous cycles of trial and error. A user has to experiment with multiple visualizations before settling for the final one. There are chances that analysts might miss out on important insights that are in their data sets. Another common issue is that analysts might not know what set of operations they should perform on their data to get to the desired insights and often lose track or get stuck in their analysis.
There is an apparent gap between how people reason and think about their data and what actually needs to be done to the data to get to these insights. Lux is a step to address these possible gaps.
Lux is a Python library that helps users explore and discover meaningful insights from their data by automating certain data exploration aspects. It is an effort towards bridging the gap between code and interactive interfaces. Lux features an intent language that allows users to specify their analysis intent in a sloppy manner, and it automatically infers the unspecified details and determines appropriate visualization mappings.
The goal of Lux is to make it easier for data scientists to explore their data even when the user doesn’t have a clear idea of what they’re looking for.
Lux brings the power of interactive visualizations directly into Jupyter notebooks to bridge the gap between code and interactive interfaces.
Lux features a powerful intent language that allows users to specify their analysis interests to lower the programming cost.
Lux provides visualization recommendations of data frames automatically to users.
We now have a fair idea of how Lux tries to plugin the common problems users face when exploring data. Let’s now look at how we can use the Lux library with an example. Since the idea is just to provide a quick demo, I’ll use a very simple example. Once you have a fair idea, you’ll be able to use it with the dataset of your choice.
The palmer penguins dataset has currently become a favorite of the data science community and is a drop-in replacement for the overused Iris dataset. The dataset consists of data for 344 penguins. The data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER. Let’s start by installing and importing the Lux library. You can follow along with this tutorial in a Jupyter notebook via the Binder.
pip install lux-api

# Activating extension for Jupyter notebook
jupyter nbextension install --py luxwidget
jupyter nbextension enable --py luxwidget

# Activating extension for Jupyter lab
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install luxwidget
For more details like using Lux with SQL engine, read the documentation, which is pretty robust and contains many hands-on examples.
Once the Lux library has been installed, we’ll import it along with our dataset.
import pandas as pd
import lux

df = pd.read_csv('penguins_size.csv')
df.head()
A nice thing about Lux is that it can be used as-is with a pandas dataframe and doesn't require any modifications to the existing syntax. For instance, if you drop any column or row, the recommendations are regenerated based on the updated dataframe. All the nice functionality that we get from pandas, like dropping columns, importing CSVs, etc., is also preserved. Let's get an overview of the data set.
df.info()
There are some missing values in the dataset. Let’s get rid of those.
df = df.dropna()
Our data is now in memory, and we are all set to see how Lux can ease the EDA process for us.
df
When we print out the data frame, we see the default pandas table display. We can toggle it to get a set of recommendations generated automatically by Lux.
The recommendations in lux are organized by three different tabs, which represent potential next steps that users can take in their exploration.
The Correlation Tab: shows a set of pairwise relationships between quantitative attributes ranked by the most correlated to the least correlated one.
We can see that the penguin flipper length and body mass show a positive correlation. Penguins’ culmen length and depth also show some interesting patterns, and it appears that there is a negative correlation. To be specific, the culmen is the upper ridge of a bird’s bill.
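What the Correlation tab surfaces automatically can be approximated by hand. Below is a minimal stdlib-only sketch of the Pearson correlation coefficient that such a tab ranks by; the toy flipper-length and body-mass numbers are invented for illustration and are not the real penguin data:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy stand-ins for flipper length (mm) and body mass (g)
flipper = [181, 186, 195, 210, 230]
mass = [3750, 3800, 3250, 4400, 5700]
print(pearson(flipper, mass))  # strongly positive
```

Lux computes and ranks these pairwise coefficients for every pair of quantitative columns so you don't have to.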
The Distribution Tab shows a set of univariate distributions ranked by the most skewed to the least skewed.
The Occurrence Tab shows a set of bar charts that can be generated from the data set.
This tab shows there are three different species of penguins — Adelie, Chinstrap, and Gentoo. There are also three different islands — Torgersen, Biscoe, and Dream; and both male and female species have been included in the dataset.
Beyond the basic recommendations, we can also specify our analysis intent. Let's say that we want to find out how the culmen length varies with the species. We can set the intent here as ['culmen_length_mm', 'species']. When we print out the data frame again, we can see that the recommendations are steered toward what is relevant to the intent that we've specified.
df.intent = ['culmen_length_mm','species']
df
On the left-hand side in the image below, what we see is Current Visualization corresponding to the attributes that we have selected. On the right-hand side, we have Enhance i.e. what happens when we add an attribute to the current selection. We also have the Filter tab which adds filter while fixing the selected variable.
If you closely look at the correlations within species, culmen length and depth are positively correlated. This is a classic example of Simpson’s paradox.
Finally, you can get a pretty clear separation between all three species by looking at flipper length versus culmen length.
Lux also makes it pretty easy to export and share the generated visualizations. The visualizations can be exported into a static HTML as follows:
df.save_as_html('file.html')
We can also access the set of recommendations generated for the data frames via the properties recommendation. The output is a dictionary, keyed by the name of the recommendation category.
df.recommendation
Not only can we export visualization as HTML but also as code. The GIF below shows how you can view the first bar chart's code in the Occurrence tab. The visualizations can then be exported to code in Altair for further edits or as Vega-Lite specification. More details can be found in the documentation.
The demo above was just a simple way to get started. Lux’s Github Repository contains a lot of resources and interactive binder notebooks on how to use Lux. This could be a great place to start. Additionally, there is also detailed documentation.
In the above article, we saw how a data analysis workflow in a Jupyter notebook could be completely transformed by using Lux. Lux offers a lot more visual richness to encourage meaningful data exploration. Lux is still under active development, and its maintainers are looking to hear from users who are using or might be interested in using Lux. This will help the team understand how they could improve the tool for you.
Difference between continue and break statements in Java

As we know, in programming, execution of code is done line by line. In order to alter this flow, Java provides two statements, break and continue, which are mainly used to skip some specific code at a specific line.
Following are the important differences between continue and break.
JavaTester.java
public class JavaTester{
public static void main(String args[]){
// Illustrating break statement (execution stops when value of i becomes to 4.)
System.out.println("Break Statement\n");
for(int i=1;i<=5;i++){
if(i==4) break;
System.out.println(i);
}
// Illustrating continue statement (execution skipped when value of i becomes to 1.)
System.out.println("Continue Statement\n");
for(int i=1;i<=5;i++){
if(i==1) continue;
System.out.println(i);
}
}
}
Break Statement
1
2
3
Continue Statement
2
3
4
5
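Both statements can also target an outer loop via a label, a detail the article above doesn't cover. A small sketch (my own example, not from the original):

```java
// Labeled break applies to the labeled (outer) loop
// instead of the innermost one.
public class LabeledDemo {
    public static void main(String[] args) {
        outer:
        for (int i = 1; i <= 3; i++) {
            for (int j = 1; j <= 3; j++) {
                if (i == 2 && j == 2) {
                    // leaves BOTH loops, not just the inner one
                    break outer;
                }
                System.out.println(i + "," + j);
            }
        }
    }
}
```

An unlabeled break at the same spot would only end the inner loop and the outer loop would continue with i = 3; `continue outer;` works analogously, jumping to the next iteration of the outer loop.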
Count numbers upto N which are both perfect square and perfect cube in C++

We are given a number N. The goal is to count the numbers upto N that are perfect squares as well as perfect cubes. For example, 1 and 64 are both perfect squares and perfect cubes.
We will use sqrt() to calculate square root and cbrt() to calculate cube root of a number.
Let’s understand with examples.
Input − N=100
Output − Count of numbers that are perfect squares and cubes − 2
Explanation − 1 and 64 are only numbers from 1 to 100 that are both perfect squares and cubes.
Input − N=5000

Output − Count of numbers that are perfect squares and cubes − 4

Explanation − 1, 64, 729 and 4096 are the only numbers from 1 to 5000 that are both perfect squares and cubes (they are exactly the sixth powers 1^6, 2^6, 3^6 and 4^6; note 729 = 27^2 = 9^3).
We take an integer N.

Function getCount(int n) takes N and returns the count of numbers upto N that are both perfect squares and perfect cubes.

Take the initial count as 0.

Starting from i=1 to i=N, if floor(sqrt(i))==ceil(sqrt(i)) then i is a perfect square.

Now check if floor(cbrt(i))==ceil(cbrt(i)); if true, i is also a perfect cube. Increment count.

At the end of the loop, return count as the result.
#include <bits/stdc++.h>
#include <math.h>
using namespace std;
int getCount(int n){
int count=0;
for(int i=1;i<=n;i++){
if(floor(sqrt(i))==ceil(sqrt(i))){
if(floor(cbrt(i))==ceil(cbrt(i))){
count++;
//cout<<i<<" ";
}
}
}
return count;
}
int main(){
int N=100;
cout<<endl<<"Numbers upto N that are perfect squares and perfect cubes:"<<getCount(N);
return 0;
}
If we run the above code it will generate the following output −
Numbers upto N that are perfect squares and perfect cubes:2
Combining logistic regression and decision tree | by Andrzej Szymanski, PhD | Towards Data Science

Logistic regression is one of the most used machine learning techniques. Its main advantages are clarity of results and its ability to explain the relationship between dependent and independent features in a simple manner. It requires comparably less processing power, and is, in general, faster than Random Forest or Gradient Boosting.
However, it has also some serious drawbacks and the main one is its limited ability to resolve non-linear problems. In this article, I will demonstrate how we can improve the prediction of non-linear relationships by incorporating a decision tree into a regression model.
The idea is quite similar to weight of evidence (WoE), a method widely used in finance for building scorecards. WoE takes a feature (continuous or categorical) and splits it into bands to maximise separation between goods and bads (positives and negatives). Decision tree carries out a very similar task, splitting the data into nodes to achieve maximum segregation between positives and negatives. The main difference is that WoE is built separately for each feature, while nodes of decision tree select multiple features at the same time.
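As a rough illustration of the WoE idea described above, the weight of evidence for a band is the log of the share of positives (goods) falling in that band over the share of negatives (bads). The band counts below are invented purely for the example:

```python
import math

def weight_of_evidence(goods_in_band, total_goods, bads_in_band, total_bads):
    # WoE = ln( % of goods in band / % of bads in band )
    pct_goods = goods_in_band / total_goods
    pct_bads = bads_in_band / total_bads
    return math.log(pct_goods / pct_bads)

# Invented counts: 30 of 100 goods and 10 of 100 bads fall in this band,
# so the band carries positive evidence of being "good".
print(weight_of_evidence(30, 100, 10, 100))  # ln(3) ~ 1.0986
```

A band where goods and bads fall at the same rate gets WoE of 0; in scorecard building the bands are chosen so these values separate as much as possible.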
Knowing that the decision tree is good at identifying non-linear relationships between dependent and independent features, we can transform the output of the decision tree (nodes) into a categorical variable and then deploy it in a logistic regression, by transforming each of the categories (nodes) into dummy variables.
In my professional projects, using decision tree nodes in the model would out-perform both logistic regression and decision tree results in 1/3 of cases. However, I have struggled to find any publicly available data which could replicate it. This is probably because the available data contain only a handful of variables, pre-selected and cleansed. There is simply not much to squeeze! It is much easier to find additional dimensions of the relationship between dependent and independent features when we have hundreds or thousands of variables at our disposal.
In the end, I decided to use the data from a banking campaign. Using these data I have managed to get a minor, but still an improvement of combined logistic regression and decision tree over both these methods used separately.
After importing the data I did some cleansing. The code used in this paper is available on GitHub. I have saved the cleansed data into a separate file.
Because of the small frequency, I have decided to oversample the data using SMOTE technique.
import pandas as pd
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

df = pd.read_csv('banking_cleansed.csv')
X = df.iloc[:,1:]
y = df.iloc[:,0]
os = SMOTE(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
columns = X_train.columns
os_data_X, os_data_y = os.fit_sample(X_train, y_train)
os_data_X = pd.DataFrame(data=os_data_X, columns=columns)
os_data_y = pd.DataFrame(data=os_data_y, columns=['y'])
In the next steps I have built 3 models:
decision tree
logistic regression
logistic regression with decision tree nodes
Decision tree
It is important to keep the decision tree depth to a minimum if you want to combine with logistic regression. I’d prefer to keep the decision tree at maximum depth of 4. This already gives 16 categories. Too many categories may cause cardinality problems and overfit the model. In our example, the incremental increase in predictability between depth of 3 and 4 was minor, therefore I have opted for maximum depth = 3.
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
from sklearn.metrics import roc_auc_score

dt = DecisionTreeClassifier(criterion='gini', min_samples_split=200, min_samples_leaf=100, max_depth=3)
dt.fit(os_data_X, os_data_y)
y_pred3 = dt.predict(X_test)
print('Misclassified samples: %d' % (y_test != y_pred3).sum())
print(metrics.classification_report(y_test, y_pred3))
print(roc_auc_score(y_test, y_pred3))
The next step is to convert the nodes into new variable. To do so, we need to code-up the decision tree rules. Luckily, there is a bit of programme which can do it for us. The function below produces a piece of code which is a replication of decision tree split rules.
Now run the code:
tree_to_code(dt,columns)
and output will look like this:
We can now copy and paste the output into our next function, which we can use to create our new categorical variable.
Now we can quickly create a new variable (‘nodes’) and transfer it into dummies.
df['nodes'] = df.apply(tree, axis=1)
df_n = pd.get_dummies(df['nodes'], drop_first=True)
df_2 = pd.concat([df, df_n], axis=1)
df_2 = df_2.drop(['nodes'], axis=1)
After adding the nodes variable, I re-ran the train/test split and oversampled the training data using SMOTE.
X = df_2.iloc[:,1:]
y = df_2.iloc[:,0]
os = SMOTE(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
columns = X_train.columns
os_data_X, os_data_y = os.fit_sample(X_train, y_train)
Now we can run logistic regressions and compare the impact of node dummies on predictability.
Logistic regression excluding nodes dummies
I have created a list of all features excluding the nodes dummies:
nodes = df_n.columns.tolist()
Init = os_data_X.drop(nodes, axis=1).columns.tolist()
and run the logistic regression using the Init list:
from sklearn.linear_model import LogisticRegression

lr0 = LogisticRegression(C=0.001, random_state=1)
lr0.fit(os_data_X[Init], os_data_y)
y_pred0 = lr0.predict(X_test[Init])
print('Misclassified samples: %d' % (y_test != y_pred0).sum())
print(metrics.classification_report(y_test, y_pred0))
print(roc_auc_score(y_test, y_pred0))
Logistic regression with nodes dummies
In the next step I re-run the regression, but this time I have included nodes dummies.
from sklearn.linear_model import LogisticRegression

lr1 = LogisticRegression(C=0.001, random_state=1)
lr1.fit(os_data_X, os_data_y)
y_pred1 = lr1.predict(X_test)
print('Misclassified samples: %d' % (y_test != y_pred1).sum())
print(metrics.classification_report(y_test, y_pred1))
print(roc_auc_score(y_test, y_pred1))
Results comparison
The logistic regression with node dummies has the best performance. Although, the incremental improvement is not massive (especially if compared with decision tree), as I said before, it is hard to squeeze anything extra out data which contain only a handful of pre-selected variables and I can reassure you that in real life the differences can be bigger.
We can scrutinise the models a little bit more by comparing the distribution of positives and negatives across the decile score using Model Lift, which I have presented in my previous article.
First step is to obtain probabilities:
y_pred0b = lr0.predict_proba(X_test[Init])
y_pred1b = lr1.predict_proba(X_test)
Next we need to run the function below:
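The lift function itself appeared as an embedded gist in the original article and is not reproduced here. A minimal stdlib-only sketch, under the assumption that it simply sorts records by predicted probability of the positive class, cuts them into equal-sized bins, and reports each bin's positive rate relative to the overall rate:

```python
def lift_table(y_true, y_prob, bins=10):
    # Sort actual outcomes by descending predicted probability,
    # split into `bins` equal groups, and report each group's
    # positive rate divided by the overall positive rate (the lift).
    pairs = sorted(zip(y_prob, y_true), key=lambda p: p[0], reverse=True)
    overall = sum(y for _, y in pairs) / len(pairs)
    size = len(pairs) // bins
    table = []
    for b in range(bins):
        chunk = pairs[b * size:(b + 1) * size]
        rate = sum(y for _, y in chunk) / len(chunk)
        table.append(rate / overall)  # lift of bin b+1
    return table

# Toy scores: a well-ranked model concentrates positives in top bins
probs = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
actuals = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(lift_table(actuals, probs, bins=5))
```

A lift above 1.0 in the top deciles means the model ranks positives ahead of a random ordering, which is exactly what the decile comparison below inspects.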
Now we can check the differences between these two models. First, let’s evaluate the performance of the initial model without decision tree.
ModelLift0 = lift(y_test,y_pred0b,10)ModelLift0
Model Lift before applying decision tree nodes...
...and next the model with decision tree nodes
ModelLift1 = lift(y_test,y_pred1b,10)ModelLift1
Response in the top 2 deciles of the model with decision tree nodes has improved, and so has the Kolmogorov-Smirnov statistic (KS). Once we translate the lift into financial value, it may turn out that this minimal incremental improvement generates a substantial return in our marketing campaign.
Summarising, combining logistic regression and decision tree is not a well-known approach, but it may outperform the individual results of both decision tree and logistic regression. In the example presented in this article, the differences between the decision tree and the 2nd logistic regression are very negligible. However, in real life, when working on un-polished data, combining decision tree with logistic regression may produce far better results. That was rather a norm in projects I have run in the past. The node variable may not be a magic wand but it is definitely something worth knowing and trying out.
Bootstrap 4 .border-right-0 class

Use the border-right-0 class in Bootstrap to remove the right border.
To remove the right border −
<div class="mystyle border border-right-0">
Rectangle is missing the right border.
</div>
Style the div as −
.mystyle {
width: 350px;
height: 170px;
margin: 10px;
}
You can try to run the following code to implement the border-right- 0 class −
<!DOCTYPE html>
<html lang="en">
<head>
<title>Bootstrap Example</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/js/bootstrap.min.js"></script>
<style>
.mystyle {
width: 350px;
height: 170px;
margin: 10px;
}
</style>
</head>
<body>
<div class="container">
<h2>Heading Two</h2>
<div class="mystyle border border-right-0">Rectangle is missing the right border.</div>
</div>
</body>
</html>
How can I define underlined text in an Android layout xml file?

This example demonstrates how to define underlined text in an Android layout XML file.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:orientation="vertical"
   tools:context=".MainActivity">
   <TextView
      android:id="@+id/textView"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:layout_centerInParent="true"
      android:gravity="center"
      android:padding="16dp"
      android:text="Bottom of the screen"
      android:textAppearance="@style/Base.TextAppearance.AppCompat.Large" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
package app.tutorialspoint.com.sample;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.text.SpannableString;
import android.text.style.UnderlineSpan;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      TextView textView = findViewById(R.id.textView);
      SpannableString content = new SpannableString("Content");
      content.setSpan(new UnderlineSpan(), 0, content.length(), 0);
      textView.setText(content);
   }
}
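As an aside not covered in the steps above, underlining can often be achieved without any Java code by embedding an HTML-style <u> tag directly in a string resource; the platform parses such inline markup when the string is referenced from android:text. The resource name below is made up for illustration:

```xml
<!-- res/values/strings.xml — hypothetical resource name -->
<resources>
    <string name="underlined_label"><u>Bottom of the screen</u></string>
</resources>
```

The TextView would then use android:text="@string/underlined_label" in the layout, with no SpannableString needed. This relies on the string not being used with format arguments, which strip the markup.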
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="app.tutorialspoint.com.sample">
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
Subarray Sum Equals K in C++

Suppose we have an array of integers and an integer k. We need to find the total number of continuous subarrays whose sum is equal to k. So if the nums array is [1, 1, 1] and k is 2, then the output will be 2.
To solve this, we will follow these steps −
define one map called sums, temp := 0, sums[0] := 1 and ans := 0
for i in range 0 to size of the array
   temp := temp + n[i]
   if sums has k - temp, then ans := ans + sums[k - temp]
   increase the value of sums[-temp] by 1
return ans
Let us see the following implementation to get a better understanding −
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
int subarraySum(vector<int>& n, int k) {
unordered_map <int, int> sums;
int temp = 0;
sums[0] = 1;
int ans =0;
for(int i =0;i<n.size();i++){
temp+= n[i];
if(sums.find(k-temp)!=sums.end()){
ans += sums[k-temp];
}
sums[-temp]++;
}
return ans;
}
};
main(){
Solution ob;
vector<int> v = {1,1,1};
cout << (ob.subarraySum(v, 2));
}
Input: [1,1,1], k = 2
Output: 2
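The same prefix-sum-plus-hash-map technique translates compactly to other languages. A Python sketch of it (my own addition, not from the original article), here keyed on the prefix sums directly rather than their negations:

```python
from collections import defaultdict

def subarray_sum(nums, k):
    # Count subarrays summing to k: for each running prefix sum `temp`,
    # every earlier prefix equal to temp - k closes one valid subarray.
    seen = defaultdict(int)
    seen[0] = 1  # empty prefix
    temp = ans = 0
    for x in nums:
        temp += x
        ans += seen[temp - k]
        seen[temp] += 1
    return ans

print(subarray_sum([1, 1, 1], 2))  # 2
```

This is O(n) time and O(n) space, and unlike a sliding window it also handles negative numbers correctly.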
Generate a number such that the frequency of each digit is digit times the frequency in given number - GeeksforGeeks

21 Apr, 2021
Given a number N containing digits from 1 to 9 only. The task is to generate a new number using the number N such that the frequency of each digit in the new number is equal to the frequency of that digit in N multiplied by the digit itself.

Note: The digits in the new number must be in increasing order.

Examples:
Input : N = 312
Output : 122333
Explanation : The output contains digit 1 once, digit 2 twice and digit 3 thrice.

Input : N = 525
Output : 225555555555
Explanation : The output contains digit 2 twice and digit 5 ten times. 5 occurs ten times because its frequency in the given integer is 2.
The idea is to store the count or the frequency of the digits in the given number N using a counting array or hash. Now, for each digit add it to the new number, K number of times where K is equal to its frequency in the counting array multiplied by the digit itself.Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// CPP program to print a number such that the
// frequency of each digit in the new number is
// equal to its frequency in the given number
// multiplied by the digit itself.
#include <bits/stdc++.h>
using namespace std;

// Function to print such a number
void printNumber(int n)
{
    // initializing a hash array
    int count[10] = { 0 };

    // counting frequency of the digits
    while (n) {
        count[n % 10]++;
        n /= 10;
    }

    // printing the new number
    for (int i = 1; i < 10; i++) {
        for (int j = 0; j < count[i] * i; j++)
            cout << i;
    }
}

// Driver code
int main()
{
    int n = 3225;
    printNumber(n);
    return 0;
}
// Java program to print a number such that the
// frequency of each digit in the new number is
// equal to its frequency in the given number
// multiplied by the digit itself.
import java.io.*;

class GFG {

    // Function to print such a number
    static void printNumber(int n)
    {
        // initializing a hash array
        int count[] = new int[10];

        // counting frequency of the digits
        while (n > 0) {
            count[n % 10]++;
            n /= 10;
        }

        // printing the new number
        for (int i = 1; i < 10; i++) {
            for (int j = 0; j < count[i] * i; j++)
                System.out.print(i);
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 3225;
        printNumber(n);
    }
}
// This code is contributed by inder_verma
# Python 3 program to print a number such that the
# frequency of each digit in the new number is
# equal to its frequency in the given number
# multiplied by the digit itself.

# Function to print such a number
def printNumber(n):

    # initializing a hash array
    count = [0] * 10

    # counting frequency of the digits
    while (n):
        count[n % 10] += 1
        n //= 10

    # printing the new number
    for i in range(1, 10):
        for j in range(count[i] * i):
            print(i, end="")

# Driver code
if __name__ == "__main__":
    n = 3225
    printNumber(n)

# This code is contributed by
# ChitraNayal
// C# program to print a number such
// that the frequency of each digit
// in the new number is equal to its
// frequency in the given number
// multiplied by the digit itself.
using System;

class GFG {

    // Function to print such a number
    static void printNumber(int n)
    {
        // initializing a hash array
        int[] count = new int[10];

        // counting frequency of
        // the digits
        while (n > 0) {
            count[n % 10]++;
            n /= 10;
        }

        // printing the new number
        for (int i = 1; i < 10; i++) {
            for (int j = 0; j < count[i] * i; j++)
                Console.Write(i);
        }
    }

    // Driver code
    public static void Main()
    {
        int n = 3225;
        printNumber(n);
    }
}
// This code is contributed
// by inder_verma
<?php
// PHP program to print a number such
// that the frequency of each digit
// in the new number is equal to its
// frequency in the given number
// multiplied by the digit itself.

// Function to print such a number
function printNumber($n)
{
    // initializing a hash array
    $count = array();
    for ($i = 0; $i <= 10; $i++)
        $count[$i] = 0;

    // counting frequency of the digits
    while ($n) {
        $count[$n % 10]++;
        // integer division is needed here: the original
        // "$n /= 10" yields a float in PHP and miscounts digits
        $n = intdiv($n, 10);
    }

    // printing the new number
    for ($i = 1; $i < 10; $i++) {
        for ($j = 0; $j < $count[$i] * $i; $j++)
            echo $i;
    }
}

// Driver code
$n = 3225;
printNumber($n);

// This code is contributed
// by Akanksha Rai(Abby_akku)
?>
<script>

// JavaScript program to print a number
// such that the frequency of each digit
// in the new number is equal to its
// frequency in the given number
// multiplied by the digit itself.

// Function to print such a number
function printNumber(n)
{
    // initializing a hash array
    let count = new Uint8Array(10);

    // counting frequency of the digits
    while (n) {
        count[n % 10]++;
        n = Math.floor(n / 10);
    }

    // printing the new number
    for (let i = 1; i < 10; i++) {
        for (let j = 0; j < count[i] * i; j++)
            document.write(i);
    }
}

// Driver code
let n = 3225;
printNumber(n);

// This code is contributed by Surbhi Tyagi.

</script>
222233355555
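As a quick cross-check of the approach, the same expansion can be sketched with Python's collections.Counter (a hypothetical alternative, not one of the article's implementations; the function name print_number is illustrative):

```python
from collections import Counter

def print_number(n):
    freq = Counter(str(n))  # frequency of each digit character
    # each digit d appears freq[d] * d times, in increasing order
    return "".join(d * (freq[d] * int(d)) for d in sorted(freq))

print(print_number(3225))  # -> 222233355555
```

Sorting the Counter's keys gives the required increasing digit order without an explicit hash array.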
Setting Up Jupyter on AWS. A scriptable list of command lines to... | by Asko Seeba | Towards Data Science

In my previous post I gave the equivalent instructions for Setting Up Jupyter on Google Cloud — if you are more into Google Cloud, go and check that article.
AWS is one of The Big Three (guess, who are the others?) powerful contemporary cloud environments, providing plenty of cool services and tools for data scientists and data engineers. As a data scientist, when you run your analysis projects on your local laptop computer, you often face technical resource challenges, be it network speed for downloading large datasets, disk size limitations, CPU power limitations or memory limitations. In a contemporary cloud environment, you basically have easy solutions at hand for any of those obstacles.

You have fast network connectivity. You can always start any task with small, lean and mean resources. Whenever you face disk storage problems, you just hire a bigger disk with a couple of clicks. Or you can load the dataset into S3 and query it via Amazon Athena. Or use some other suitable storage service for cost-effective handling. Too little CPU power? No worries, change your instance type into one with more CPU for your VM (which you can scale back down immediately when you don’t need it anymore). Out of memory? A couple of clicks later you have more memory, and again, you can take it back down if it isn’t needed any more.

Any sort of cluster formation for distributed computing is also usually just some clicks away, when you just know what you do: boot the cluster up for the heavier compute job, persist the results in some storage service outside of the cluster, and shut the cluster down to not let it stand idle (and incur unnecessary cost).
If you are experienced with Jupyter Notebook and feel comfortable with Debian-like environments, then you might find the following instructions helpful.
If you haven’t yet set up the AWS Command Line Interface (CLI), please do it. Install the CLI and set up the access key for it so it can function. The rest of the instructions assume you have CLI and access key set up.
Instructions for:
1. Installing CLI
2. Setting up the access key
From this point on, everything is doable from your local computer’s command line shell interface. For me the instructions provided below work with both Git Bash on top of Windows, and with Debian Linux.
Create a key pair first. Adjust the key name, region and the output file name as you wish (the region has to match your planned Jupyter VM location, and chmod 400 is needed, otherwise the Debian ssh client refuses to use the key):
aws ec2 create-key-pair \
  --key-name my-jupyter-kp \
  --region eu-north-1 \
  --query 'KeyMaterial' \
  --output text \
  > my_jupyter_private_key.pem
chmod 400 my_jupyter_private_key.pem
3. Create and Run the VM. I am using here t3.micro as the instance type, as t3.micro is, as of writing these instructions, Free Tier eligible (feel free to switch it for anything that is appropriate for your usage needs, but beware the costs):
IMAGE_ID=`aws ec2 describe-images \
  --owners aws-marketplace \
  --filters "Name=name,Values=debian-10-amd64*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text`
AWS_MY_JUPYTER_INSTANCE_ID=`aws ec2 run-instances \
  --image-id $IMAGE_ID \
  --count 1 \
  --instance-type t3.micro \
  --key-name my-jupyter-kp \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-jupyter-vm}]' \
  --query 'Instances[0].InstanceId' \
  --placement AvailabilityZone=eu-north-1a \
  --output text`
4. Wait for the VM to become ready. It took a bit more than 2 minutes in my case, before the Status Check started to display “2/2 checks passed” on the EC2 dashboard UI for the newly created VM.
5. Note: from here on we assume that the AWS_MY_JUPYTER_INSTANCE_ID variable holds. If, for some reason, your shell terminal closes and you reopen it, you need to reinitialize that variable in order for the following commands to work. I’d advise you to take a backup note of your instance ID for that (so you can later reinitialize it with AWS_MY_JUPYTER_INSTANCE_ID=<your instance id> if needed):
# Take a backup note of the value of this variable:
echo $AWS_MY_JUPYTER_INSTANCE_ID
6. We need to set up the proper security group (firewall rules) to enable the ssh connection:
VPC_ID=`aws ec2 describe-instances \
  --instance-ids $AWS_MY_JUPYTER_INSTANCE_ID \
  --query 'Reservations[0].Instances[0].VpcId' \
  --output text`
SG_ID=`aws ec2 create-security-group \
  --group-name my-jupyter-sg \
  --description "My Jupyter security group" \
  --vpc-id $VPC_ID \
  --query 'GroupId' \
  --output text`
aws ec2 authorize-security-group-ingress \
  --group-id $SG_ID \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0
aws ec2 modify-instance-attribute \
  --instance-id $AWS_MY_JUPYTER_INSTANCE_ID \
  --groups $SG_ID
7. Test the ssh connection works (answer ‘yes’ to the ssh fingerprint questions):
DNS_NAME=`aws ec2 describe-instances \
  --instance-ids $AWS_MY_JUPYTER_INSTANCE_ID \
  --query 'Reservations[0].Instances[0].PublicDnsName' \
  --output text`
ssh -i my_jupyter_private_key.pem admin@$DNS_NAME
exit  # to get back to your local computer prompt
8. To be able to restart the VM later conveniently (we assume here that the private key file that we created above is in current working directory and stays there — adjust the alias that is constructed here appropriately if the key file location is going to be different):
echo >> ~/.bashrc
echo "AWS_MY_JUPYTER_INSTANCE_ID=$AWS_MY_JUPYTER_INSTANCE_ID" \
  >> ~/.bashrc
echo "AWS_MY_JUPYTER_KEY=\"`pwd`/my_jupyter_private_key.pem\"" \
  >> ~/.bashrc
echo "alias aws_start_my_jupyter_vm=\"aws ec2 start-instances \\
  --instance-ids \$AWS_MY_JUPYTER_INSTANCE_ID\"" \
  >> ~/.bashrc
echo "alias aws_connect_my_jupyter_vm=\"
  DNS_NAME=\\\`aws ec2 describe-instances \\
    --instance-ids \$AWS_MY_JUPYTER_INSTANCE_ID \\
    --query 'Reservations[0].Instances[0].PublicDnsName' \\
    --output text\\\`
  ssh -i \\\"\$AWS_MY_JUPYTER_KEY\\\" admin@\\\$DNS_NAME\"" \
  >> ~/.bashrc
9. Test the newly created aliases:
aws ec2 stop-instances --instance-ids $AWS_MY_JUPYTER_INSTANCE_ID
Wait until the instance is properly stopped. You can check it from the AWS EC2 dashboard — the Instance State must display “stopped”. It took less than a minute for me.
exit
Reopen your local shell command prompt (to have it reload the ~/.bashrc file). Enter the following command:
aws_start_my_jupyter_vm
You should see a message in json (depending on your AWS CLI configuration) that states the current and previous states of your VM.
Wait for the VM to start. It might take about 2 minutes — you can see it from the AWS EC2 dashboard: when the Status Checks displays “2/2 checks passed”, the instance is ready for the next command:
aws_connect_my_jupyter_vm
This will take you to the VM command prompt via ssh.
10. Your VM is now ready and you are now ready to relaunch it easily from your local computer’s command line whenever you want to continue your work.
We are going to set up JupyterLab, the newer version of the Jupyter Notebook.
11. Connect to your VM (if you dropped out of it after the previous command above):
aws_connect_my_jupyter_vm
12. Run the following commands at the VM command prompt:
sudo apt update
sudo apt upgrade             # Hit 'Y' and Enter key when asked
sudo apt install python3-pip # Hit 'Y' and Enter key when asked
sudo pip3 install --upgrade jupyterlab boto3
sudo mkdir -p /opt/my_jupyterlab/bin
sudo sh -c \
  'echo "#!/bin/bash" > /opt/my_jupyterlab/bin/run_jupyterlab'
sudo chmod a+x /opt/my_jupyterlab/bin/run_jupyterlab
IMPORTANT!!!! In the next command, --ip=127.0.0.1 is necessary for security reasons, to block any external access attempt, especially as we are turning off the password and security token here to achieve convenient usage.
sudo sh -c 'echo "jupyter lab \\
  --ip=127.0.0.1 \\
  --NotebookApp.token=\"\" \\
  --NotebookApp.password=\"\" \\
  --NotebookApp.allow_origin=\"*\"" \
  >> /opt/my_jupyterlab/bin/run_jupyterlab'
exit
13. The VM with Jupyterlab is now provisioned and configured, and you are now back at your local computer’s command line. Let’s connect the dots that make the JupyterLab easy to launch and use:
echo "alias aws_connect_my_jupyterlab=\"
  DNS_NAME=\\\`aws ec2 describe-instances \\
    --instance-ids \$AWS_MY_JUPYTER_INSTANCE_ID \\
    --query 'Reservations[0].Instances[0].PublicDnsName' \\
    --output text\\\`
  ssh -i \\\"\$AWS_MY_JUPYTER_KEY\\\" admin@\\\$DNS_NAME \\
    -L 8888:localhost:8888 \\
    -t '/opt/my_jupyterlab/bin/run_jupyterlab'\"" \
  >> ~/.bashrc
exit
14. Open your local command prompt again (to have it reload the ~/.bashrc file), to test the command line alias we just created:
aws_connect_my_jupyterlab
15. Open your browser and type in http://localhost:8888/ into your address bar.
16. Voila! You are in JupyterLab! You have the AWS Python API installed (see the pip3 install --upgrade command above). Your JupyterLab Python environment is set up and able to interact with AWS services (to the extent you have given permissions to the account you are using through the CLI access key).
17. Whenever you stop working with your notebooks, don’t forget to shut down the VM, to avoid unnecessary cloud costs. To stop the VM from your local computer command line:
aws ec2 stop-instances --instance-ids $AWS_MY_JUPYTER_INSTANCE_ID
exit
18. When you later return back to working on your notebooks, open your local command prompt and relaunch the VM with the following command:
aws_start_my_jupyter_vm
19. Wait for the VM to start. It might take about 2 minutes — you can see it from the AWS EC2 dashboard: when the Status Checks displays “2/2 checks passed”, the instance is ready for the next command:
aws_connect_my_jupyterlab
20. Open your browser and type in http://localhost:8888/ into your address bar.
If you don’t want to launch the JupyterLab, but just a shell connection to the VM, then you have another command line alias handy: aws_connect_my_jupyter_vm
The default volume size holding the VM’s root file system is 8GiB. I would recommend leaving it as it is and adding an additional EBS volume to the VM only when you need to work with bigger datasets. Having that extra space as a separate disk gives you the opportunity to drop and delete it when you have finished your project (having backed up the files you want to preserve to S3 before deleting the disk). The AWS Free Tier includes 30GB of storage, but beyond the Free Tier it can be a rather costly service at typical personal budget levels, so it doesn’t make sense to let volumes sit idle while you pay for them on a monthly basis. Also, for the same cost reason, before creating another persistent disk, think a bit about how big a disk you actually need. The exact pricing depends on multiple parameters you can adjust when creating an EBS volume, but a 200GB volume (with daily snapshots) is likely to cost more than $30/month, while maybe 25GB is already big enough for your current project at hand ($6/month)?
The above is enough to get you up and running with the Jupyter environment in AWS. But to make you think, practice some AWS tinkering and strengthen your confidence with that environment, here are some bonus exercises that will make your life even easier, and that I suggest you try to solve on your own. Feel free to post descriptions of your solutions or questions you might have as a response below, and I will attempt to comment on them for your feedback.
Make the launching of the Jupyter VM more convenient. Improve your local command line aliases in the way that they check if the VM is already running — if not, then automatically launch the VM, wait for it to be up and ready, and launch Jupyter in it. This way you’ll eliminate the separate manual step of launching the VM and manually waiting for it to be ready.
Improve your local command line aliases in the way that after stopping the Jupyter service with ctrl+c it would offer you to stop the VM as well (with default action being stopping the VM) — this way you avoid unnecessary cloud costs incurring from accidentally forgetting to shut down the VM.
Automate the deployment of the new Jupyter VM, to avoid manually copy-pasting all the commands above every time you want to create one. This way you make your life even more flexible and agile if you want to maintain separate environments for different projects. There are several ways to do it. Here are a couple of ideas (pick your choice and implement it).
Ex. 4 option 1. Script everything in the above instructions so that it would all be triggered from one command line command.
Ex. 4 option 2. Create a Jupyter VM image that you can use as a template whenever you want to launch another clean Jupyter environment independent of other projects.
How to create a ListView using JavaFX?

A list view is a scrollable list of items from which you can select desired items. You can create a list view component by instantiating the javafx.scene.control.ListView class. You can create either a vertical or a horizontal ListView.
Following the JavaFX program demonstrates the creation of a ListView.
import javafx.application.Application;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.ListView;
import javafx.scene.layout.VBox;
import javafx.scene.text.Font;
import javafx.scene.text.FontPosture;
import javafx.scene.text.FontWeight;
import javafx.stage.Stage;
public class ListViewExample extends Application {
    public void start(Stage stage) {
        //Label for education
        Label label = new Label("Educational qualification:");
        Font font = Font.font("verdana", FontWeight.BOLD, FontPosture.REGULAR, 12);
        label.setFont(font);
        //List view for educational qualification
        ObservableList<String> names = FXCollections.observableArrayList("Engineering", "MCA", "MBA", "Graduation", "MTECH", "Mphil", "Phd");
        ListView<String> listView = new ListView<String>(names);
        listView.setMaxSize(200, 160);
        //Creating the layout
        VBox layout = new VBox(10);
        layout.setPadding(new Insets(5, 5, 5, 50));
        layout.getChildren().addAll(label, listView);
        layout.setStyle("-fx-background-color: BEIGE");
        //Setting the stage
        Scene scene = new Scene(layout, 595, 200);
        stage.setTitle("List View Example");
        stage.setScene(scene);
        stage.show();
    }
    public static void main(String args[]){
        launch(args);
    }
}
How to Count Elements in C# Array? | 30 Sep, 2021
To count the number of elements in the C# array, we can use the count() method from the IEnumerable. It is included in the System.Linq.Enumerable class. The count method can be used with any type of collection such as an array, ArrayList, List, Dictionary, etc.
Syntax:
Count<TSource>()
This method returns the total number of elements present in an array.
Count<TSource>(Func<TSource, Boolean>)
This method returns the total number of elements in an array that matches the specified condition using Func delegate.
Example 1: Using String values in the array.
In the code block below, we use the Count function to count the number of elements in a C# array. First, we include System.Linq, because the Count method is located in that namespace; then we create a count variable to hold the number of elements. After that, we create an array of strings with 6 elements in it, call the Count function on the array, and store the result in the count variable we created earlier. Finally, using Console.WriteLine, we display the count of elements in the array.
C#
// C# program to illustrate the above concept
using System;
using System.Linq;

class GFG {

    static public void Main()
    {
        // Initializing count variable
        var totalCount = 0;

        // Creating an array of strings
        string[] elements = { "Rem", "Hisoka", "Gon",
                              "Monkey D Luffy", "Alvida", "Shank" };

        // Invoking count function on the above elements
        totalCount = elements.Count();

        // Displaying the count
        Console.WriteLine(totalCount);
    }
}
6
Example 2: Using integer values in the array.
C#
// C# program to illustrate the above concept
using System;
using System.Linq;

class GFG {

    static public void Main()
    {
        // Creating a count variable
        var total = 0;

        // Creating an array of numbers
        int[] nums = { 9, 6, 5, 2, 1, 5, 8, 4,
                       6, 2, 3, 4, 8, 7, 5, 6 };

        // Counting the number of elements
        total = nums.Count();

        // Displaying the count
        Console.WriteLine(total);
    }
}
16
Example 3: To count specific elements based on conditions within the array.
Here, in the below program, we are creating an array of colors, and then using the count method, we are finding the number of times the color “Blue” appears in the array. Then displaying the count.
C#
// C# program to illustrate the above concept
using System;
using System.Linq;

class GFG {

    static public void Main()
    {
        // Creating a count variable
        var total = 0;

        // Creating an array of colors
        string[] colors = { "Red", "Blue", "Black",
                            "White", "Blue", "Blue" };

        // Counting the total number of times Blue appears
        // in the array
        total = colors.Count(c => c == "Blue");

        // Displaying the count
        Console.WriteLine(total);
    }
}
Output:
3
QuickSort Tail Call Optimization (Reducing worst case space to Log n ) | 30 May, 2022
Prerequisite : Tail Call Elimination
In QuickSort, partition function is in-place, but we need extra space for recursive function calls. A simple implementation of QuickSort makes two calls to itself and in worst case requires O(n) space on function call stack.
The worst case happens when the selected pivot always divides the array such that one part has 0 elements and the other part has n-1 elements. For example, in the code below, if we choose the last element as pivot, we get the worst case for sorted arrays (See this for visualization)
C
Java
Python3
C#
Javascript
/* A Simple implementation of QuickSort that makes
   two recursive calls. */
void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
// See below link for complete running code
// http://geeksquiz.com/quick-sort/

// A Simple implementation of QuickSort that
// makes two recursive calls.
static void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        // pi is partitioning index, arr[p] is
        // now at right place
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
// This code is contributed by rutvik_56

# Python3 program for the above approach
def quickSort(arr, low, high):
    if (low < high):
        # pi is partitioning index, arr[p] is now
        # at right place
        pi = partition(arr, low, high)

        # Separately sort elements before
        # partition and after partition
        quickSort(arr, low, pi - 1)
        quickSort(arr, pi + 1, high)

# This code is contributed by sanjoy_62

// A Simple implementation of QuickSort that
// makes two recursive calls.
static void quickSort(int[] arr, int low, int high)
{
    if (low < high) {
        // pi is partitioning index, arr[p] is
        // now at right place
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
// This code is contributed by pratham76.

<script>
// A Simple implementation of QuickSort that
// makes two recursive calls.
function quickSort(arr, low, high)
{
    if (low < high) {
        // pi is partitioning index, arr[p] is
        // now at right place
        var pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
// This code is contributed by umadevi9616.
</script>
Can we reduce the auxiliary space for function call stack? We can limit the auxiliary space to O(Log n). The idea is based on tail call elimination. As seen in the previous post, we can convert the code so that it makes one recursive call. For example, in the below code, we have converted the above code to use a while loop and have reduced the number of recursive calls.
C
Java
Python3
C#
Javascript
/* QuickSort after tail call elimination using while loop */
void quickSort(int arr[], int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        low = pi + 1;
    }
}
// See below link for complete running code
// https://ide.geeksforgeeks.org/qrlM31

/* QuickSort after tail call elimination using while loop */
static void quickSort(int arr[], int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        low = pi + 1;
    }
}
// This code is contributed by gauravrajput1

# QuickSort after tail call elimination using while loop
def quickSort(arr, low, high):
    while (low < high):
        # pi is partitioning index, arr[p] is now
        # at right place
        pi = partition(arr, low, high)

        # Separately sort elements before
        # partition and after partition
        quickSort(arr, low, pi - 1)
        low = pi + 1

# This code is contributed by gauravrajput1

/* QuickSort after tail call elimination using while loop */
static void quickSort(int[] arr, int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        low = pi + 1;
    }
}
// This code contributed by gauravrajput1

<script>
/* QuickSort after tail call elimination using while loop */
function quickSort(arr, low, high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        var pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        low = pi + 1;
    }
}
// This code is contributed by gauravrajput1
</script>
Although we have reduced the number of recursive calls, the above code can still use O(n) auxiliary space in the worst case. In the worst case, it is possible that the array is divided in a way that the first part always has n-1 elements. For example, this may happen when the last element is chosen as pivot and the array is sorted in increasing order.
We can optimize the above code to make a recursive call only for the smaller part after partition. Below is implementation of this idea.
Further Optimization :
C++
C
Java
Python3
C#
Javascript
// C++ program of the above approach
#include <bits/stdc++.h>
using namespace std;

void quickSort(int arr[], int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // If left part is smaller, then recur for left
        // part and handle right part iteratively
        if (pi - low < high - pi) {
            quickSort(arr, low, pi - 1);
            low = pi + 1;
        }
        // Else recur for right part
        else {
            quickSort(arr, pi + 1, high);
            high = pi - 1;
        }
    }
}
// This code is contributed by code_hunt.

/* This QuickSort requires O(Log n) auxiliary space in
   worst case. */
void quickSort(int arr[], int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // If left part is smaller, then recur for left
        // part and handle right part iteratively
        if (pi - low < high - pi) {
            quickSort(arr, low, pi - 1);
            low = pi + 1;
        }
        // Else recur for right part
        else {
            quickSort(arr, pi + 1, high);
            high = pi - 1;
        }
    }
}
// See below link for complete running code
// https://ide.geeksforgeeks.org/LHxwPk

/* This QuickSort requires O(Log n) auxiliary space in
   worst case. */
static void quickSort(int arr[], int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // If left part is smaller, then recur for left
        // part and handle right part iteratively
        if (pi - low < high - pi) {
            quickSort(arr, low, pi - 1);
            low = pi + 1;
        }
        // Else recur for right part
        else {
            quickSort(arr, pi + 1, high);
            high = pi - 1;
        }
    }
}
// This code is contributed by gauravrajput1

''' This QuickSort requires O(Log n) auxiliary space in
    worst case. '''
def quickSort(arr, low, high):
    while (low < high):
        # pi is partitioning index, arr[p] is now at right place
        pi = partition(arr, low, high)

        # If left part is smaller, then recur for left
        # part and handle right part iteratively
        if (pi - low < high - pi):
            quickSort(arr, low, pi - 1)
            low = pi + 1

        # Else recur for right part
        else:
            quickSort(arr, pi + 1, high)
            high = pi - 1

# This code is contributed by gauravrajput1

/* This QuickSort requires O(Log n) auxiliary space in
   worst case. */
static void quickSort(int[] arr, int low, int high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        int pi = partition(arr, low, high);

        // If left part is smaller, then recur for left
        // part and handle right part iteratively
        if (pi - low < high - pi) {
            quickSort(arr, low, pi - 1);
            low = pi + 1;
        }
        // Else recur for right part
        else {
            quickSort(arr, pi + 1, high);
            high = pi - 1;
        }
    }
}
// This code is contributed by gauravrajput1

<script>
/* This QuickSort requires O(Log n) auxiliary space in
   worst case. */
function quickSort(arr, low, high)
{
    while (low < high) {
        /* pi is partitioning index, arr[p] is now
           at right place */
        var pi = partition(arr, low, high);

        // If left part is smaller, then recur for left
        // part and handle right part iteratively
        if (pi - low < high - pi) {
            quickSort(arr, low, pi - 1);
            low = pi + 1;
        }
        // Else recur for right part
        else {
            quickSort(arr, pi + 1, high);
            high = pi - 1;
        }
    }
}
// This code contributed by gauravrajput1
</script>
In the above code, if the left part becomes smaller, we make the recursive call for the left part; otherwise for the right part. In the worst case (for space), when both parts are of equal size in all recursive calls, we use O(Log n) extra space.
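The space bound can be checked empirically. The following Python sketch is my own illustration (not from the article): it uses a Lomuto partition and the same "recurse only into the smaller side" loop, and reports the deepest recursion level reached, which stays near log2(n) rather than n.

```python
import random

def partition(arr, low, high):
    # Lomuto partition: take the last element as pivot,
    # place it at its sorted position and return that index
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high, depth=1):
    # Same scheme as above: recurse only into the smaller
    # partition, loop on the larger one; return the deepest
    # recursion level that was reached
    max_depth = depth
    while low < high:
        pi = partition(arr, low, high)
        if pi - low < high - pi:
            max_depth = max(max_depth,
                            quick_sort(arr, low, pi - 1, depth + 1))
            low = pi + 1
        else:
            max_depth = max(max_depth,
                            quick_sort(arr, pi + 1, high, depth + 1))
            high = pi - 1
    return max_depth

random.seed(0)
data = list(range(1024))
random.shuffle(data)
depth = quick_sort(data, 0, len(data) - 1)
print("max recursion depth:", depth)   # stays around log2(1024) = 10
```

Because every recursive call handles strictly less than half of its parent's range, the depth for 1024 elements cannot exceed roughly log2(1024) levels, no matter how unbalanced the pivots are.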
Reference: http://www.cs.nthu.edu.tw/~wkhon/algo08-tutorials/tutorial2b.pdf
This article is contributed by Dheeraj Jain. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
SciPy – Integration | 26 Mar, 2021
Scipy is the scientific computing module of Python, providing built-in implementations of many well-known mathematical functions. The scipy.integrate sub-package provides several integration techniques, including an ordinary differential equation integrator.
Numerical integration is the approximate computation of an integral using numerical techniques. Methods for integrating a function given a function object:
quad – General Purpose Integration
dblquad – General Purpose Double Integration
nquad – General Purpose n- fold Integration
fixed_quad – Gaussian quadrature, order n
quadrature – Gaussian quadrature to tolerance
romberg – Romberg integration
trapz – Trapezoidal rule
cumtrapz – Trapezoidal rule to cumulatively compute integral
simps – Simpson’s rule
romb – Romberg integration
polyint – Analytical polynomial integration (NumPy)
(1) quad :
The function quad is provided to integrate a function of one variable between two points. The points can be +infinity or −infinity to indicate infinite limits.
Example:
Python3
from scipy.integrate import quad

def f(x):
    return 3.0*x*x + 1.0

I, err = quad(f, 0, 1)
print(I)
print(err)
Output :
2.0
2.220446049250313e-14
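To illustrate the infinite-limit support mentioned above, here is a short sketch of my own (not from the article; it assumes NumPy is installed alongside SciPy). It integrates the Gaussian exp(−x²) over the whole real line, whose exact value is √π.

```python
import numpy as np
from scipy.integrate import quad

# Gaussian integral over the whole real line;
# the exact value is sqrt(pi) = 1.7724538509...
I, err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)

print(I)
print(np.sqrt(np.pi))
```

quad handles the substitution needed for infinite endpoints internally; the returned error estimate is tiny even though the domain is unbounded.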
(2) dblquad :
This performs Double Integration with 2 arguments.
Example:
Python3
from scipy.integrate import dblquad

area = dblquad(lambda x, y: x*y, 0, 0.5,
               lambda x: 0, lambda x: 1-2*x)

print(area)
Output :
(0.010416666666666668, 4.101620128472366e-16)
(3) nquad :
Performs integration of n variables
Example:
Python3
from scipy.integrate import nquad

def f(x, y, z):
    return x*y*z

I = nquad(f, [[0, 1], [0, 5], [0, 5]])
print(I)
Output :
(78.12499999999999, 8.673617379884033e-13)
(4) fixed_quad :
With the help of scipy.integrate.fixed_quad() method, we can get the computation of a definite integral using fixed order gaussian quadrature
Example:
Python3
# import scipy.integrate
from scipy import integrate

def func(x):
    return 3*x**3

# using scipy.integrate.fixed_quad() method
# n is the order of integration
gfg = integrate.fixed_quad(func, 1.0, 2.0, n=2)

print(gfg)
Output:
(11.25, None)
(5) quadrature :
With the help of scipy.integrate.quadrature() method, we can get the computation of definite integral using fixed tolerance gaussian quadrature
Example:
Python3
# import scipy.integrate
from scipy import integrate

def f(x):
    return 3*x**3

# using scipy.integrate.quadrature() method
g = integrate.quadrature(f, 0.0, 1.0)

print(g)
Output:
(0.7500000000000001, 2.220446049250313e-16)
(6) romberg :
With the help of scipy.integrate.romberg() method, we can get the romberg integration of a callable function from limit a to b
Example:
Python3
# import numpy and scipy.integrate
import numpy as np
from scipy import integrate

f = lambda x: 3*(np.pi)*x**3

# using scipy.integrate.romberg()
g = integrate.romberg(f, 1, 2, show = True)

print(g)
Output:
Romberg integration of <function vectorize1.<locals>.vfunc at 0x0000003C1E212790> from [1, 2]
Steps StepSize Results
1 1.000000 42.411501
2 0.500000 37.110063 35.342917
4 0.250000 35.784704 35.342917 35.342917
The final result is 35.34291735288517 after 5 function evaluations.
35.34291735288517
(7) trapz :
numpy.trapz() function integrate along the given axis using the composite trapezoidal rule.
Ref : numpy.trapz() function | Python
Python3
# Python program explaining
# numpy.trapz() function

# importing numpy
import numpy as np

b = [2, 4]
a = [6, 8]

f = np.trapz(b, a)

print(f)
Output:
6.0
(8) cumtrapz:
With the help of scipy.integrate.cumtrapz() method, we can get the cumulative integrated value of y(x) using composite trapezoidal rule.
Example:
Python3
# import numpy and scipy.integrate.cumtrapz
import numpy as np
from scipy import integrate

a = np.arange(0, 5)
b = np.arange(0, 5)

# using scipy.integrate.cumtrapz() method
f = integrate.cumtrapz(b, a)

print(f)
Output:
[0.5 2. 4.5 8. ]
(9) simps:
With the help of scipy.integrate.simps() method, we can get the integration of y(x) using samples along the axis and composite simpson’s rule.
Example:
Python3
# import numpy and scipy.integrate
import numpy as np
from scipy import integrate

a = np.arange(0, 5)
b = np.arange(0, 5)

# using scipy.integrate.simps() method
f = integrate.simps(b, a)

print(f)
Output:
8.0
(10) romb:
With the help of scipy.integrate.romb() method, we can get the romberg integration using samples of a function from limit a to b
Example:
Python3
# import numpy and scipy.integrate
import numpy as np
from scipy import integrate

x = np.arange(0, 5)

# using scipy.integrate.romb() method
f = integrate.romb(x)

print(f)
Output:
8.0
(11) polyint:
numpy.polyint(p, m) : Evaluates the anti – derivative of a polynomial with the specified order.
Ref : numpy.polyint() in Python
Python3
# Python code explaining
# numpy.polyint()

# importing libraries
import numpy as np

# Constructing polynomials
p1 = np.poly1d([2, 6])
p2 = np.poly1d([4, 8])

a = np.polyint(p1, 1)
b = np.polyint(p2, 2)

print("\n\nUsing polyint")
print("p1 anti-derivative of order = 1 : \n", a)
print("p2 anti-derivative of order = 2 : \n", b)
Output :
Using polyint
p1 anti-derivative of order = 1 :
    2
 1 x + 6 x
p2 anti-derivative of order = 2 :
         3     2
 0.6667 x + 4 x
Interfaces in Java | 11 Jul, 2022
An Interface in Java programming language is defined as an abstract type used to specify the behavior of a class. An interface in Java is a blueprint of a class. A Java interface contains static constants and abstract methods.
The interface in Java is a mechanism to achieve abstraction. There can be only abstract methods in the Java interface, not the method body. It is used to achieve abstraction and multiple inheritance in Java. In other words, you can say that interfaces can have abstract methods and variables. It cannot have a method body. Java Interface also represents the IS-A relationship.
Like a class, an interface can have methods and variables, but the methods declared in an interface are by default abstract (only method signature, no body).
Interfaces specify what a class must do and not how. It is the blueprint of the class.
An Interface is about capabilities like a Player may be an interface and any class implementing Player must be able to (or must implement) move(). So it specifies a set of methods that the class has to implement.
If a class implements an interface and does not provide method bodies for all functions specified in the interface, then the class must be declared abstract.
A Java library example is Comparator Interface. If a class implements this interface, then it can be used to sort a collection.
Syntax:
interface <interface_name> {
    // declare constant fields
    // declare methods that are abstract
    // by default.
}
To declare an interface, use the interface keyword. It is used to provide total abstraction. That means all the methods in an interface are declared with an empty body and are public and all fields are public, static, and final by default. A class that implements an interface must implement all the methods declared in the interface. To implement interface use implements keyword.
It is used to achieve total abstraction.
Since java does not support multiple inheritances in the case of class, by using an interface it can achieve multiple inheritances.
It is also used to achieve loose coupling.
Interfaces are used to implement abstraction. So the question arises why use interfaces when we have abstract classes?
The reason is, abstract classes may contain non-final variables, whereas variables in the interface are final, public and static.
// A simple interface
interface Player
{
final int id = 10;
int move();
}
The major differences between a class and an interface are:
Implementation: To implement an interface we use the keyword implements
Java
// Java program to demonstrate working of
// interface

import java.io.*;

// A simple interface
interface In1 {

    // public, static and final
    final int a = 10;

    // public and abstract
    void display();
}

// A class that implements the interface.
class TestClass implements In1 {

    // Implementing the capabilities of
    // interface.
    public void display() {
        System.out.println("Geek");
    }

    // Driver Code
    public static void main(String[] args)
    {
        TestClass t = new TestClass();
        t.display();
        System.out.println(a);
    }
}
Geek
10
Real-World Example: Let’s consider the example of vehicles like bicycles, cars, and bikes: they share common functionalities. So we make an interface and put all these common functionalities in it, and let Bicycle, Bike, Car, etc. implement them in their own classes in their own way.
Java
// Java program to demonstrate the
// real-world example of Interfaces

import java.io.*;

interface Vehicle {

    // all are the abstract methods.
    void changeGear(int a);
    void speedUp(int a);
    void applyBrakes(int a);
}

class Bicycle implements Vehicle {

    int speed;
    int gear;

    // to change gear
    @Override
    public void changeGear(int newGear) {
        gear = newGear;
    }

    // to increase speed
    @Override
    public void speedUp(int increment) {
        speed = speed + increment;
    }

    // to decrease speed
    @Override
    public void applyBrakes(int decrement) {
        speed = speed - decrement;
    }

    public void printStates() {
        System.out.println("speed: " + speed
                           + " gear: " + gear);
    }
}

class Bike implements Vehicle {

    int speed;
    int gear;

    // to change gear
    @Override
    public void changeGear(int newGear) {
        gear = newGear;
    }

    // to increase speed
    @Override
    public void speedUp(int increment) {
        speed = speed + increment;
    }

    // to decrease speed
    @Override
    public void applyBrakes(int decrement) {
        speed = speed - decrement;
    }

    public void printStates() {
        System.out.println("speed: " + speed
                           + " gear: " + gear);
    }
}

class GFG {

    public static void main(String[] args) {

        // creating an instance of Bicycle
        // doing some operations
        Bicycle bicycle = new Bicycle();
        bicycle.changeGear(2);
        bicycle.speedUp(3);
        bicycle.applyBrakes(1);

        System.out.println("Bicycle present state :");
        bicycle.printStates();

        // creating instance of the bike.
        Bike bike = new Bike();
        bike.changeGear(1);
        bike.speedUp(4);
        bike.applyBrakes(3);

        System.out.println("Bike present state :");
        bike.printStates();
    }
}
Bicycle present state :
speed: 2 gear: 2
Bike present state :
speed: 1 gear: 1
The advantages of using interfaces in Java are as follows:
Without bothering about the implementation part, we can achieve the security of the implementation.
In Java, multiple inheritance is not allowed, however, you can use an interface to make use of it as you can implement more than one interface.
1. Prior to JDK 8, the interface could not define the implementation. We can now add default implementation for interface methods. This default implementation has a special use and does not affect the intention behind interfaces.
Suppose we need to add a new function in an existing interface. Obviously, the old code will not work as the classes have not implemented those new functions. So with the help of default implementation, we will give a default body for the newly added functions. Then the old codes will still work.
Java
// Java program to show that interfaces can
// have default methods from JDK 1.8 onwards

interface In1 {
    final int a = 10;

    default void display()
    {
        System.out.println("hello");
    }
}

// A class that implements the interface.
class TestClass implements In1 {

    // Driver Code
    public static void main(String[] args)
    {
        TestClass t = new TestClass();
        t.display();
    }
}
hello
2. Another feature that was added in JDK 8 is that we can now define static methods in interfaces that can be called independently without an object. Note: these methods are not inherited.
Java
// Java Program to show that interfaces can
// have static methods from JDK 1.8 onwards

interface In1 {
    final int a = 10;

    static void display()
    {
        System.out.println("hello");
    }
}

// A class that implements the interface.
class TestClass implements In1 {

    // Driver Code
    public static void main(String[] args)
    {
        In1.display();
    }
}
hello
We can’t create an instance of an interface (an interface can’t be instantiated), but we can make a reference of it that refers to an object of its implementing class.
A class can implement more than one interface.
An interface can extend another interface or interfaces (more than one interface).
A class that implements the interface must implement all the methods in the interface.
All the methods are public and abstract. And all the fields are public, static, and final.
It is used to achieve multiple inheritances.
It is used to achieve loose coupling.
From Java 9 onwards, interfaces can contain the following also:
Static methods
Private methods
Private Static methods
Access Specifier of Methods in Interfaces
Access Specifiers for Classes or Interfaces in Java
Abstract Classes in Java
Comparator Interface in Java
Java Interface Methods
Nested Interface in Java
This article is contributed by Mehak Kumar and Nitsdheerendra. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
Amortized analysis for increment in counter - GeeksforGeeks | 28 Feb, 2022
Amortized analysis refers to determining the time-averaged running time for a sequence of operations (not an individual one). It is different from average-case analysis because here we don’t assume that the data is arranged in an average (not very bad) fashion, as we do in the average-case analysis of quicksort. That is, amortized analysis is worst-case analysis, but for a sequence of operations rather than an individual one. It applies to methods that consist of a sequence of operations, where the vast majority of operations are cheap but some of them are expensive. This can be visualized with the help of a binary counter, which is implemented below.
Let’s see this by implementing an increment counter in C. First, let’s see how counter increment works. Let a variable i contain the value 0, and we perform i++ many times. Since on hardware every operation is performed in binary form, let the binary number be stored in 8 bits. So the value is 00000000. Incrementing it many times, the pattern we find is: 00000000, 00000001, 00000010, 00000011, 00000100, 00000101, 00000110, 00000111, 00001000 and so on .....
Steps :
1. Iterate from the rightmost bit and flip every 1 to 0 until the first 0 is found.
2. After the iteration, if the index is greater than or equal to zero, flip the 0 at that position to 1.
C++
Java
C#
#include <bits/stdc++.h>
using namespace std;

int main()
{
    char str[] = "10010111";
    int length = strlen(str);
    int i = length - 1;

    // flip trailing 1s to 0 (stop at the start of the string)
    while (i >= 0 && str[i] == '1') {
        str[i] = '0';
        i--;
    }

    if (i >= 0)
        str[i] = '1';

    printf("%s", str);
}
import java.util.*;

class GFG {
    public static void main(String args[])
    {
        String st = "10010111";
        char[] str = st.toCharArray();
        int length = st.length();
        int i = length - 1;

        // flip trailing 1s to 0 (stop at the start of the array)
        while (i >= 0 && str[i] == '1') {
            str[i] = '0';
            i--;
        }

        if (i >= 0)
            str[i] = '1';

        System.out.print(str);
    }
}

// This code is contributed by sanjoy_62
using System;

public class GFG {
    public static void Main(String[] args)
    {
        String st = "10010111";
        char[] str = st.ToCharArray();
        int length = st.Length;
        int i = length - 1;

        // flip trailing 1s to 0 (stop at the start of the array)
        while (i >= 0 && str[i] == '1') {
            str[i] = '0';
            i--;
        }

        if (i >= 0)
            str[i] = '1';

        Console.Write(str);
    }
}

// This code is contributed by Rajput-Ji
Output:
10011000
At a quick look at the program, its running cost appears proportional to the number of bits, but in reality it is not. Let’s see how!
Let’s assume that the increment operation is performed k times. We see that in every increment, the rightmost bit is flipped, so the number of flips for the LSB is k. The second rightmost bit is flipped after a gap, i.e., 1 time in 2 increments; the 3rd rightmost, 1 time in 4 increments; the 4th rightmost, 1 time in 8 increments. So the number of flips is k/2 for the 2nd rightmost bit, k/4 for the 3rd rightmost bit, k/8 for the 4th rightmost bit, and so on ...
Total cost will be the total number of flips, that is, C(k) = k + k/2 + k/4 + k/8 + k/16 + ...... which is a geometric series, and also, C(k) < k + k/2 + k/4 + k/8 + k/16 + k/32 + ...... up to infinity. So, C(k) < k/(1 - 1/2), and so C(k) < 2k, giving C(k)/k < 2. Hence, we find that the average cost of incrementing the counter once is constant and does not depend on the number of bits. We conclude that incrementing a counter is a constant-cost operation in the amortized sense.
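The geometric-series bound can also be checked empirically. The following Python sketch is my own illustration (not part of the original article): it simulates the increment procedure on a binary counter and counts every individual bit flip, confirming that the total stays below 2k.

```python
def flips_for_k_increments(k, bits=32):
    # Simulate the increment procedure on a `bits`-wide
    # counter and count every individual bit flip
    counter = [0] * bits
    total = 0
    for _ in range(k):
        i = bits - 1
        # Step 1: turn trailing 1s into 0s
        while i >= 0 and counter[i] == 1:
            counter[i] = 0
            total += 1
            i -= 1
        # Step 2: turn the first 0 into a 1
        if i >= 0:
            counter[i] = 1
            total += 1
    return total

for k in (10, 1000, 100000):
    cost = flips_for_k_increments(k)
    print(k, cost, cost / k)   # the ratio stays below 2
```

For k = 10 the flip counts per bit are 10 + 5 + 2 + 1 = 18, i.e. 1.8 flips per increment, already under the amortized bound of 2.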
References :
http://www.cs.cornell.edu/courses/cs3110/2013sp/supplemental/recitations/rec21.html
http://faculty.cs.tamu.edu/klappi/csce411-s17/csce411-amortized3.pdf
D3.js zoomTransform() Function - GeeksforGeeks | 07 Sep, 2020
The d3.zoomTransform() Function in D3.js is used to get the current transform for the specified node.
Syntax:
d3.zoomTransform(node)
Parameters: This function accepts a single parameter as mentioned above and described below
node: This parameter is the element that received the input event.
Return Value: This function returns the transformed zoom behaviour.
Below programs illustrate the d3.zoomTransform() function in D3.js.
Example 1:
<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <script src="https://d3js.org/d3.v4.min.js">
    </script>
</head>

<body>
    <center>
        <h1 style="color: green;">
            Geeksforgeeks
        </h1>
        <h3>D3.js | d3.zoomTransform() Function</h3>
        <svg width="400" height="250"></svg>

        <script>
            var svg = d3.select("svg"),
                width = +svg.attr("width"),
                height = +svg.attr("height");

            var radius = 30;
            var circle1 = { x: 100, y: height / 2 };
            var circle2 = { x: 300, y: height / 2 };

            var circle1 = svg.append("circle")
                .attr("cx", circle1.x)
                .attr("cy", circle1.y)
                .attr("r", radius)
                .attr("fill", "red");

            var circle2 = svg.append("circle")
                .attr("cx", circle2.x)
                .attr("cy", circle2.y)
                .attr("r", radius)
                .attr("fill", "green");

            // define zoom behaviour
            var zoom_handler = d3.zoom()
                .on("zoom", zoom_actions);

            zoom_handler(circle2);

            function zoom_actions() {
                var transform = d3.zoomTransform(this);
                /* same as
                   this.setAttribute("transform",
                       "translate(" + transform.x + ", " + transform.y +
                       ") scale(" + transform.k + ")");
                */
                this.setAttribute("transform", transform)
            }
        </script>
    </center>
</body>

</html>
Output:
Example 2:
<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <script src="https://d3js.org/d3.v4.min.js">
    </script>
</head>

<body>
    <center>
        <h1 style="color: green;">
            Geeksforgeeks
        </h1>
        <h3>D3.js | d3.zoomTransform() Function</h3>
        <canvas width="500" height="300"></canvas>

        <script>
            var canvas = d3.select("canvas"),
                context = canvas.node().getContext("2d"),
                width = canvas.property("width"),
                height = canvas.property("height"),
                radius = 2.5;

            var points = d3.range(200).map(phyllotaxis(10));

            canvas.call(d3.zoom()
                .scaleExtent([1 / 2, 4])
                .on("zoom", zoomed));

            drawPoints();

            var k = 1;

            function zoomed() {
                context.save();
                context.clearRect(0, 0, width, height);
                context.translate(d3.event.transform.x, d3.event.transform.y);
                context.scale(d3.event.transform.k, d3.event.transform.k);
                k = d3.event.transform.k;
                drawPoints();
                context.restore();
            }

            function drawPoints() {
                context.beginPath();
                points.forEach(drawPoint);
                context.fill();
            }

            function drawPoint(point) {
                context.moveTo(point[0] + radius, point[1]);
                context.arc(point[0], point[1], radius, 0, 2 * Math.PI);
            }

            function phyllotaxis(radius) {
                var theta = Math.PI * (3 - Math.sqrt(5));
                return function(i) {
                    var r = radius * Math.sqrt(i),
                        a = theta * i;
                    return [
                        width / 2 + r * Math.cos(a),
                        height / 2 + r * Math.sin(a)
                    ];
                };
            }
        </script>
    </center>
</body>

</html>
Output:
PyQt5 – How to hide app from taskbar ? - GeeksforGeeks | 26 Mar, 2020
When a PyQt5 app is created and its window opens, the app automatically appears in the taskbar, and when we close the app it is removed.
A taskbar is an element of a graphical user interface which has various purposes. It typically shows which programs are currently running. The specific design and layout of the taskbar varies between individual operating systems, but generally assumes the form of a strip located along one edge of the screen.
In this article we will see how to hide the app from the taskbar. In order to do so, we will use the setWindowFlag() method, which belongs to the QWidget class, and pass QtCore.Qt.Tool to it.
Syntax : setWindowFlag(QtCore.Qt.Tool)
Argument : It takes Window type as argument.
Action performed : It hides the app from the task bar.
Code :
# importing the required libraries
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5 import QtCore
import sys


class Window(QMainWindow):
    def __init__(self):
        super().__init__()

        # this will hide the app from task bar
        self.setWindowFlag(QtCore.Qt.Tool)

        # set the title
        self.setWindowTitle("NO task bar")

        # setting the geometry of window
        self.setGeometry(60, 60, 800, 500)

        # creating a label widget
        # by default label will display at top left corner
        self.label_1 = QLabel('no task bar', self)

        # moving position
        self.label_1.move(100, 100)

        # setting up border and background color
        self.label_1.setStyleSheet(
            "background-color: lightgreen; border: 3px solid green")

        # show all the widgets
        self.show()


# create pyqt5 app
App = QApplication(sys.argv)

# create the instance of our Window
window = Window()

# start the app
sys.exit(App.exec())
Output :
HTML | DOM innerText Property - GeeksforGeeks | 25 Jul, 2019
The DOM innerText Property is used to set or return the text content of a specified node and its descendants. This property is very similar to the textContent property, but returns the content of all elements except for <script> and <style> elements.
Syntax: It is used to set the innerText property.
node.innerText = text
Return Value: It returns a string value which represents the text content of the element along with its descendants.
Example-1:
<!DOCTYPE html>
<html>

<head>
    <title>
        HTML DOM innerText Property
    </title>
</head>

<body>
    <h1>GeeksforGeeks</h1>
    <h2>HTML DOM innerText Property</h2>

    <button id="geeks" onclick="MyGeeks()">
        Submit
    </button>

    <p id="sudo"></p>

    <script>
        function MyGeeks() {
            var text = document.getElementById("geeks").innerText;
            document.getElementById("sudo").innerHTML = text;
        }
    </script>
</body>

</html>
Output:
Before clicking on the Button:
After clicking on the Button:
Example-2:
<!DOCTYPE html>
<html>

<head>
    <title>
        HTML DOM innerText Property
    </title>
</head>

<body>
    <center>
        <h1>GeeksforGeeks</h1>
        <h2>HTML DOM innerText Property</h2>

        <p id="geeks" style="font-size:20px;">Hello Geeks!</p>

        <button onclick="MyGeeks()">
            Submit
        </button>

        <!-- MyGeeks function replaces the inner text -->
        <script>
            function MyGeeks() {
                document.getElementById("geeks").innerText
                    = "Welcome to GeeksforGeeks!";
            }
        </script>
    </center>
</body>

</html>
Output:
Before clicking on the Button:
After clicking on the Button:
Supported Browsers: The browser supported by DOM innerText property are listed below:
Google Chrome 1.0
Internet Explorer 4.0
Firefox 1.0
Opera 3.5
Safari 1.0
Area of the circumcircle of any triangles with sides given - GeeksforGeeks | 11 Nov, 2021
Given a triangle with known sides a, b and c, the task is to find the area of its circumcircle.
Examples:
Input: a = 2, b = 2, c = 3
Output: 7.17714
Input: a = 4, b = 5, c = 3
Output: 19.625
Approach: For a triangle with side lengths a, b, and c,
Radius of the circumcircle:
R = (a * b * c) / (4 * A)
where A = √(s*(s-a)*(s-b)*(s-c))
and s = (a+b+c)/2 is the semiperimeter.
Therefore, Area of the circumcircle:
π * R² = π * ((a * b * c) / (4 * A))²
Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// C++ Program to find the area of
// the circumcircle of the given triangle

#include <bits/stdc++.h>
using namespace std;

// Function to find the area
// of the circumcircle
float circlearea(float a, float b, float c)
{
    // the sides cannot be negative
    if (a < 0 || b < 0 || c < 0)
        return -1;

    // semi-perimeter of the triangle
    float p = (a + b + c) / 2;

    // area of triangle
    float At = sqrt(p * (p - a) * (p - b) * (p - c));

    // area of the circle
    float A = 3.14 * pow(((a * b * c) / (4 * At)), 2);

    return A;
}

// Driver code
int main()
{
    // Get the sides of the triangle
    float a = 4, b = 5, c = 3;

    // Find and print the area of the circumcircle
    cout << circlearea(a, b, c) << endl;

    return 0;
}
// Java Program to find the area of
// the circumcircle of the given triangle

class gfg {

    // Function to find the area
    // of the circumcircle
    public double circlearea(double a, double b, double c)
    {
        // the sides cannot be negative
        if (a < 0 || b < 0 || c < 0)
            return -1;

        // semi-perimeter of the triangle
        double p = (a + b + c) / 2;

        // area of triangle
        double At = Math.sqrt(p * (p - a) * (p - b) * (p - c));

        // area of the circle
        double A = 3.14 * Math.pow(((a * b * c) / (4 * At)), 2);

        return A;
    }
}

class geek {

    // Driver code
    public static void main(String[] args)
    {
        gfg g = new gfg();

        // Get the sides of the triangle
        double a = 4, b = 5, c = 3;

        // Find and print the area of the circumcircle
        System.out.println(g.circlearea(a, b, c));
    }
}

// This code is contributed by shk..
# Python3 Program to find the area of
# the circumcircle of the given triangle
import math

# Function to find the area
# of the circumcircle
def circlearea(a, b, c):

    # the sides cannot be negative
    if a < 0 or b < 0 or c < 0:
        return -1

    # semi-perimeter of the triangle
    p = (a + b + c) / 2

    # area of triangle
    At = math.sqrt(p * (p - a) * (p - b) * (p - c))

    # area of the circle
    A = 3.14 * pow(((a * b * c) / (4 * At)), 2)

    return A

# Driver code

# Get the sides of the triangle
a = 4
b = 5
c = 3

# Find and print the area
# of the circumcircle
print(float(circlearea(a, b, c)))

# This code is contributed
# by Shivi_Aggarwal
// C# Program to find the area of
// the circumcircle of the given triangle
using System;

class gfg {

    // Function to find the area
    // of the circumcircle
    public double circlearea(double a, double b, double c)
    {
        // the sides cannot be negative
        if (a < 0 || b < 0 || c < 0)
            return -1;

        // semi-perimeter of the triangle
        double p = (a + b + c) / 2;

        // area of triangle
        double At = Math.Sqrt(p * (p - a) * (p - b) * (p - c));

        // area of the circle
        double A = 3.14 * Math.Pow(((a * b * c) / (4 * At)), 2);

        return A;
    }
}

class geek {

    // Driver code
    public static int Main()
    {
        gfg g = new gfg();

        // Get the sides of the triangle
        double a = 4, b = 5, c = 3;

        // Find and print the area of the circumcircle
        Console.WriteLine(g.circlearea(a, b, c));

        return 0;
    }
}

// This code is contributed by SoumikMondal
<?php
// PHP Program to find the
// area of the circumcircle of
// the given triangle

// Function to find the area
// of the circumcircle
function circlearea($a, $b, $c)
{
    // the sides cannot be negative
    if ($a < 0 || $b < 0 || $c < 0)
        return -1;

    // semi-perimeter of the triangle
    $p = ($a + $b + $c) / 2;

    // area of the triangle (Heron's formula)
    $At = sqrt($p * ($p - $a) *
               ($p - $b) * ($p - $c));

    // area of the circumcircle
    $A = 3.14 * pow((($a * $b * $c) /
                     (4 * $At)), 2);
    return $A;
}

// Driver code

// Get the sides of the triangle
$a = 4;
$b = 5;
$c = 3;

// Find and print the area
// of the circumcircle
echo circlearea($a, $b, $c);

// This code is contributed
// by inder_verma
?>
<script>
// javascript Program to find the area
// of the circumcircle of the given triangle

// Function to find the area
// of the circumcircle
function circlearea(a, b, c)
{
    // the sides cannot be negative
    if (a < 0 || b < 0 || c < 0)
        return -1;

    // semi-perimeter of the triangle
    var p = (a + b + c) / 2;

    // area of the triangle (Heron's formula)
    var At = Math.sqrt(p * (p - a) * (p - b) * (p - c));

    // area of the circumcircle
    var A = 3.14 * Math.pow(((a * b * c) / (4 * At)), 2);
    return A;
}

// Driver code

// Get the sides of the triangle
var a = 4, b = 5, c = 3;

// Find and print the area of the circumcircle
document.write(circlearea(a, b, c));

// This code contributed by Princi Singh
</script>
19.625
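The printed value follows from the circumradius formula: with the triangle's area K given by Heron's formula, the circumradius is R = abc/(4K), so the programs above compute

```latex
A = \pi R^{2} = \pi \left( \frac{abc}{4K} \right)^{2},
\qquad
K = \sqrt{p(p-a)(p-b)(p-c)},
\quad
p = \frac{a+b+c}{2}
```

For a = 4, b = 5, c = 3 (a right triangle) this gives K = 6 and R = 2.5, and with the approximation pi = 3.14 used in the code, A = 3.14 * 6.25 = 19.625.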
How to use bootstrap-select for dropdown ? - GeeksforGeeks | 08 Jul, 2020
Bootstrap Select is a form control that shows a collapsable list of different values that can be selected. This can be used for displaying forms or menus to the user. This article shows the methods by which a <select> element can be styled in Bootstrap, using both custom styles and bootstrap-select.
Using the default custom styles: Bootstrap has custom styles that can be applied to some form elements. Custom select menus require only one custom class, .custom-select, to trigger the custom styles. However, custom styles are restricted to the select’s initial appearance and cannot alter the options, due to browser limitations. The example below shows how the default <select> element can be styled using .custom-select in Bootstrap.
Example:
HTML
<!DOCTYPE html>
<html lang="en">

<head>
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css">
    <script src="https://kit.fontawesome.com/577845f6a5.js" crossorigin="anonymous">
    </script>

    <!-- Optional JavaScript -->
    <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js">
    </script>
    <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js">
    </script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js">
    </script>
</head>

<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>

    <!-- Use the custom-select class -->
    <select class="custom-select" style="width:150px;">
        <option>Pizzas</option>
        <option>Burger</option>
        <option>Ice Cream</option>
        <option>Fried Potatoes</option>
    </select>
</body>

</html>
Output:
There are only some style properties that can be applied to the <option> component. This is because this sort of component is a case of a “replaced component”. They are OS-dependent and are not the portion of the HTML/browser. It cannot be styled through CSS. Except for background-color and color, the style settings applied through the style object for the <option> component are disregarded.
The select option is styled by the Operating System, not by HTML. Hence the CSS style does not have any impact.
option {
background-color: color;
border-radius: value;
font-size: value
}
In general, the above values will work. However, we can not customize the padding, margin, and other properties.
Bootstrap-select: To solve the above problems, bootstrap-select can be used to style <option> and <select> tags.
Note: By default, bootstrap-select naturally recognizes the version of Bootstrap that is being used. However, there are a few occurrences where the version detection may not work.
The below example shows how bootstrap-select can be included in the page and initialized. The selectpicker class is used in the select components to auto-initialize bootstrap-select as explained in the example:
Example:
HTML
<!DOCTYPE html>
<html>

<head>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js">
    </script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js">
    </script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js">
    </script>

    <!-- CDN link used below is compatible with this example -->
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.13.1/css/bootstrap-select.min.css">
    <script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.13.1/js/bootstrap-select.min.js">
    </script>
</head>

<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>

    <select class="selectpicker" data-style="btn-success">
        <option>Pizzas</option>
        <option>Burger</option>
        <option>Ice Cream</option>
        <option>Fried Potatoes</option>
    </select>
</body>

</html>
Output:
Below are some attributes that can be used to style the <select> tag:
data-live-search: It allows us to add a search input.
data-tokens: It allows us to add keywords to options to improve their search ability.
data-max-options: It allows us to specify the limit the number of options that can be selected. It also works for option groups.
title: This attribute allows us to set the default placeholder text when nothing is selected.
data-style: This attribute helps us to style the button classes.
show-tick: This attribute helps us to show the checkmark icon on standard select boxes.
data-width: This attribute helps us to set the width of the select.
Below are some attributes that can be used to style the <option> tag:
data-icon: It is used to add an icon to an <option> or <optgroup>.
data-content: It is used to insert custom HTML into the <option>.
data-subtext: It is used to add a subtext to an <option> or <optgroup> element.
Example:
HTML
<!DOCTYPE html>
<html>

<head>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js">
    </script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js">
    </script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js">
    </script>

    <!-- CDN link used below is compatible with this example -->
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.13.1/css/bootstrap-select.min.css">
    <script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.13.1/js/bootstrap-select.min.js">
    </script>
</head>

<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>

    <!-- Using the attributes to style
         the <select> and <option> tag -->
    <select class="selectpicker">
        <option data-content="<span class='badge badge-danger'>Pizzas</span>">
            Pizzas
        </option>
        <option data-content="<span class='badge badge-success'>Burger</span>">
            Burger
        </option>
        <option data-content="<span class='badge badge-primary'>Ice Cream</span>">
            Ice Cream
        </option>
        <option data-content="<span class='badge badge-warning'>Fried Potatoes</span>">
            Fried Potatoes
        </option>
    </select>
</body>

</html>
Output:
C Program for Selection Sort? | Selection sort is a sorting algorithm that works by finding the smallest number in the array and placing it at the first position. The next traversal of the array then starts from the index next to the position where the smallest number was placed.
Let's take an example to make this concept more clear.
We have an array {6, 3, 8, 12, 9}. In this array the smallest element is 3, so we place 3 at the first position, after which the array looks like {3, 6, 8, 12, 9}. Now we again find the smallest number, but this time we do not consider 3 in our search because it is already in its place. The next smallest element is 6, which goes to the second position, and the search repeats on the rest of the array until the whole array is sorted.
Working of the selection sort algorithm −
The following steps are followed by the selection sort algorithm.
Let's take an array {20, 12, 23, 55, 21}
Set the first element of the array as minimum.
Minimum = 20
Compare the minimum with the next element, if it is smaller than minimum assign this element as minimum. Do this till the end of the array.
Comparing with 12 : 20 > 12 , minimum = 12
Comparing with 23 : 12 < 23 , minimum = 12
Comparing with 55 : 12 < 55 , minimum = 12
Comparing with 21 : 12 < 21 , minimum = 12
Place the minimum at the first position (index 0) of the array.
Array = {12, 20 ,23, 55, 21}
For the next iteration, start sorting from the first unsorted element, i.e. the element next to where the minimum is placed.
Array = {12, 20 ,23, 55, 21}
Searching starts from 20, next element where minimum is placed.
Iteration 2 :
Minimum = 20
Comparing with 23 : 20 < 23 , minimum = 20
Comparing with 55 : 20 < 55 , minimum = 20
Comparing with 21 : 20 < 21 , minimum = 20
Minimum already in place, no change.
Array = {12, 20 ,23, 55, 21}
Iteration 3 :
Minimum = 23.
Comparing with 55 : 23 < 55 , minimum = 23
Comparing with 21 : 23 > 21 , minimum = 21
Minimum is moved to index = 2
Array = {12, 20, 21, 55, 23}
Iteration 4 :
Minimum = 55
Comparing with 23 : 55 > 23 , minimum = 23
Minimum is moved to index 3
Array = {12, 20, 21, 23, 55}
#include <stdio.h>
int main() {
int arr[10]={6,12,0,18,11,99,55,45,34,2};
int n=10;
int i, j, position, swap;
for (i = 0; i < (n - 1); i++) {
position = i;
for (j = i + 1; j < n; j++) {
if (arr[position] > arr[j])
position = j;
}
if (position != i) {
swap = arr[i];
arr[i] = arr[position];
arr[position] = swap;
}
}
for (i = 0; i < n; i++)
printf("%d\t", arr[i]);
return 0;
}
0 2 6 11 12 18 34 45 55 99
Python | sympy.cot() method - GeeksforGeeks | 17 Jul, 2019
With the help of sympy.cot() method, we are able to find the value of cot theta using sympy.cot() function.
Syntax : sympy.cot()Return : Return value of cot theta.
Example #1 :In this example we can see that by using sympy.cot() method, we can find the value of cot theta.
# import sympy
from sympy import *

# Use sympy.cot() method
gfg = simplify(cot(pi / 3))

print(gfg)
Output :
sqrt(3)/3
Example #2 :
# import sympy
from sympy import *

# Use sympy.cot() method
gfg = simplify(cot(pi / 4))

print(gfg)
Output :
1
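Beyond numeric angles, cot() also works on symbolic expressions. The examples below are my own additions (not from the article) and assume only SymPy itself:

```python
from sympy import cot, pi, sqrt, Symbol, diff, simplify

x = Symbol('x')

# cot of a symbolic angle stays symbolic and can be differentiated;
# SymPy returns d/dx cot(x) = -cot(x)**2 - 1
derivative = diff(cot(x), x)

# standard angles are evaluated exactly: cot(pi/6) = sqrt(3)
val = cot(pi / 6)
```

This is handy when cot() appears inside a larger expression that you want to simplify before substituting numbers.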
Python for finance: an implementation of the Modern Portfolio Theory | by Riccardo Poli | Towards Data Science | The Modern Portfolio Theory
The Modern portfolio theory (MPT) is a financial theory that describes, in mathematical terms, concepts such as diversification and risk management. The MPT offers the investor a toolset for building a diversified portfolio, whose return is maximised for a given level of risk. The risk is commonly measured with the standard deviation.
The theory was introduced by the economist Harry Markowitz, that was awarded the Nobel Prize in Economics in 1990 for this research work. The mathematical formulation of the theory involves concepts such as variance and covariance. If you are interested in the details, please visit the dedicated Wikipedia page.
The Efficient Frontier
The concept of the Efficient Frontier can be formally defined as the set of portfolios that have the highest return at any given risk and it is closely coupled with the MPT. This frontier usually has a “C” shape, as in the figure below, and three points can be distinguished:
The portfolio with minimum risk.
The portfolio with maximum return.
The portfolio with maximum Sharpe ratio, which is typically the investors’ preferred choice, as it is thought to be a good compromise between the level of risk and the expected return.
Python implementation of the MPT
The Python notebook presented below enables the user to easily explore the design space of investment opportunities, by simulating a customised number of portfolios, and creates an interactive dashboard that can be used to choose the desired trading strategy.
The full code can be found at the link below, on my GitHub page, together with other scripts of the series “python for finance”.
github.com
The implementation is made in Python 3 and the download of the data is made through the FINNHUB Application Programming Interface (API). The user will need to register (registration is free) to obtain a personal API key.
The script consists of 3 sections: input section, main body and plotting section. The user has to specify the following inputs:
The list of symbols of stocks or the exchange traded funds (ETFs) of interest (e.g. “TSLA” for Tesla, Inc.; “VOO” for the Vanguard 500 Index Fund ETF, etc.), including a short description of each stocks or ETFs.
The number of portfolios she/he wants to simulate.
The range of dates she/he wants to use to perform the back-testing.
The API key, which can be easily obtained for free here.
## DEFINE INPUTS

ticks = ["EMB", "AGG", "VGT", "TSLA", "AMZN"]
line_name = ["Emerging countries debt", "US debt", "S&P 500 Vanguard ETF", "Tesla", "Amazon"]

num_port = 1000

start_date = '01/06/2015'
end_date = '27/06/2020'

api_key = "YOUR API KEY"
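The full implementation lives in the linked notebook; as a hedged illustration of what the simulation stage typically does (random weights, annualized return and volatility, Sharpe ratio), here is a minimal self-contained sketch using synthetic returns. The variable names and numbers are mine, not the notebook's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns for 5 assets (a stand-in for downloaded price data)
daily = rng.normal(0.0005, 0.01, size=(1250, 5))
mu = daily.mean(axis=0) * 252                 # annualized mean returns
cov = np.cov(daily, rowvar=False) * 252       # annualized covariance

num_port = 1000
results = []
for _ in range(num_port):
    w = rng.random(5)
    w /= w.sum()                              # long-only weights summing to 1
    ret = float(w @ mu)                       # expected portfolio return
    vol = float(np.sqrt(w @ cov @ w))         # portfolio risk (std. dev.)
    results.append((ret, vol, ret / vol))     # Sharpe ratio (risk-free = 0)

# The investor's usual pick: the maximum-Sharpe portfolio
best_ret, best_vol, best_sharpe = max(results, key=lambda r: r[2])
```

Plotting vol against ret for all simulated portfolios traces out the "C"-shaped cloud whose upper edge is the efficient frontier described above.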
You can find further comments on the code and additional details in the notebook on my GitHub page. There, you will also find other tools of the “Python for finance” series.
Enjoy, and happy trading!
Riccardo Poli https://www.linkedin.com/in/riccardopoli/
Disclaimer: investing in the stock market involves risk and can lead to monetary loss. The content of this article is not to be taken as financial advice.
C# | Verbatim String Literal - @ - GeeksforGeeks | 16 Oct, 2019
In C#, a verbatim string is created using a special symbol @. @ is known as a verbatim identifier. If a string contains @ as a prefix followed by double quotes, then the compiler identifies that string as a verbatim string and compiles it. The main advantage of the @ symbol is to tell the string constructor to ignore escape characters and line breaks. There are mainly three uses of the @ symbol, which are as follows:
Use 1: Keyword as an Identifier

This symbol allows using a keyword as an identifier. The @ symbol prefixes the keyword, so the compiler takes the keyword as an identifier without any error, as shown in the below example:
Example:
// C# program to illustrate
// the use of @ by using keyword
// as an identifier
using System;

public class GFG {

    // Main method
    static public void Main()
    {
        // Creating and initializing the array
        // here 'for' keyword is used as
        // an identifier by using @ symbol
        string[] @for = {"C#", "PHP", "Java", "Python"};

        // 'as' keyword is also used as
        // an identifier using @ symbol
        foreach (string @as in @for)
        {
            Console.WriteLine("Element of Array: {0}", @as);
        }
    }
}
Element of Array: C#
Element of Array: PHP
Element of Array: Java
Element of Array: Python
Use 2: For printing the escape sequences in string literals and also using the line breaks etc. in a string literal without any escape sequence.
If one will put the escape sequence like “\\” (for backslash), “\u” (Unicode escape sequence), “\x” (hexadecimal escape sequence) etc. in a string literal without using @ symbol then these sequences will be interpreted by compiler automatically. But “” (double quotes) are not interpreted literally. Its like a string interpolation. Let’s see different cases with and without @ symbol.
Case 1:

// taking a string literal and
// try to print double quotes
string str1 = """";
// printing output
// this will give compile
// time error as Unexpected
// symbol `'
Console.WriteLine(str1);
In the above program, the double quotes inside double quotes as a string literal are interpreted as a single quotation mark.
Case 2:

// taking a string literal prefixed
// with @ and try to print double quotes
string str1 = @"""";
// printing output
// this will output as "
Console.WriteLine(str1);
In the above program, the output is double quote(“) not “”
Case 3:

// taking a string in which we are storing
// some location of file but \Testing will
// interpreted as escape sequence \T
// similarly \N
string str1 = "\\C:\Testing\New\Target";
// printing str1
// this will give compile time error as
// Unrecognized escape sequence `\T'
// Unrecognized escape sequence `\N'
// Unrecognized escape sequence `\T'
Console.WriteLine(str1);
Case 4:

// taking a string and prefix literal with @ symbol.
// Storing some location of file
string str1 = @"\\C:\Testing\New\Target";
// printing str1 will give output as
// \\C:\Testing\New\Target
Console.WriteLine(str1);
Program:
// C# program to illustrate
// the use of @ in terms of
// escape sequences and new
// line and tab
using System;

public class GFG {

    // Main method
    static public void Main()
    {
        // If you use the below commented
        // part then this will give an
        // Unrecognized escape sequence error
        // string S1 = "\\welcome \to GeeksforGeeks \ portal \";
        // Console.WriteLine("String 1 is :{0}", S1);

        // By using @ in the given string
        // it runs smoothly because the
        // @ symbol tells the compiler to
        // ignore all escape sequences
        string S2 = @"\\welcome \to GeeksforGeeks \ portal \";
        Console.WriteLine("String 2 is: {0}", S2);

        // printing a new line character in a string
        // literal makes the string break
        // into a new line, see output
        string S3 = "This is \n C# non verbatim string";
        Console.WriteLine("String 3 is :{0}", S3);

        // By using the @ symbol, \n is not processed
        string S4 = @"This is \n C# verbatim string";
        Console.WriteLine("String 4 is :{0}", S4);

        // printing a string literal containing
        // tabs and new lines without using
        // any escape sequence
        Console.WriteLine(@"Without Tab Sequence and New Line Character
    C    C++    Java    Python");
    }
}
String 2 is: \\welcome \to GeeksforGeeks \ portal \
String 3 is :This is
C# non verbatim string
String 4 is :This is \n C# verbatim string
Without Tab Sequence and New Line Character
C C++ Java Python
JavaScript Map forEach() Method - GeeksforGeeks | 21 Jan, 2022
Below is the example of Map.forEach() Method
Javascript
<script>
    // Creating a map using Map object
    let mp = new Map()

    // Adding values to the map
    mp.set("a", 1);
    mp.set("b", 2);
    mp.set("c", 3);

    // Writing each value and key to the page
    mp.forEach((values, keys) => {
        document.write(values, keys + "<br>")
    })
</script>
Output:
1a
2b
3c
The Map.forEach method is used to loop over the map with the given function and executes the given function over each key-value pair.
Syntax:
myMap.forEach(callback(value, key, map), thisArg)

Parameters: This method accepts a callback function and an optional thisArg, as described below:

callback: This is the function that executes for each key-value pair.
value: This is the value of the current entry.
key: This is the key of the current entry.
map: This is the map being traversed.
thisArg: This is the value to use as this when executing callback.
Returns: It returns the undefined value.
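As a further sketch (mine, not from the article), the optional thisArg can be used to accumulate state in an external object; note that it only binds this for regular functions, not arrow functions:

```javascript
// Using thisArg with Map.forEach to sum values into an external object
const totals = { sum: 0 };
const prices = new Map([["pen", 10], ["book", 25], ["bag", 40]]);

prices.forEach(function (value, key, map) {
    // inside the callback, `this` is the `totals` object passed as thisArg
    this.sum += value;
}, totals);

// totals.sum is now 75
```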
Code for the above method is provided below:

Program 1:
HTML
<script>
    // Creating a map using Map object
    let mp = new Map()

    // Adding values to the map
    mp.set("a", 65);
    mp.set("b", 66);
    mp.set("c", 67);
    mp.set("d", 68);
    mp.set("e", 69);
    mp.set("f", 70);

    // Logging map object
    document.write(mp + "<br>");

    mp.forEach((values, keys) => {
        document.write("values: ", values + ", keys: ", keys + "<br>")
    })
</script>
Output:
[object Map]
values: 65, keys: a
values: 66, keys: b
values: 67, keys: c
values: 68, keys: d
values: 69, keys: e
values: 70, keys: f
Program 2:
HTML
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>

<body>
    <ul class="list">
    </ul>

    <script>
        // Creating a map using Map object
        let mp = new Map()

        // Adding values to the map
        mp.set("a", 65);
        mp.set("b", 66);
        mp.set("c", 67);
        mp.set("d", 68);
        mp.set("e", 69);
        mp.set("f", 70);

        // Logging map object to console
        console.log(mp);

        // Append one list item per key-value pair
        let ul = document.querySelector("ul");
        mp.forEach((values, keys) => {
            ul.innerHTML += "<li>" + keys + " => " + values + "</li>"
        })
    </script>
</body>

</html>
Output:
Browsers Supported:
Google Chrome 38 and above
Microsoft Edge 12 and above
Mozilla Firefox 25 and above
Internet Explorer 11 and above
Safari 8 and above
Opera 25 and above
GPS trajectory clustering with Python | by Steve Cao | Towards Data Science | The rapid growth of mobile devices has led to vast amounts of GPS trajectories collected by location-based services, geo-social networks, transport, or ride-sharing apps.
GPS trajectory clustering is being increasingly used in many applications. For example, it can help to identify the frequent routes or trips. The trajectory similarity can be used to find whether the trajectory follows a certain route. It can then be used for tracking transport services, e.g. public buses in a city.
In this article, I will provide a gentle introduction to fast GPS trajectory clustering. Python open source libraries are used in this solution. Typically, there may be several issues encountered during clustering trajectories.
GPS trajectories often contain many data points. Clustering can take a long time.
Suitable similarity measures are needed to compare different trajectories.
Clustering approach and configuration
Visualizing trajectories and their clusters
There are a number of interesting articles on this topic [1, 2]. However, not all the above issues are addressed. In this article, I will illustrate a Python solution covering all these areas. The data used here is originally from the Vehicle Energy Dataset [3]. A process of clustering all GPS points was first applied to the original data. I then selected the trajectories for several start-end groups. For simplicity, the data provided in this article are for these groups of trajectories, as shown below:
The codes for this article and a tutorial can be found in my GitHub here.
The Ramer-Douglas-Peucker (RDP) algorithm is one of the most popular methods to reduce the number of points in a polyline. The objective of this algorithm is to find a subset of points to represent the whole polyline.
The parameter epsilon is set as 10 (meter) here. After RDP, the number of data points in the trajectories is reduced significantly, compared with the original data points.
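As a hedged, from-scratch illustration of what RDP does (this is my own minimal version, not the implementation used in the notebook), with epsilon in the same units as the coordinates:

```python
import math

def perp_dist(pt, a, b):
    # Perpendicular distance from pt to the infinite line through a and b
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def rdp(points, epsilon):
    # Ramer-Douglas-Peucker: keep the farthest point if it deviates more
    # than epsilon from the chord between the endpoints, recursing on
    # both halves; otherwise keep only the endpoints
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > epsilon:
        return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
    return [points[0], points[-1]]

# Toy trajectory in projected (metric) coordinates, epsilon = 10 meters
track = [(0, 0), (3, 1), (50, 2), (100, 0), (100, 100)]
simplified = rdp(track, epsilon=10)
```

The small wiggles near the first leg are dropped, while the sharp turn at (100, 0) survives because it deviates far more than epsilon.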
traj    # data points (original)    # data points (after RDP)
0       266                         21
1       276                         36
2       239                         34
...
This contributes to a drastic reduction in the computation time for the distance matrix.
distance matrix without RDP: 61.0960 seconds
distance matrix with RDP: 0.4899 seconds
There are various methods to measure the similarity of two trajectories, e.g. Fréchet distance, Area method, as visualized in the following figure. They can be calculated using similaritymeasures package.
For the examples in this article, the Fréchet distance is used to compute the distance matrix as below. The Area similarity measure is also supported.
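For intuition, the discrete Fréchet distance can be written in a few lines; this is my own minimal sketch, not the similaritymeasures implementation the article relies on:

```python
import math
from functools import lru_cache

def frechet(p, q):
    # Discrete Fréchet distance: the length of the shortest "leash"
    # that lets two walkers traverse p and q without backtracking
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)

# Two short trajectories; the widest unavoidable gap drives the distance
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 2), (2, 1)]
```

Unlike a simple pointwise average, this measure is sensitive to the ordering of the points, which is what makes it suitable for comparing routes.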
Scikit-learn provides several clustering methods. In this article, I present the DBSCAN with the pre-computed matrix. Parameters are set as below:
eps: 1000 (m) for Fréchet distance, 300000 (m2) for Area measure.
min_samples:1, it is to make sure all trajectories will be clustered into a cluster.
The codes are as below.
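As a hedged stand-in, here is a minimal sketch of DBSCAN on a precomputed distance matrix, using a toy 4x4 matrix of pairwise trajectory distances in place of the real Fréchet matrix:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy pairwise distances for 4 trajectories: the first two are close to
# each other, and so are the last two (values are illustrative, in meters)
dist_matrix = np.array([
    [   0.,  200., 5000., 5000.],
    [ 200.,    0., 5000., 5000.],
    [5000., 5000.,    0.,  300.],
    [5000., 5000.,  300.,    0.],
])

# eps=1000 as in the article (Fréchet, meters); min_samples=1 guarantees
# that every trajectory is assigned to some cluster (no noise points)
labels = DBSCAN(eps=1000, min_samples=1,
                metric="precomputed").fit_predict(dist_matrix)
```

With these toy distances the first two trajectories land in one cluster and the last two in another.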
Matplotlib is a comprehensive library for visualizations in Python. It provides flexible solutions for both static and animated visualization.
The clustering results for the six groups are visualized using subplots in Matplotlib. As shown below, most trajectory groups have several clusters. Only the last two groups have one cluster, for the reason that the similarity of these trajectories is high (the Fréchet distances are less than eps, i.e.1000).
As many GPS-enabled devices are used today, GPS trajectory clustering is being increasingly popular. This article walked you through the common problems in this field. The Python tools are then explored to solve these problems. For further readings, you can check these articles on polyline simplification, distance matrix calculation, and web-based spatial visualizations.
[1] João Paulo Figueira, Clustering Moving Object Trajectories
[2] William Yee, A Gentle Introduction to IoT/GPS Trajectory Clustering and Geospatial Clustering
[3] Vehicle Energy Dataset (VED), A Large-scale Dataset for Vehicle Energy Consumption Research
Convert.ChangeType Method in C# | The ChangeType() method returns an object of the specified type whose value is equivalent to that of the specified object.
Let’s say we have a double type.
double val = -3.456;
Now, use the ChangeType method to change the type to integer.
int num = (int)Convert.ChangeType(val, TypeCode.Int32);
Let us see the complete example.
using System;
public class Demo {
public static void Main() {
double val = -3.456;
int num = (int)Convert.ChangeType(val, TypeCode.Int32);
Console.WriteLine("{0} converted to an Int32: {1}", val, num);
}
}
-3.456 converted to an Int32: -3
Python List remove() Method | Python list method remove() searches for the given element in the list and removes the first matching element.
Following is the syntax for remove() method −
list.remove(obj)
obj − This is the object to be removed from the list.
This Python list method does not return any value but removes the given object from the list.
The following example shows the usage of remove() method.
#!/usr/bin/python3
aList = [123, 'xyz', 'zara', 'abc', 'xyz']
aList.remove('xyz')
print("List :", aList)
aList.remove('abc')
print("List :", aList)
When we run the above program, it produces the following result −
List : [123, 'zara', 'abc', 'xyz']
List : [123, 'zara', 'xyz']
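One behavior worth knowing, not shown in the example above: remove() deletes only the first match, and it raises a ValueError when the element is absent, so guard the call if the element may not be present.

```python
colors = ['red', 'green', 'red']
colors.remove('red')      # only the first 'red' is removed
print(colors)             # ['green', 'red']

# remove() raises ValueError for a missing element
try:
    colors.remove('blue')
except ValueError:
    print("'blue' is not in the list")
```

If you need to remove every occurrence rather than just the first, a list comprehension such as [c for c in colors if c != 'red'] is the usual idiom.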
Connecting two points on a 3D scatter plot in Python and Matplotlib | To connect two points on a 3D scatter plot, we can take the following steps
Set the figure size and adjust the padding between and around the subplots.
Create a new figure or activate an existing figure using figure() method.
Add an axes to the current figure as a subplot arrangement.
Create lists for x, y and z.
Plot x, y and z data points using scatter() method.
To connect the points, use plot() method with x, y and z data points with black color line.
To display the figure, use show() method.
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
x, y, z = [1, 1.5], [1, 2.4], [3.4, 1.4]
ax.scatter(x, y, z, c='red', s=100)
ax.plot(x, y, z, color='black')
plt.show()
Python program to list the difference between two lists. | In this problem, we are given two lists. Our task is to display the difference between them. Python provides the set() method, which we use here. A set is an unordered collection with no duplicate elements. Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.
Input::A = [10, 15, 20, 25, 30, 35, 40]
B = [25, 40, 35]
Output:
[10, 20, 30, 15]
difference list = A - B
Step 1: Input of two arrays.
Step 2: convert the lists into sets explicitly.
Step 3: simply reduce one from the other using the subtract operator.
# Python code to get the difference of two lists
# Using set()
def Diff(A, B):
    print("Difference of two lists ::>")
    return list(set(A) - set(B))

# Driver Code
A = list()
n1 = int(input("Enter the size of the first List ::"))
print("Enter the Element of first List ::")
for i in range(n1):
    k = int(input(""))
    A.append(k)
B = list()
n2 = int(input("Enter the size of the second List ::"))
print("Enter the Element of second List ::")
for i in range(n2):
    k = int(input(""))
    B.append(k)
print(Diff(A, B))
Enter the size of the first List ::5
Enter the Element of first List ::
11
22
33
44
55
Enter the size of the second List ::4
Enter the Element of second List ::
11
55
44
99
Difference of two lists ::>
[33, 22]
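The same pattern extends to the other set operations mentioned at the start of this article. A quick sketch using the sample lists from the problem statement; sorted() is added only to make the unordered set results readable.

```python
A = [10, 15, 20, 25, 30, 35, 40]
B = [25, 40, 35]

print(sorted(set(A) - set(B)))  # difference:           [10, 15, 20, 30]
print(sorted(set(A) & set(B)))  # intersection:         [25, 35, 40]
print(sorted(set(A) | set(B)))  # union:                [10, 15, 20, 25, 30, 35, 40]
print(sorted(set(A) ^ set(B)))  # symmetric difference: [10, 15, 20, 30]
```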
Univariate Linear Regression-Theory and Practice | by Gokul S Kumar | Towards Data Science | Introduction: This article explains the math and execution of univariate linear regression. This will include the math behind cost function, gradient descent, and the convergence of cost function.
Machine Learning is majorly divided into 3 types
Supervised Machine Learning
Unsupervised Machine Learning
Reinforcement learning
In supervised machine learning, a set of training examples with the expected output are used to train the model. The model then after training on these examples, tries to predict the output values of another set of examples. There are two types of supervised machine learning:
Regression- predicts continuous value output. All the types of regression come under this section.
Classification- predict discrete-valued outputs. SVM, KNN, and random forests come under this section.
There are several types of regression algorithms such as Linear regression, Logistic regression, Polynomial regression, Stepwise regression, Ridge regression, Lasso regression, Elastic-net regression.
Let’s consider the case of Linear regression with one variable. We will be dealing with math and executions side-by-side. The link to the dataset used is given below:
https://www.kaggle.com/c/home-data-for-ml-course/data
It is the dataset having the info about the sale values of different houses in Boston and its dependency on other features.
Let’s import the required libraries:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
And load the dataset into a Pandas dataframe
read_df = pd.read_csv('train.csv')
df = read_df.copy()
df.head()
df.info()
The detailed description of the features is provided along with the data-set. Brief info can be obtained:
<class 'pandas.core.frame.DataFrame'>RangeIndex: 1460 entries, 0 to 1459Data columns (total 81 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Id 1460 non-null int64 1 MSSubClass 1460 non-null int64 2 MSZoning 1460 non-null object 3 LotFrontage 1201 non-null float64 4 LotArea 1460 non-null int64 5 Street 1460 non-null object 6 Alley 91 non-null object 7 LotShape 1460 non-null object 8 LandContour 1460 non-null object 9 Utilities 1460 non-null object 10 LotConfig 1460 non-null object 11 LandSlope 1460 non-null object 12 Neighborhood 1460 non-null object 13 Condition1 1460 non-null object 14 Condition2 1460 non-null object 15 BldgType 1460 non-null object 16 HouseStyle 1460 non-null object 17 OverallQual 1460 non-null int64 18 OverallCond 1460 non-null int64 19 YearBuilt 1460 non-null int64 20 YearRemodAdd 1460 non-null int64 21 RoofStyle 1460 non-null object 22 RoofMatl 1460 non-null object 23 Exterior1st 1460 non-null object 24 Exterior2nd 1460 non-null object 25 MasVnrType 1452 non-null object 26 MasVnrArea 1452 non-null float64 27 ExterQual 1460 non-null object 28 ExterCond 1460 non-null object 29 Foundation 1460 non-null object 30 BsmtQual 1423 non-null object 31 BsmtCond 1423 non-null object 32 BsmtExposure 1422 non-null object 33 BsmtFinType1 1423 non-null object 34 BsmtFinSF1 1460 non-null int64 35 BsmtFinType2 1422 non-null object 36 BsmtFinSF2 1460 non-null int64 37 BsmtUnfSF 1460 non-null int64 38 TotalBsmtSF 1460 non-null int64 39 Heating 1460 non-null object 40 HeatingQC 1460 non-null object 41 CentralAir 1460 non-null object 42 Electrical 1459 non-null object 43 1stFlrSF 1460 non-null int64 44 2ndFlrSF 1460 non-null int64 45 LowQualFinSF 1460 non-null int64 46 GrLivArea 1460 non-null int64 47 BsmtFullBath 1460 non-null int64 48 BsmtHalfBath 1460 non-null int64 49 FullBath 1460 non-null int64 50 HalfBath 1460 non-null int64 51 BedroomAbvGr 1460 non-null int64 52 KitchenAbvGr 1460 non-null int64 53 KitchenQual 1460 non-null 
object 54 TotRmsAbvGrd 1460 non-null int64 55 Functional 1460 non-null object 56 Fireplaces 1460 non-null int64 57 FireplaceQu 770 non-null object 58 GarageType 1379 non-null object 59 GarageYrBlt 1379 non-null float64 60 GarageFinish 1379 non-null object 61 GarageCars 1460 non-null int64 62 GarageArea 1460 non-null int64 63 GarageQual 1379 non-null object 64 GarageCond 1379 non-null object 65 PavedDrive 1460 non-null object 66 WoodDeckSF 1460 non-null int64 67 OpenPorchSF 1460 non-null int64 68 EnclosedPorch 1460 non-null int64 69 3SsnPorch 1460 non-null int64 70 ScreenPorch 1460 non-null int64 71 PoolArea 1460 non-null int64 72 PoolQC 7 non-null object 73 Fence 281 non-null object 74 MiscFeature 54 non-null object 75 MiscVal 1460 non-null int64 76 MoSold 1460 non-null int64 77 YrSold 1460 non-null int64 78 SaleType 1460 non-null object 79 SaleCondition 1460 non-null object 80 SalePrice 1460 non-null int64 dtypes: float64(3), int64(35), object(43)memory usage: 924.0+ KB
There are a total of 81 features including ID. The total no. of entries is 1460. We won't be doing any data cleaning as of now, as the purpose is to get to know about Linear regression alone. Let us consider the feature LotArea as the input feature and SalePrice as the output. Plotting both of them:
fig = plt.figure(figsize=(10, 10))
style.use('ggplot')
plt.scatter(df.LotArea, df.SalePrice, s=5, c='k')
As we can see most of the houses are having a plot area less than 50000 sqft and having a sale price less than 500000 $. All the other cases may be considered as exceptional ones or outliers(depending upon other features).
The basic idea of Linear regression is to find a straight line that fits the set of data points. I will be using the following notations in this article:
m = No. of training examples => 1460 in our case
x’s = ‘Input’ variable/features => LotArea
y’s = ‘Output’ variable/ labels => SalePrice
(x,y) comprises of one training example.
(x^(i), y^(i)) represents the i-th training example.
The process of regression, in general, is explained by the following flow chart:
The training set is seeded into the learning algorithm to produce a hypothesis. A hypothesis is a function that predicts the output values based on the input values. In our case, the LotArea is given as input to the hypothesis to get the estimated SalePrice. h is a function that maps from x’s to y’s.
In the case of Linear regression the hypothesis is represented by the following equation:
theta_0 and theta_1 are the parameters of the hypothesis. I'll be using 'theta_0' and 'theta_1' while referring to the parameters.
The above figure shows a pictorial example of the hypothesis. It is the regression line that fits the given data-points. Here the hypothesis is similar to the equation of a straight line as we are using Linear Regression. We can also use any type of function as the hypothesis to fit the data accordingly. The hypothesis used here is that of linear regression with one variable or also Uni-variate Linear Regression.
Let’s see how to choose the parameters of the hypothesis. The idea here is to choose the parameters so that it best fits the y’s i.e., choose theta_0 and theta_1 such that h(x) is close to the values of y for each x. This condition can be mathematically represented as follows:
where m is the no. of training examples.
The above equation is equivalent to:
Here h(x) is the hypothesis mentioned above and y is the output values for corresponding x values. The above equation reduces to finding the value of theta_0 and theta_1 which minimizes the difference between h(x) which is the estimated output and y which is the actual output.
J(theta_0, theta_1) is called the cost function. There are many types of cost functions available. The mean square error (MSE) cost function is the one usually used in regression problems, which is shown above.
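The equation images did not survive extraction in this article; the standard form of the hypothesis and the MSE cost function, consistent with the surrounding text, is:

```latex
h_\theta(x) = \theta_0 + \theta_1 x, \qquad
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
```

The factor 1/2 is a convention that cancels when the square is differentiated during gradient descent.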
Let's further analyze the cost function. For simplification, consider theta_0 = 0.
The hypothesis then becomes h(x) = theta_1 * x,
and the cost function reduces to a function of theta_1 alone, J(theta_1).
This means that all the hypothesis functions pass through origin i.e., (0,0).
Consider the following data points:
Consider:
The following regression line will be obtained:
The value of the cost function is:
Consider
The regression line obtained will be:
The lines dropping from the data-points to the regression line are the errors between the actual and the estimated values. Calculating cost J
Consider
Plotting the cost for different values of theta_0, we get:
From the plot, we can see that the value of cost function is minimum at
which corresponds to the regression line passing through all the three data-points. From this, we can imply that the value of parameters giving the least value of cost function corresponds to the best-fitting regression line.
Let’s come back to our original cost function. As we had considered theta_0 =0 in the previous section, we were able to draw a 2-D plot for the cost function. But in reality, as cost-function depends on both theta_0 and theta_1, we will be getting a bow-shaped 3-D figure(the shape depends upon the training set). Consider the LotArea and SalePrice features from the housing prices dataset. Let’s try plotting the cost function for those features.
from mpl_toolkits.mplot3d.axes3d import Axes3D
import numpy as np

m = len(df.LotArea)
theta_0 = np.linspace(10000, 100000, num=1000)
theta_1 = np.linspace(5, 10, num=1000)
Theta_0, Theta_1 = np.meshgrid(theta_0, theta_1)

def cost(theta_0, theta_1, LotArea, SalePrice):
    h = 0
    for i in range(len(LotArea)):
        h += ((theta_0 + (theta_1 * LotArea[i])) - SalePrice[i]) ** 2
    J = (1 / (2 * len(LotArea))) * h
    return J

fig = plt.figure(figsize=(12, 12))
style.use('ggplot')
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(Theta_0, Theta_1, cost(Theta_0, Theta_1, df.LotArea, df.SalePrice), cmap='viridis')
ax.set_xlabel('θ0')
ax.set_ylabel('θ1')
ax.set_zlabel('J')
The following plot is obtained:
The XY plane represents the different values of theta_0 and theta_1 and the height of the surface plot from any point on the XY plane gives the value of J for the values of theta_0 and theta_1 corresponding to the point. We can also draw a contour plot to represent J(theta_0, theta_1).
fig = plt.figure(figsize=(12, 12))
style.use('ggplot')
ax = fig.add_subplot(111)
cs = ax.contourf(Theta_0, Theta_1, cost(Theta_0, Theta_1, df.LotArea, df.SalePrice))
cbar = fig.colorbar(cs)
ax.set_xlabel('θ0')
ax.set_ylabel('θ1')
plt.show()
The following contour plot is obtained:
The side-bar shows the changing values of J with the variation in the colors of the plot. All the points in a particular ring in the contour plot have the same value of J for different values of theta_0 and theta_1.
Lets plot the regression lines for the values of theta_0 and theta_1 that we have considered above:
minj = np.min(cost(Theta_0, Theta_1, df.LotArea, df.SalePrice))
point = np.array(cost(Theta_0, Theta_1, df.LotArea, df.SalePrice)) == minj
position = np.where(point)
position

output>>
(array([9]), array([999]))
The output shows the position of the values of theta_0 and theta_1 that give the minimum cost value.
theta_1_min = Theta_1[9][999]
theta_0_min = Theta_0[9][999]

def fitline(theta_0, theta_1, LotArea):
    x = []
    for i in range(m):
        x.append(theta_0 + (theta_1 * LotArea[i]))
    return x

fig = plt.figure(figsize=(10, 10))
style.use('ggplot')
count = 0
for i in range(len(theta_0)):
    plt.plot(df.LotArea, fitline(theta_0[i], theta_1[i], df.LotArea), color='k', alpha=0.1, linewidth=0.1)
    count += 1
print(count)
plt.scatter(df.LotArea, df.SalePrice, s=5, c='r')
plt.plot(df.LotArea, (theta_0_min + (theta_1_min * df.LotArea)), color='k')
plt.show()
The shaded area consists of 1000 lines corresponding to the theta_0 and theta_1 values, and the highlighted line corresponds to the values of the parameters for which we get the minimum cost. This is not the actual minimum cost, but only the minimum within the considered range of parameters.
The algorithm used to find the values of the parameters which gives the actual minimum value of cost is Gradient Descent. It is a general algorithm which is used to minimize not only cost function J but also other function as well. The general outline of this algorithm will be:
Start with some value of theta_0 and theta_1(generally both are set to zero.)
Keep changing the values until we reach the minimum value of J(theta_0, theta_1).
The algorithm is as follows:
where
Alpha, or the learning rate, decides the size of the steps that the algorithm takes to reach the minimum cost value (i.e., the values of the parameters which give the minimum cost). A very important detail to notice is that we have to update theta_0 and theta_1 simultaneously. We shouldn't update theta_0, recompute the cost function, and then update theta_1 — that doesn't work the way we want.
By looking into the partial derivative part of the algorithm, consider the 2-D cost function plot that we plotted earlier. Assume we are updating the value of theta_1 as we neglected theta_0 in that case.
The partial derivative term gives us the slope of the tangent line at each theta_1 point on the plot. The algorithm will be:
Consider the point and its tangent line in the above plot. As we can see the tangent line has a positive slope, which puts the equation as:
Therefore we can see that the value of theta_1 is reducing, which is what we have to do to reach the minimum point step by step. If we consider a value of theta_1 on the left side of the plot, the slope will be a negative number, thus increasing theta_1 and reaching the minimum point eventually.
The learning rate(alpha) can also affect the path of convergence. If alpha is too small then the algorithm takes a long time to reach the minimum value thereby delaying convergence.
If alpha is too large then the algorithm can overshoot the minimum thus not converging or maybe even diverging, as shown in the above plot.
Consider applying the algorithm when we have already reached a local minimum. The partial derivative term will be zero, as the tangent line is horizontal at that point, so the parameter value won't change.
As we approach a local minimum value, the slope of the tangent lines keeps on decreasing to zero. So the value by which the parameters are decreased or increased also reduces as we near the minimum value. By this, we can see that the algorithm converges even if we maintain a constant value of alpha.
Applying this Gradient Descent algorithm to the problem of Univariate Linear regression.
For j = 0
For j = 1
which gives us the gradient descent algorithm as
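The derivative images are missing from the extracted text; the standard partial derivatives for j = 0 and j = 1, and the resulting simultaneous update rules, matching the cost function defined earlier, are:

```latex
% For j = 0 and j = 1 respectively:
\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right), \qquad
\frac{\partial J}{\partial \theta_1} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}

% Repeat until convergence, updating both parameters simultaneously:
\theta_0 := \theta_0 - \alpha \frac{\partial J}{\partial \theta_0}, \qquad
\theta_1 := \theta_1 - \alpha \frac{\partial J}{\partial \theta_1}
```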
Consider the housing prices dataset again. Let's perform Gradient Descent on the feature LotArea and the output SalePrice.
from sklearn import preprocessing

x = df.LotArea
y = df.SalePrice
x = preprocessing.scale(x)

theta_0_gd = 0
theta_1_gd = 0
alpha = 0.01
h_theta_0_gd = 1
h_theta_1_gd = 1
epoch = 0

fig = plt.figure(figsize=(10, 10))
style.use('ggplot')
plt.scatter(x, y, s=5, c='r')
while h_theta_0_gd != 0 or h_theta_1_gd != 0:
    if epoch > 1000:
        break
    h_theta_0_gd = 0
    h_theta_1_gd = 0
    for i in range(m):
        h_theta_0_gd += (theta_0_gd + (theta_1_gd * x[i]) - y[i])
        h_theta_1_gd += ((theta_0_gd + (theta_1_gd * x[i]) - y[i]) * x[i])
    h_theta_0_gd = (1 / m) * h_theta_0_gd
    h_theta_1_gd = (1 / m) * h_theta_1_gd
    theta_0_gd -= (alpha * h_theta_0_gd)
    theta_1_gd -= (alpha * h_theta_1_gd)
    plt.plot(x, (theta_0_gd + (theta_1_gd * x)), color='k', alpha=0.1, linewidth=1)
    epoch += 1
plt.plot(x, (theta_0_gd + (theta_1_gd * x)), color='r', linewidth=3)
plt.show()
The initial values of the parameters are set to zero. The learning rate is set as 0.01. A maximum of 1000 repetitions, or epochs, is allowed. The shaded part of the plot consists of the regression lines plotted from the parameters obtained in each epoch. The highlighted red line is the final regression line after 1000 epochs. As we can see, the density of the lines increases as they near the red line. This happens because the steps taken as the algorithm nears the minimum are small, reducing the space between the corresponding regression lines in each epoch. To confirm that we have correctly applied the algorithm, we can cross-verify it with the LinearRegression model from sklearn.
from sklearn import model_selection
from sklearn.linear_model import LinearRegression

x_model = np.array(df.LotArea).reshape(-1, 1)
y_model = np.array(df.SalePrice).reshape(-1, 1)
x_model = preprocessing.scale(x_model)
x_train, x_test, y_train, y_test = model_selection.train_test_split(x_model, y_model, test_size=0.33)

clf = LinearRegression()
clf.fit(x_train, y_train)
theta_1_model = clf.coef_
theta_0_model = clf.intercept_

fig = plt.figure(figsize=(10, 10))
style.use('ggplot')
plt.scatter(x, y, s=5, c='r')
plt.plot(x_model, (theta_0_model + (theta_1_model * x_model)), color='k')
As we can see the regression model from sklearn also produces the same graph.
Conclusion: We came to know about the different types of machine learning and also the types of supervised learning. We saw the concepts behind hypothesis, cost function, and gradient descent of univariate linear regression and built a regression line on the housing prices dataset’s features using the above concepts from scratch. It was then compared with the model built using sklearn’s LinearRegression model.
References:
I am learning most of the concepts about Machine learning from this Youtube playlist. It is quite helpful and easy to understand too.
How to Build a Gallery of Streamlit Apps as a Single Web App | by Alan Jones | Towards Data Science | I’ve been playing around with Streamlit for a little while now and have written a handful of experimental apps. While I’m happy enough with them (as far as they go — they are only simple examples), none of them are worthy of a dedicated web page.
So what is the best way of presenting a gallery of apps that can be accessed from a single web page?
You could put all the code in a single app and choose between them with an if statement (as I did in this article), but that's not very scalable — once you get to more than a couple of apps it gets messy.
So how about creating a library of apps with a dispatcher that calls each one depending on the selection in a drop-down menu. That way the apps are written as stand-alone functions. You just create a library folder to contain the apps and the dispatcher calls the appropriate one.
That sounds better, I think.
So, to that end, I have created an application template called, Mustrapp (Multiple Streamlit Apps), (that you can download for free) which makes the process of creating a multiple Streamlit app very easy.
For the apps to work you need to structure them so that they are in a callable function named run(). And, ideally (but not necessarily), contain a description string that will be used as the text in the drop down menu (if you don’t provide one the module name will be used).
So, an app will look like this:
import streamlit as st

description = "My first app"

def run():
    st.header("This is my first app")
    st.write("I hope you like it")

if __name__ == "__main__":
    run()
You can have as many functions as you like in an app, but it must start in a function called run — that's about the only requirement. The last if statement is optional but useful, as it allows you to run the individual app as a stand-alone app, or as part of the multi-app. It basically checks to see if the function has been called as a main module (in which case its __name__ will be __main__). If it is called as a library module (which is what we intend to do) then __name__ will be the module name.
Being able to run the app as a stand-alone function can be useful for debugging purposes as you don’t need the rest of the application to be operational.
Let’s say you write the app above and save it in a file app1.py.
You could then write another app, app2.py, that looks like this:
import streamlit as st

description = "My second app"

def run():
    st.header("This is my second app")
    st.write("Here is an image:")
    st.image("image.png")

if __name__ == "__main__":
    run()
We then put these files in a folder called, for example, stlib — this will be our library folder where all our apps will live. In order for Python to recognise this folder as a library it needs to contain a file called __init__.py. If you’ve downloaded the template it is already there.
The dispatcher lives in the home folder. This is the program that will be run when we start up the Streamlit app. I’m going to call it index.py.
So our complete app is structured like this:
/home
|-- index.py
|
|-- /stlib
    |-- __init__.py
    |-- app1.py
    |-- app2.py
To run the app you execute this:
streamlit run index.py
And this will pop up in your browser:
You can see the full code of the dispatcher in a Gist at the very end of this article (it’s too long to include here). And I will also include, at the end, a link to a Github repository where you can clone, or download a zip file of, the complete template including the dispatcher, the directory structure and a couple of dummy apps.
But basically it works like this.
First we import the Streamlit library (of course) along with the other libraries that we need. (It also sets the layout to wide. This is optional but to my mind it looks better if you are going to use a sidebar, as we are doing here — this can be removed if you prefer.)
We then set about identifying the apps in the library (stlib) and record the names, module references, and descriptions of the apps in global arrays called names, modules, and descriptions. Any module in the stlib library is deemed to be a Streamlit app unless its name begins with the _ character. In this way you can add non-app library modules (e.g. _mylibrary.py) if you need to, and they won't be picked up as Streamlit apps.
Having done this, we create a drop-down menu populated with the module names (the descriptions are displayed, if available) and, once selected, the appropriate module is taken from the modules array and its run() function is executed. In other words, the app that corresponds to the selection is run.
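The discovery logic described above can be sketched as follows. This is an illustration of the mechanism, not the author's actual dispatcher; the throwaway stlib package is built on the fly here only to make the example self-contained, where in practice it is just a folder on disk.

```python
import importlib
import pkgutil
import sys
import tempfile
from pathlib import Path

# Stand-in for the real stlib folder: a throwaway package with two dummy apps.
root = Path(tempfile.mkdtemp())
pkg = root / "stlib"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "app1.py").write_text('description = "My first app"\ndef run():\n    return "app1 ran"\n')
(pkg / "app2.py").write_text('def run():\n    return "app2 ran"\n')  # no description on purpose
(pkg / "_helper.py").write_text("")  # leading underscore: must be skipped
sys.path.insert(0, str(root))

import stlib

names, modules, descriptions = [], [], []
for info in sorted(pkgutil.iter_modules(stlib.__path__), key=lambda m: m.name):
    if not info.name.startswith("_"):                  # skip non-app helper modules
        mod = importlib.import_module(f"stlib.{info.name}")
        names.append(info.name)
        modules.append(mod)
        descriptions.append(getattr(mod, "description", info.name))  # fall back to the module name

print(descriptions)  # ['My first app', 'app2'] -- app2 falls back to its module name
# In the real index.py the choice would come from a sidebar menu, e.g.:
#   choice = st.sidebar.selectbox("Choose an app", descriptions)
#   modules[descriptions.index(choice)].run()
print(modules[0].run())
```

The getattr fallback is what makes the description string optional for each app, as noted earlier.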
Even if I do say so myself, this works quite well. Using the dispatcher app with an arbitrary number of callable apps is easy and it is very straightforward to maintain. Now we can deploy any number of mini-apps in one go, on one web page.
None of this is rocket science but I hope it has been useful. You can see a sample website that uses this template here and you can download, or clone, the complete template from its Github repository.
As ever thanks for reading and if you would like to know when I publish new articles, please consider signing up for an email alert here. I also publish an occasional free newsletter on Substack. Please leave any comments below or you can get in touch via LinkedIn or Twitter.
alanjones2.github.io
Here, for interest, is the dispatcher code:
How to sort an ArrayList in C#? | To sort an ArrayList in C#, use the Sort() method.
The following is the ArrayList.
ArrayList arr = new ArrayList();
arr.Add(32);
arr.Add(12);
arr.Add(55);
arr.Add(8);
arr.Add(13);
Now the Sort() method is used to sort the ArrayList.
arr.Sort();
You can try to run the following code to sort an ArrayList in C#.
using System;
using System.Collections;
namespace Demo {
class Program {
static void Main(string[] args) {
ArrayList arr = new ArrayList();
arr.Add(32);
arr.Add(12);
arr.Add(55);
arr.Add(8);
arr.Add(13);
Console.Write("List: ");
foreach (int i in arr) {
Console.Write(i + " ");
}
Console.WriteLine();
Console.Write("Sorted List: ");
arr.Sort();
foreach (int i in arr) {
Console.Write(i + " ");
}
Console.WriteLine();
Console.ReadKey();
}
}
}
List: 32 12 55 8 13
Sorted List: 8 12 13 32 55
do...while loop examples
Copyright © 2014 by tutorialspoint
Try the following example to understand the do...while loop. Put the following code into a test.c file, then compile and run it.
#include <stdio.h>

int main()
{
   int i = 10;
   do {
      printf("Hello %d\n", i);
      i = i - 1;
   } while (i > 0);
   return 0;
}
This will produce the following output:
Hello 10
Hello 9
Hello 8
Hello 7
Hello 6
Hello 5
Hello 4
Hello 3
Hello 2
Hello 1
You can make use of break to come out of do...while loop at any time.
#include <stdio.h>

int main()
{
   int i = 10;
   do {
      printf("Hello %d\n", i);
      i = i - 1;
      if (i == 6) {
         break;
      }
   } while (i > 0);
   return 0;
}
This will produce the following output:
Hello 10
Hello 9
Hello 8
Hello 7
Semi-Automated Exploratory Data Analysis (EDA) in Python | by Destin Gong | Towards Data Science | Exploratory Data Analysis, also known as EDA, has become an increasingly hot topic in data science. Just as the name suggests, it is the process of trial and error in an uncertain space, with the goal of finding insights. It usually happens at the early stage of the data science lifecycle. Although there is no clear-cut boundary between data exploration, data cleaning, and feature engineering, EDA is generally found to sit right after the data cleaning phase and before feature engineering or model building. EDA assists in setting the overall direction of model selection, and it helps to check whether the data has met the model assumptions. As a result, carrying out this preliminary analysis may save you a large amount of time in the following steps.
In this article, I have created a semi-automated EDA process that can be broken down into the following steps:
Know Your Data
Data Manipulation and Feature Engineering
Univariate Analysis
Multivariate Analysis
Feel free to jump to the part that you are interested in, or grab the full code from my website if you find it helpful.
Firstly, we need to load the Python libraries and the dataset. For this exercise, I am using several public datasets from the Kaggle community; feel free to explore these amazing datasets using the links below:
Restaurant Business Rankings 2020
Reddit WallStreetBets Posts
I will be using four main libraries: Numpy — to work with arrays; Pandas — to manipulate data in a spreadsheet format that we are familiar with; Seaborn and matplotlib — to create data visualization.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from pandas.api.types import is_string_dtype, is_numeric_dtype
Create a data frame from the imported dataset by copying the path of the dataset and use df.head(5) to take a peek at the first 5 rows of the data.
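As a minimal sketch of this step (the file name and the column names/values below are invented stand-ins, not the actual Kaggle schema):

```python
import pandas as pd

# In the article the frame comes from a Kaggle CSV, e.g.
#   df = pd.read_csv("reddit_wsb.csv")   # path/file name is an assumption
# A tiny invented frame keeps the sketch self-contained:
df = pd.DataFrame({
    "title": ["t1", "t2", "t3", "t4", "t5", "t6"],
    "score": [120, 85, 40, 230, 15, 60],
    "body":  [None, "text", None, "text", None, None],
})

# Peek at the first 5 rows
print(df.head(5))
```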
Before zooming into each field, let’s first take a bird’s eye view of the overall dataset characteristics.
df.info() gives the count of non-null values for each column and its data type.
The describe() function provides basic statistics of each column. By passing the parameter "include = 'all'", it outputs the value count, unique count, and top-frequency value of the categorical variables, as well as the count, mean, standard deviation, min, max, and percentiles of the numeric variables.
If we leave it empty, it only shows numeric variables. As you can see, only columns being identified as “int64” in the info() output are shown below.
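A small sketch of both calls, on an invented stand-in frame with one numeric and one categorical column:

```python
import pandas as pd

# Invented stand-in data (the article uses the Kaggle datasets)
df = pd.DataFrame({
    "score":     [120, 85, 40, 230, 15],
    "with_body": ["No", "Yes", "No", "Yes", "No"],
})

# Non-null counts and dtypes per column
df.info()

# describe() alone summarises only numeric columns; include='all'
# adds count/unique/top/freq for the categorical ones too
summary = df.describe(include="all")
print(summary)
```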
Handling missing values is a rabbit hole that cannot be covered in one or two sentences. If you would love to know how to address missing values in the model lifecycle and understand different types of missing data, here are some articles that may help:
towardsdatascience.com
medium.com
In this article, we will focus on identifying the number of missing values. isnull().sum() returns the number of missing values for each column.
We can also do some simple manipulations to make the output more insightful. Firstly, calculate the percentage of missing values.
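A sketch of this manipulation (the stand-in frame is invented; in the article the percentages come from the real dataset):

```python
import pandas as pd

# Invented stand-in frame: "body" is missing in 3 of 4 rows
df = pd.DataFrame({
    "title": ["a", "b", "c", "d"],
    "body":  [None, "text", None, None],
})

# Number of missing values per column
missing_counts = df.isnull().sum()

# Turn the counts into percentages and wrap them in a small
# frame (the article's "missing_df") ready for bar plotting
missing_df = (missing_counts / len(df) * 100).reset_index()
missing_df.columns = ["column", "missing_pct"]
print(missing_df)
```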
Then, visualize the percentage of the missing value based on the data frame “missing_df”. The for loop is basically a handy way to add labels to the bars. As we can see from the chart, nearly half of the “body” values from the “reddit_wsb” dataset are missing, which leads us to the next step “feature engineering”.
This is the only part that requires some human judgment, thus cannot be easily automated. Don’t be afraid of this terminology. I think of feature engineering as a fancy way of saying transforming the data at hand to make it more insightful. There are several common techniques, e.g. change the date of birth into age, decomposing date into year, month, day, and binning numeric values. But the general rule is that this process should be tailored to both the data at hand and the objectives to achieve. If you would like to know more about these techniques, I found this article “Fundamental Techniques of Feature Engineering for Machine Learning” brings a holistic view of feature engineering in practice. And if you would like to know more about feature selection and feature engineering techniques, you may find these helpful:
towardsdatascience.com
towardsdatascience.com
For the “reddit_wsb” dataset, I simply did three manipulations on the existing data.
1. title → title_length;
df['title_length'] = df['title'].apply(len)
As a result, the high-cardinality column “title” has been transformed into a numeric variable which can be further used in the correlation analysis.
2. body → with_body
df['with_body'] = np.where(df['body'].isnull(), 'Yes', 'No')
Since there is a large portion of missing values, the “body” field is transformed into either with_body = “Yes” and with_body = “No”, thus it can be easily analyzed as a categorical variable.
3. timestamp→ month
df['month'] = pd.to_datetime(df['timestamp']).dt.month.apply(str)
Since most data are gathered from the year 2021, there is no point in comparing years. Therefore I kept only the month portion of the date, which also helps to group the data into larger subsets.
In order to streamline the further analysis, I drop the columns that won’t contribute to the EDA.
df = df.drop(['id', 'url', 'timestamp', 'title', 'body'], axis=1)
For the “restaurant” dataset, the data is already clean enough, therefore I simply trimmed out the columns with high cardinality.
df = df.drop(['Restaurant', 'City'], axis=1)
Furthermore, the remaining variables are categorized into numerical and categorical, since univariate analysis and multivariate analysis require different approaches to handle different data types. “is_string_dtype” and “is_numeric_dtype” are handy functions to identify the data type of each field.
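A sketch of that split, assuming an invented stand-in frame:

```python
import pandas as pd
from pandas.api.types import is_string_dtype, is_numeric_dtype

# Invented stand-in frame
df = pd.DataFrame({
    "score":     [1, 2, 3],
    "month":     ["1", "2", "2"],
    "with_body": ["Yes", "No", "No"],
})

# Split columns into numerical and categorical lists by dtype
numeric_cols, categorical_cols = [], []
for col in df.columns:
    if is_numeric_dtype(df[col]):
        numeric_cols.append(col)
    elif is_string_dtype(df[col]):
        categorical_cols.append(col)

print(numeric_cols, categorical_cols)
```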
After finalizing the numerical and categorical variables lists, the univariate and multivariate analysis can be automated.
The describe() function mentioned in the first section has already provided a univariate analysis in a non-graphical way. In this section, we will generate more insights by visualizing the data and spotting hidden patterns through graphical analysis.
Have a read of my article on “How to Choose the Most Appropriate Chart” if you are interested in knowing which chart types are most suitable for which data type.
towardsdatascience.com
Categorical Variables → Bar chart
The easiest yet most intuitive way to visualize the property of a categorical variable is to use a bar chart to plot the frequency of each categorical value.
Numerical Variables → histogram
To graph out the numeric variable distribution, we can use a histogram, which is very similar to a bar chart. It splits continuous numbers into equal-size bins and plots the frequency of records falling within each interval.
I use this for loop to iterate through the columns in the data frame and create a plot for each column: a histogram if the variable is numerical and a bar chart if it is categorical.
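A minimal sketch of that loop on an invented stand-in frame (the article draws with seaborn; plain matplotlib is used here for the same effect):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import pandas as pd
from pandas.api.types import is_numeric_dtype

# Invented stand-in frame
df = pd.DataFrame({
    "score":     [1, 5, 3, 8, 2, 5],
    "with_body": ["Yes", "No", "No", "Yes", "No", "No"],
})

plotted = []
for col in df.columns:
    fig, ax = plt.subplots()
    if is_numeric_dtype(df[col]):
        ax.hist(df[col])  # histogram for numerical columns
    else:
        df[col].value_counts().plot(kind="bar", ax=ax)  # bar chart otherwise
    ax.set_title(col)
    plotted.append(col)
    plt.close(fig)

print(plotted)
```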
Multivariate analysis is categorized into these three conditions to address various combinations of numerical variables and categorical variables.
Firstly, let’s use the correlation matrix to find the correlation of all numeric data type columns. Then use a heat map to visualize the result. The annotation inside each cell indicates the correlation coefficient of the relationship.
Secondly, since the correlation matrix only indicates the strength of a linear relationship, it is better to also plot the numerical variables using the seaborn function sns.pairplot(). Notice that both the sns.heatmap() and sns.pairplot() functions ignore non-numeric data types.
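A sketch of the correlation step. The numbers below are invented to mimic the Rank/Sales relationship; the article draws the matrix with sns.heatmap(corr, annot=True), for which a plain-matplotlib imshow is used as a stand-in here:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import pandas as pd

# Invented numbers mimicking the restaurant dataset's Rank/Sales pattern
df = pd.DataFrame({
    "Rank":  [1, 2, 3, 4, 5],
    "Sales": [500, 400, 350, 300, 250],
    "Units": [50, 45, 30, 35, 20],
})

# Correlation matrix of the numeric columns
corr = df.corr()

# Stand-in for sns.heatmap(corr, annot=True)
fig, ax = plt.subplots()
im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
ax.set_xticks(range(len(corr)))
ax.set_xticklabels(corr.columns)
ax.set_yticks(range(len(corr)))
ax.set_yticklabels(corr.columns)
fig.colorbar(im)
plt.close(fig)

print(corr.round(2))
```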
Pair plot or scatterplot is a good complement to the correlation matrix, especially when nonlinear relationships (e.g. exponential, inverse relationship) might exist. For example, the inverse relationship between "Rank" and "Sales" observed in the restaurant dataset may be mistaken for a strong linear relationship if we simply look at the number "-0.92" in the correlation matrix.
The relationship between two categorical variables can be visualized using grouped bar charts. The frequency of the primary categorical variables is broken down by the secondary category. This can be achieved using sns.countplot().
I use a nested for loop, where the outer loop iterates through all categorical variables and assigns them as the primary category, then the inner loop iterates through the list again to pair the primary category with a different secondary category.
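A sketch of that nested loop on an invented stand-in frame. The article draws the grouped bars with sns.countplot(x=primary, hue=secondary, data=df); pd.crosstab is used here to print the same frequencies numerically:

```python
import pandas as pd

# Invented stand-in data with two categorical columns
df = pd.DataFrame({
    "month":     ["1", "1", "2", "2", "2", "3"],
    "with_body": ["Yes", "No", "No", "Yes", "No", "No"],
})
categorical = ["month", "with_body"]

# Nested loop: every ordered pair of distinct categorical columns
for primary in categorical:
    for secondary in categorical:
        if primary == secondary:
            continue
        print(pd.crosstab(df[primary], df[secondary]), "\n")
```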
Within one grouped bar chart, if the frequency distribution always follows the same pattern across different groups, it suggests that there is no dependency between the primary and secondary category. However, if the distribution is different then it indicates that it is likely that there is a dependency between two variables.
Since there is only one categorical variable in the “restaurant” dataset, no graph is generated.
Box plot is usually adopted when we need to compare how numerical data varies across groups. It is an intuitive way to graphically depict if the variation in categorical features contributes to the difference in values, which can be additionally quantified using ANOVA analysis. In this process, I pair each column in the categorical list with all columns in the numerical list and plot the box plot accordingly.
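A sketch of that pairing loop on invented stand-in data. The article uses sns.boxplot(x=cat, y=num, data=df); DataFrame.boxplot(column=num, by=cat) groups the same way:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import pandas as pd

# Invented stand-in data
df = pd.DataFrame({
    "score":     [10, 12, 30, 28, 55, 60],
    "with_body": ["No", "No", "Yes", "Yes", "No", "Yes"],
})
numeric = ["score"]
categorical = ["with_body"]

# One box plot per (categorical, numerical) pair
pairs = []
for cat in categorical:
    for num in numeric:
        df.boxplot(column=num, by=cat)
        pairs.append((cat, num))
        plt.close("all")

print(pairs)
```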
In the “reddit_wsb” dataset, no significant difference is observed across different categories.
On the other hand, the "restaurant" dataset gives us some interesting output. Some states (e.g. "Mich.") seem to jump all around the plots. Is it just because of the relatively smaller sample size for these states? This might be worth further investigation.
Another approach is built upon the pairplot that we performed earlier for numerical vs. numerical. To introduce the categorical variable, we can use different hues to represent it, just like what we did for the countplot. To do this, we simply loop through the categorical list and add each element as the hue of the pairplot.
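The article loops sns.pairplot(df, hue=cat); as a self-contained stand-in, a single scatter panel coloured per group (on invented data with two visible clusters) shows the same idea:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import pandas as pd

# Invented stand-in data with two clusters
df = pd.DataFrame({
    "score":     [1, 2, 3, 10, 11, 12],
    "comments":  [5, 6, 4, 20, 22, 19],
    "with_body": ["No", "No", "No", "Yes", "Yes", "Yes"],
})

# One scatter series per category value acts as the "hue"
fig, ax = plt.subplots()
hues = []
for label, grp in df.groupby("with_body"):
    ax.scatter(grp["score"], grp["comments"], label=label)
    hues.append(label)
ax.legend()
plt.close(fig)

print(hues)
```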
Consequently, it is easy to visualize whether each group forms clusters in the scatterplot.
Hope you enjoy my article :). If you would like to read more of my articles on Medium, please contribute by signing up Medium membership using this affiliate link (https://destingong.medium.com/membership).
This article covers several steps to perform EDA:
Know Your Data: have a bird’s view of the characteristics of the dataset.
Feature Engineering: transform variables into something more insightful.
Univariate Analysis: 1) histogram to visualize numerical data; 2) bar chart to visualize categorical data.
Multivariate Analysis: 1) Numerical vs. Numerical: correlation matrix, scatterplot (pairplot); 2) Categorical vs. Categorical: Grouped bar chart; 3) Numerical vs. Categorical: pairplot with hue, box plot.
Feel free to grab the code from my website. As mentioned earlier, other than the feature engineering part, the rest of the analysis can be automated. However, it is always better when the automation process is accompanied by some human touch, for example, experimenting with the bin size to optimize the histogram distribution. As always, I hope you find this article helpful and I encourage you to give it a go with your own dataset :)
Originally published at https://www.visual-design.net on February 28, 2021.
Find the direction from given string - GeeksforGeeks | 01 Apr, 2021
Given a string containing only L's and R's, which represent left rotation and right rotation respectively, the task is to find the final direction of the pivot (i.e. N/E/S/W). Initially, the pivot points towards the north (N) on a compass.
Examples:
Input: str = "LLRLRRL"
Output: W
In this input string we rotate the pivot to the
left when an L char is encountered and to the
right when an R is encountered.
Input: str = "LL"
Output: S
Approach:
Use a counter that is incremented on seeing R and decremented on seeing L.
Finally, use modulo on the counter to get the direction.
If the count is negative, the directions differ; the code below handles the negative case as well.
Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// CPP implementation of above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find the final direction
string findDirection(string s)
{
    int count = 0;
    // count == 0 leaves the pivot facing north
    string d = "N";

    for (int i = 0; i < (int)s.length(); i++) {
        if (s[i] == 'L')
            count--;
        else if (s[i] == 'R')
            count++;
    }

    // if count is positive that implies
    // resultant is clockwise direction
    if (count > 0) {
        if (count % 4 == 0)
            d = "N";
        else if (count % 4 == 1)
            d = "E";
        else if (count % 4 == 2)
            d = "S";
        else if (count % 4 == 3)
            d = "W";
    }

    // if count is negative that implies
    // resultant is anti-clockwise direction
    if (count < 0) {
        if (count % 4 == 0)
            d = "N";
        else if (count % 4 == -1)
            d = "W";
        else if (count % 4 == -2)
            d = "S";
        else if (count % 4 == -3)
            d = "E";
    }

    return d;
}

// Driver code
int main()
{
    string s = "LLRLRRL";
    cout << findDirection(s) << endl;

    s = "LL";
    cout << findDirection(s) << endl;
}

// This code is contributed by
// SURENDRA_GANGWAR
// Java implementation of above approachimport java.util.*; class GFG { // Function to find the final direction static String findDirection(String s) { int count = 0; String d = ""; for (int i = 0; i < s.length(); i++) { if (s.charAt(0) == '\n') return null; if (s.charAt(i) == 'L') count--; else { if (s.charAt(i) == 'R') count++; } } // if count is positive that implies // resultant is clockwise direction if (count > 0) { if (count % 4 == 0) d = "N"; else if (count % 4 == 1) d = "E"; else if (count % 4 == 2) d = "S"; else if (count % 4 == 3) d = "W"; } // if count is negative that implies // resultant is anti-clockwise direction if (count < 0) { if (count % 4 == 0) d = "N"; else if (count % 4 == -1) d = "W"; else if (count % 4 == -2) d = "S"; else if (count % 4 == -3) d = "E"; } return d; } // Driver code public static void main(String[] args) { String s = "LLRLRRL"; System.out.println(findDirection(s)); s = "LL"; System.out.println(findDirection(s)); }}
# Python3 implementation of
# the above approach

# Function to find the
# final direction
def findDirection(s):
    count = 0
    # count == 0 leaves the pivot facing north
    d = "N"

    for i in range(len(s)):
        if (s[i] == 'L'):
            count -= 1
        elif (s[i] == 'R'):
            count += 1

    # if count is positive that
    # implies resultant is clockwise
    # direction
    if (count > 0):
        if (count % 4 == 0):
            d = "N"
        elif (count % 4 == 1):
            d = "E"
        elif (count % 4 == 2):
            d = "S"
        elif (count % 4 == 3):
            d = "W"

    # if count is negative that
    # implies resultant is anti-
    # clockwise direction
    if (count < 0):
        count *= -1
        if (count % 4 == 0):
            d = "N"
        elif (count % 4 == 1):
            d = "W"
        elif (count % 4 == 2):
            d = "S"
        elif (count % 4 == 3):
            d = "E"

    return d

# Driver code
if __name__ == '__main__':
    s = "LLRLRRL"
    print(findDirection(s))

    s = "LL"
    print(findDirection(s))

# This code is contributed by 29AjayKumar
// C# implementation of above approachusing System; class GFG { // Function to find the final direction static String findDirection(String s) { int count = 0; String d = ""; for (int i = 0; i < s.Length; i++) { if (s[0] == '\n') return null; if (s[i] == 'L') count--; else { if (s[i] == 'R') count++; } } // if count is positive that implies // resultant is clockwise direction if (count > 0) { if (count % 4 == 0) d = "N"; else if (count % 4 == 1) d = "E"; else if (count % 4 == 2) d = "S"; else if (count % 4 == 3) d = "W"; } // if count is negative that implies // resultant is anti-clockwise direction if (count < 0) { if (count % 4 == 0) d = "N"; else if (count % 4 == -1) d = "W"; else if (count % 4 == -2) d = "S"; else if (count % 4 == -3) d = "E"; } return d; } // Driver code public static void Main() { String s = "LLRLRRL"; Console.WriteLine(findDirection(s)); s = "LL"; Console.WriteLine(findDirection(s)); }} // This code is contributed by Shashank
<?php// PHP implementation of above approach // Function to find the final directionfunction findDirection($s){ $count = 0; $d = ""; for ($i = 0; $i < strlen($s); $i++) { if ($s[0] == '\n') return null; if ($s[$i] == 'L') $count -= 1; else { if ($s[$i] == 'R') $count += 1; } } // if count is positive that implies // resultant is clockwise direction if ($count > 0) { if ($count % 4 == 0) $d = "N"; else if ($count % 4 == 1) $d = "E"; else if ($count % 4 == 2) $d = "S"; else if ($count % 4 == 3) $d = "W"; } // if count is negative that // implies resultant is // anti-clockwise direction if ($count < 0) { if ($count % 4 == 0) $d = "N"; else if ($count % 4 == -1) $d = "W"; else if ($count % 4 == -2) $d = "S"; else if ($count % 4 == -3) $d = "E"; } return $d;} // Driver code$s = "LLRLRRL";echo findDirection($s)."\n"; $s = "LL";echo findDirection($s)."\n"; // This code is contributed// by ChitraNayal ?>
<script> // Javascript implementation of above approach // Function to find the final direction function findDirection(s) { let count = 0; let d = ""; for (let i = 0; i < s.length; i++) { if (s[0] == '\n') return null; if (s[i] == 'L') count--; else { if (s[i] == 'R') count++; } } // if count is positive that implies // resultant is clockwise direction if (count > 0) { if (count % 4 == 0) d = "N"; else if (count % 4 == 1) d = "E"; else if (count % 4 == 2) d = "S"; else if (count % 4 == 3) d = "W"; } // if count is negative that implies // resultant is anti-clockwise direction if (count < 0) { if (count % 4 == 0) d = "N"; else if (count % 4 == -1) d = "W"; else if (count % 4 == -2) d = "S"; else if (count % 4 == -3) d = "E"; } return d; } let s = "LLRLRRL"; document.write(findDirection(s) + "</br>"); s = "LL"; document.write(findDirection(s)); // This code is contributed by divyeshrabadiya07.</script>
W
S
Time Complexity: O(N), where N is the length of the string
Auxiliary Space: O(1)
Find original array from encrypted array (An array of sums of other elements) - GeeksforGeeks | 06 Dec, 2021
Find the original array from a given encrypted array of size n. The encrypted array is obtained by replacing each element of the original array with the sum of the remaining array elements.

Examples:
Input : arr[] = {10, 14, 12, 13, 11}
Output : {5, 1, 3, 2, 4}
Original array {5, 1, 3, 2, 4}
Encrypted array is obtained as:
= {1+3+2+4, 5+3+2+4, 5+1+2+4, 5+1+3+4, 5+1+3+2}
= {10, 14, 12, 13, 11}
Each element of original array is replaced by the
sum of the remaining array elements.
Input : arr[] = {95, 107, 103, 88, 110, 87}
Output : {23, 11, 15, 30, 8, 31}
Approach is purely based on arithmetic observations which are illustrated below:
Let n = 4, and
the original array be ori[] = {a, b, c, d}
encrypted array is given as:
arr[] = {b+c+d, a+c+d, a+b+d, a+b+c}
Elements of encrypted array are :
arr[0] = (b+c+d), arr[1] = (a+c+d),
arr[2] = (a+b+d), arr[3] = (a+b+c)
add up all the elements
sum = arr[0] + arr[1] + arr[2] + arr[3]
= (b+c+d) + (a+c+d) + (a+b+d) + (a+b+c)
= 3(a+b+c+d)
Sum of elements of ori[] = sum / (n-1)
= sum/3
= (a+b+c+d)
Thus, for a given encrypted array arr[] of size n, the sum of
the elements of the original array ori[] can be calculated as:
sum = (arr[0]+arr[1]+....+arr[n-1]) / (n-1)
Then, elements of ori[] are calculated as:
ori[0] = sum - arr[0]
ori[1] = sum - arr[1]
.
.
ori[n-1] = sum - arr[n-1]
Below is the implementation of above steps.
C++
Java
Python 3
C#
PHP
Javascript
// C++ implementation to find original array
// from the encrypted array
#include <bits/stdc++.h>
using namespace std;

// Finds and prints the elements of the original
// array
void findAndPrintOriginalArray(int arr[], int n)
{
    // total sum of elements
    // of encrypted array
    int arr_sum = 0;
    for (int i = 0; i < n; i++)
        arr_sum += arr[i];

    // total sum of elements
    // of original array
    arr_sum = arr_sum / (n - 1);

    // calculating and displaying
    // elements of original array
    for (int i = 0; i < n; i++)
        cout << (arr_sum - arr[i]) << " ";
}

// Driver program to test above
int main()
{
    int arr[] = { 10, 14, 12, 13, 11 };
    int n = sizeof(arr) / sizeof(arr[0]);
    findAndPrintOriginalArray(arr, n);
    return 0;
}
import java.util.*; class GFG { // Finds and prints the elements of the original // array static void findAndPrintOriginalArray(int arr[], int n) { // total sum of elements // of encrypted array int arr_sum = 0; for (int i = 0; i < n; i++) { arr_sum += arr[i]; } // total sum of elements // of original array arr_sum = arr_sum / (n - 1); // calculating and displaying // elements of original array for (int i = 0; i < n; i++) { System.out.print(arr_sum - arr[i] + " "); } } // Driver code public static void main(String[] args) { int arr[] = { 10, 14, 12, 13, 11 }; int n = arr.length; findAndPrintOriginalArray(arr, n); }} // This code is contributed by rj13to.
# Python 3 implementation to find
# original array from the encrypted
# array

# Finds and prints the elements of
# the original array
def findAndPrintOriginalArray(arr, n):

    # total sum of elements
    # of encrypted array
    arr_sum = 0
    for i in range(0, n):
        arr_sum += arr[i]

    # total sum of elements
    # of original array
    arr_sum = int(arr_sum / (n - 1))

    # calculating and displaying
    # elements of original array
    for i in range(0, n):
        print((arr_sum - arr[i]), end = " ")

# Driver program to test above
arr = [10, 14, 12, 13, 11]
n = len(arr)
findAndPrintOriginalArray(arr, n)

# This code is contributed By Smitha
// C# program to find original// array from the encrypted arrayusing System; class GFG { // Finds and prints the elements // of the original array static void findAndPrintOriginalArray(int []arr, int n) { // total sum of elements // of encrypted array int arr_sum = 0; for (int i = 0; i < n; i++) arr_sum += arr[i]; // total sum of elements // of original array arr_sum = arr_sum / (n - 1); // calculating and displaying // elements of original array for (int i = 0; i < n; i++) Console.Write(arr_sum - arr[i] + " "); } // Driver Code public static void Main (String[] args) { int []arr = {10, 14, 12, 13, 11}; int n =arr.Length; findAndPrintOriginalArray(arr, n); }} // This code is contributed by parashar...
<?php// PHP implementation to find// original array from the// encrypted array // Finds and prints the elements// of the original arrayfunction findAndPrintOriginalArray($arr, $n){ // total sum of elements // of encrypted array $arr_sum = 0; for ( $i = 0; $i < $n; $i++) $arr_sum += $arr[$i]; // total sum of elements // of original array $arr_sum = $arr_sum / ($n - 1); // calculating and displaying // elements of original array for ( $i = 0; $i < $n; $i++) echo $arr_sum - $arr[$i] , " ";} // Driver Code$arr = array(10, 14, 12, 13, 11);$n = count($arr);findAndPrintOriginalArray($arr, $n); // This code is contributed by anuj_67.?>
<script> // Javascript program to find original // array from the encrypted array // Finds and prints the elements // of the original array function findAndPrintOriginalArray(arr, n) { // total sum of elements // of encrypted array let arr_sum = 0; for (let i = 0; i < n; i++) arr_sum += arr[i]; // total sum of elements // of original array arr_sum = parseInt(arr_sum / (n - 1), 10); // calculating and displaying // elements of original array for (let i = 0; i < n; i++) document.write(arr_sum - arr[i] + " "); } let arr = [10, 14, 12, 13, 11]; let n =arr.length; findAndPrintOriginalArray(arr, n); // This code is contributed by rameshtravel07.</script>
Output :
5 1 3 2 4
Time complexity: O(N)
Auxiliary Space: O(1)

This article is contributed by Ayush Jauhari.
Greatest odd factor of an even number - GeeksforGeeks | 05 Aug, 2021
Given an even number N, the task is to find the greatest possible odd factor of N.
Examples:
Input: N = 8642 Output: 4321 Explanation: Here, factors of 8642 are {1, 8642, 2, 4321, 29, 298, 58, 149} in which odd factors are {1, 4321, 29, 149} and the greatest odd factor among all odd factors is 4321.
Input: N = 100 Output: 25 Explanation: Here, factors of 100 are {1, 100, 2, 50, 4, 25, 5, 20, 10} in which odd factors are {1, 25, 5} and the greatest odd factor among all odd factors is 25.
Naive Approach: The naive approach is to find all the factors of N and then select the greatest odd factor from them.
Time Complexity: O(sqrt(N))
Efficient Approach: The efficient approach for this problem is to observe that every even number N can be represented as:
N = 2i*odd_number
Therefore to get the largest odd number we need to divide the given number N by 2 until N becomes an odd number.
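This observation can be sketched directly as a halving loop before looking at the power-of-2 implementation below (the function name here is mine, not from the article):

```python
def greatest_odd_factor(n: int) -> int:
    # Divide out factors of 2 until the remainder is odd;
    # what is left is the greatest odd factor of n
    while n % 2 == 0:
        n //= 2
    return n

print(greatest_odd_factor(8642))  # 4321
print(greatest_odd_factor(100))   # 25
```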
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find the greatest odd factor
int greatestOddFactor(int n)
{
    // Highest power of 2 that can divide n
    int pow_2 = (int)log2(n);

    // Iterate over the powers of 2
    for (int i = 1; i <= pow_2; i++) {

        // fac_2 = pow(2, i)
        int fac_2 = 1 << i;

        if (n % fac_2 == 0) {

            // If the quotient is odd,
            // it is the greatest odd factor
            if ((n / fac_2) % 2 == 1)
                return n / fac_2;
        }
    }

    // n itself is odd (unreached for even input)
    return n;
}

// Driver Code
int main()
{
    // Given Number
    int N = 8642;

    // Function Call
    cout << greatestOddFactor(N);
    return 0;
}

// This code is contributed by Amit Katiyar
// Java program for the above approachclass GFG{ // Function to print greatest odd factorpublic static int greatestOddFactor(int n){ int pow_2 = (int)(Math.log(n)); // Initialize i with 1 int i = 1; // Iterate till i <= pow_2 while (i <= pow_2) { // Find the pow(2, i) int fac_2 = (2 * i); if (n % fac_2 == 0) { // If factor is odd, then // print the number and break if ((n / fac_2) % 2 == 1) { return (n / fac_2); } } i += 1; } return 0;} // Driver codepublic static void main(String[] args){ // Given Number int N = 8642; // Function Call System.out.println(greatestOddFactor(N));}} // This code is contributed by divyeshrabadiya07
# Python3 program for the above approach

# importing Maths library
import math

# Function to print greatest odd factor
def greatestOddFactor(n):
    pow_2 = int(math.log(n, 2))

    # Initialize i with 1
    i = 1

    # Iterate till i <= pow_2
    while i <= pow_2:

        # find the pow(2, i)
        fac_2 = (2**i)
        if (n % fac_2 == 0):

            # If factor is odd, then print the
            # number and break
            if ((n // fac_2) % 2 == 1):
                print(n // fac_2)
                break
        i += 1

# Driver Code

# Given Number
N = 8642

# Function Call
greatestOddFactor(N)
// C# program for the above approachusing System; class GFG{ // Function to print greatest odd factorpublic static int greatestOddFactor(int n){ int pow_2 = (int)(Math.Log(n)); // Initialize i with 1 int i = 1; // Iterate till i <= pow_2 while (i <= pow_2) { // Find the pow(2, i) int fac_2 = (2 * i); if (n % fac_2 == 0) { // If factor is odd, then // print the number and break if ((n / fac_2) % 2 == 1) { return (n / fac_2); } } i += 1; } return 0;} // Driver codepublic static void Main(String[] args){ // Given number int N = 8642; // Function call Console.WriteLine(greatestOddFactor(N));}} // This code is contributed by gauravrajput1
<script> // JavaScript program for the above approach // Function to print greatest odd factorfunction greatestOddFactor(n){ let pow_2 = (Math.log(n)); // Initialize i with 1 let i = 1; // Iterate till i <= pow_2 while (i <= pow_2) { // Find the pow(2, i) let fac_2 = (2 * i); if (n % fac_2 == 0) { // If factor is odd, then // print the number and break if ((n / fac_2) % 2 == 1) { return (n / fac_2); } } i += 1; } return 0;} // Driver Code // Given Number let N = 8642; // Function Call document.write(greatestOddFactor(N)); ; </script>
4321
Time Complexity: O(log2(N))